00:00:00.000  Started by upstream project "autotest-per-patch" build number 132768
00:00:00.000  originally caused by:
00:00:00.000   Started by user sys_sgci
00:00:00.054  Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.055  The recommended git tool is: git
00:00:00.055  using credential 00000000-0000-0000-0000-000000000002
00:00:00.058   > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.075  Fetching changes from the remote Git repository
00:00:00.078   > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.101  Using shallow fetch with depth 1
00:00:00.101  Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.101   > git --version # timeout=10
00:00:00.131   > git --version # 'git version 2.39.2'
00:00:00.131  using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.158  Setting http proxy: proxy-dmz.intel.com:911
00:00:00.158   > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.401   > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.412   > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.425  Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:04.425   > git config core.sparsecheckout # timeout=10
00:00:04.435   > git read-tree -mu HEAD # timeout=10
00:00:04.450   > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:04.472  Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:04.472   > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.577  [Pipeline] Start of Pipeline
00:00:04.591  [Pipeline] library
00:00:04.594  Loading library shm_lib@master
00:00:08.672  Library shm_lib@master is cached. Copying from home.
00:00:08.745  [Pipeline] node
00:00:08.865  Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:08.868  [Pipeline] {
00:00:08.879  [Pipeline] catchError
00:00:08.882  [Pipeline] {
00:00:08.901  [Pipeline] wrap
00:00:08.916  [Pipeline] {
00:00:08.926  [Pipeline] stage
00:00:08.928  [Pipeline] { (Prologue)
00:00:09.165  [Pipeline] sh
00:00:10.052  + logger -p user.info -t JENKINS-CI
00:00:10.086  [Pipeline] echo
00:00:10.087  Node: GP11
00:00:10.096  [Pipeline] sh
00:00:10.447  [Pipeline] setCustomBuildProperty
00:00:10.458  [Pipeline] echo
00:00:10.460  Cleanup processes
00:00:10.465  [Pipeline] sh
00:00:10.759  + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:10.759  27550 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:10.775  [Pipeline] sh
00:00:11.071  ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:11.071  ++ grep -v 'sudo pgrep'
00:00:11.071  ++ awk '{print $1}'
00:00:11.071  + sudo kill -9
00:00:11.071  + true
00:00:11.086  [Pipeline] cleanWs
00:00:11.096  [WS-CLEANUP] Deleting project workspace...
00:00:11.096  [WS-CLEANUP] Deferred wipeout is used...
00:00:11.109  [WS-CLEANUP] done
00:00:11.114  [Pipeline] setCustomBuildProperty
00:00:11.128  [Pipeline] sh
00:00:11.418  + sudo git config --global --replace-all safe.directory '*'
00:00:11.555  [Pipeline] httpRequest
00:00:13.455  [Pipeline] echo
00:00:13.457  Sorcerer 10.211.164.20 is alive
00:00:13.467  [Pipeline] retry
00:00:13.469  [Pipeline] {
00:00:13.482  [Pipeline] httpRequest
00:00:13.488  HttpMethod: GET
00:00:13.488  URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:13.490  Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:13.495  Response Code: HTTP/1.1 200 OK
00:00:13.495  Success: Status code 200 is in the accepted range: 200,404
00:00:13.495  Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:13.805  [Pipeline] }
00:00:13.822  [Pipeline] // retry
00:00:13.829  [Pipeline] sh
00:00:14.125  + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:14.145  [Pipeline] httpRequest
00:00:14.486  [Pipeline] echo
00:00:14.487  Sorcerer 10.211.164.101 is alive
00:00:14.494  [Pipeline] retry
00:00:14.496  [Pipeline] {
00:00:14.509  [Pipeline] httpRequest
00:00:14.514  HttpMethod: GET
00:00:14.515  URL: http://10.211.164.101/packages/spdk_c4269c6e2cd0445b86aa16195993e54ed2cad2dd.tar.gz
00:00:14.516  Sending request to url: http://10.211.164.101/packages/spdk_c4269c6e2cd0445b86aa16195993e54ed2cad2dd.tar.gz
00:00:14.519  Response Code: HTTP/1.1 404 Not Found
00:00:14.520  Success: Status code 404 is in the accepted range: 200,404
00:00:14.520  Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_c4269c6e2cd0445b86aa16195993e54ed2cad2dd.tar.gz
00:00:14.523  [Pipeline] }
00:00:14.542  [Pipeline] // retry
00:00:14.550  [Pipeline] sh
00:00:14.866  + rm -f spdk_c4269c6e2cd0445b86aa16195993e54ed2cad2dd.tar.gz
00:00:14.883  [Pipeline] retry
00:00:14.885  [Pipeline] {
00:00:14.905  [Pipeline] checkout
00:00:14.913  The recommended git tool is: NONE
00:00:16.723  using credential 00000000-0000-0000-0000-000000000002
00:00:16.732  Wiping out workspace first.
00:00:16.744  Cloning the remote Git repository
00:00:16.746  Honoring refspec on initial clone
00:00:16.762  Cloning repository https://review.spdk.io/gerrit/a/spdk/spdk
00:00:16.774   > git init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk # timeout=10
00:00:16.809  Using reference repository: /var/ci_repos/spdk_multi
00:00:16.810  Fetching upstream changes from https://review.spdk.io/gerrit/a/spdk/spdk
00:00:16.810   > git --version # timeout=10
00:00:16.813   > git --version # 'git version 2.45.2'
00:00:16.814  using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:16.820  Setting http proxy: proxy-dmz.intel.com:911
00:00:16.821   > git fetch --tags --force --progress -- https://review.spdk.io/gerrit/a/spdk/spdk refs/changes/04/25504/10 +refs/heads/master:refs/remotes/origin/master # timeout=10
00:00:38.150   > git config remote.origin.url https://review.spdk.io/gerrit/a/spdk/spdk # timeout=10
00:00:38.156   > git config --add remote.origin.fetch refs/changes/04/25504/10 # timeout=10
00:00:38.161   > git config --add remote.origin.fetch +refs/heads/master:refs/remotes/origin/master # timeout=10
00:00:38.619  Avoid second fetch
00:00:38.650  Checking out Revision c4269c6e2cd0445b86aa16195993e54ed2cad2dd (FETCH_HEAD)
00:00:39.154  Commit message: "lib/reduce: Support storing metadata on backing dev. (5 of 5, test cases)"
00:00:39.163  First time build. Skipping changelog.
00:00:38.623   > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:38.643   > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:38.659   > git config core.sparsecheckout # timeout=10
00:00:38.663   > git checkout -f c4269c6e2cd0445b86aa16195993e54ed2cad2dd # timeout=10
00:00:39.158   > git rev-list --no-walk 961c68b08c66ab95493bd99b2eb21fd28b63039e # timeout=10
00:00:39.169   > git remote # timeout=10
00:00:39.173   > git submodule init # timeout=10
00:00:39.233   > git submodule sync # timeout=10
00:00:39.280   > git config --get remote.origin.url # timeout=10
00:00:39.290   > git submodule init # timeout=10
00:00:39.336   > git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10
00:00:39.340   > git config --get submodule.dpdk.url # timeout=10
00:00:39.344   > git remote # timeout=10
00:00:39.349   > git config --get remote.origin.url # timeout=10
00:00:39.353   > git config -f .gitmodules --get submodule.dpdk.path # timeout=10
00:00:39.367   > git config --get submodule.intel-ipsec-mb.url # timeout=10
00:00:39.371   > git remote # timeout=10
00:00:39.376   > git config --get remote.origin.url # timeout=10
00:00:39.381   > git config -f .gitmodules --get submodule.intel-ipsec-mb.path # timeout=10
00:00:39.385   > git config --get submodule.isa-l.url # timeout=10
00:00:39.390   > git remote # timeout=10
00:00:39.394   > git config --get remote.origin.url # timeout=10
00:00:39.399   > git config -f .gitmodules --get submodule.isa-l.path # timeout=10
00:00:39.404   > git config --get submodule.ocf.url # timeout=10
00:00:39.409   > git remote # timeout=10
00:00:39.414   > git config --get remote.origin.url # timeout=10
00:00:39.419   > git config -f .gitmodules --get submodule.ocf.path # timeout=10
00:00:39.423   > git config --get submodule.libvfio-user.url # timeout=10
00:00:39.426   > git remote # timeout=10
00:00:39.430   > git config --get remote.origin.url # timeout=10
00:00:39.434   > git config -f .gitmodules --get submodule.libvfio-user.path # timeout=10
00:00:39.438   > git config --get submodule.xnvme.url # timeout=10
00:00:39.441   > git remote # timeout=10
00:00:39.445   > git config --get remote.origin.url # timeout=10
00:00:39.448   > git config -f .gitmodules --get submodule.xnvme.path # timeout=10
00:00:39.452   > git config --get submodule.isa-l-crypto.url # timeout=10
00:00:39.455   > git remote # timeout=10
00:00:39.459   > git config --get remote.origin.url # timeout=10
00:00:39.462   > git config -f .gitmodules --get submodule.isa-l-crypto.path # timeout=10
00:00:39.478  using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:39.478  using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:39.479  using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:39.479  using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:39.479  using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:39.479  using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:39.479  using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:39.499  Setting http proxy: proxy-dmz.intel.com:911
00:00:39.499  Setting http proxy: proxy-dmz.intel.com:911
00:00:39.499  Setting http proxy: proxy-dmz.intel.com:911
00:00:39.499   > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l-crypto # timeout=10
00:00:39.499   > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi libvfio-user # timeout=10
00:00:39.499  Setting http proxy: proxy-dmz.intel.com:911
00:00:39.499   > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi intel-ipsec-mb # timeout=10
00:00:39.500   > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi ocf # timeout=10
00:00:39.500  Setting http proxy: proxy-dmz.intel.com:911
00:00:39.500  Setting http proxy: proxy-dmz.intel.com:911
00:00:39.500   > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l # timeout=10
00:00:39.500   > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi dpdk # timeout=10
00:00:39.500  Setting http proxy: proxy-dmz.intel.com:911
00:00:39.500   > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi xnvme # timeout=10
00:00:51.157  [Pipeline] dir
00:00:51.158  Running in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:51.160  [Pipeline] {
00:00:51.175  [Pipeline] sh
00:00:51.471  ++ nproc
00:00:51.471  + threads=48
00:00:51.471  + git repack -a -d --threads=48
00:00:58.064  + git submodule foreach git repack -a -d --threads=48
00:00:58.064  Entering 'dpdk'
00:01:08.063  Entering 'intel-ipsec-mb'
00:01:08.063  Entering 'isa-l'
00:01:08.063  Entering 'isa-l-crypto'
00:01:08.063  Entering 'libvfio-user'
00:01:08.063  Entering 'ocf'
00:01:08.063  Entering 'xnvme'
00:01:08.638  + find .git -type f -name alternates -print -delete
00:01:08.638  .git/objects/info/alternates
00:01:08.638  .git/modules/libvfio-user/objects/info/alternates
00:01:08.638  .git/modules/isa-l-crypto/objects/info/alternates
00:01:08.638  .git/modules/intel-ipsec-mb/objects/info/alternates
00:01:08.638  .git/modules/ocf/objects/info/alternates
00:01:08.638  .git/modules/dpdk/objects/info/alternates
00:01:08.638  .git/modules/xnvme/objects/info/alternates
00:01:08.638  .git/modules/isa-l/objects/info/alternates
00:01:08.650  [Pipeline] }
00:01:08.668  [Pipeline] // dir
00:01:08.673  [Pipeline] }
00:01:08.689  [Pipeline] // retry
00:01:08.698  [Pipeline] sh
00:01:08.990  + hash pigz
00:01:08.990  + tar -cf spdk_c4269c6e2cd0445b86aa16195993e54ed2cad2dd.tar.gz -I pigz spdk
00:01:09.577  [Pipeline] retry
00:01:09.579  [Pipeline] {
00:01:09.594  [Pipeline] httpRequest
00:01:09.601  HttpMethod: PUT
00:01:09.602  URL: http://10.211.164.101/cgi-bin/sorcerer.py?group=packages&filename=spdk_c4269c6e2cd0445b86aa16195993e54ed2cad2dd.tar.gz
00:01:09.610  Sending request to url: http://10.211.164.101/cgi-bin/sorcerer.py?group=packages&filename=spdk_c4269c6e2cd0445b86aa16195993e54ed2cad2dd.tar.gz
00:01:12.235  Response Code: HTTP/1.1 200 OK
00:01:12.242  Success: Status code 200 is in the accepted range: 200
00:01:12.246  [Pipeline] }
00:01:12.265  [Pipeline] // retry
00:01:12.299  [Pipeline] echo
00:01:12.301  
00:01:12.301  Locking
00:01:12.301  Waited 0s for lock
00:01:12.301  File already exists: /storage/packages/spdk_c4269c6e2cd0445b86aa16195993e54ed2cad2dd.tar.gz
00:01:12.301  
00:01:12.306  [Pipeline] sh
00:01:12.608  + git -C spdk log --oneline -n5
00:01:12.608  c4269c6e2 lib/reduce: Support storing metadata on backing dev. (5 of 5, test cases)
00:01:12.608  75bc78f30 lib/reduce: Support storing metadata on backing dev. (4 of 5, data unmap with async metadata)
00:01:12.608  b67dc21ec lib/reduce: Support storing metadata on backing dev. (3 of 5, reload process)
00:01:12.608  c0f3f2d18 lib/reduce: Support storing metadata on backing dev. (2 of 5, data r/w with async metadata)
00:01:12.608  7ab149b9a lib/reduce: Support storing metadata on backing dev. (1 of 5, struct define and init process)
00:01:12.637  [Pipeline] }
00:01:12.650  [Pipeline] // stage
00:01:12.655  [Pipeline] stage
00:01:12.657  [Pipeline] { (Prepare)
00:01:12.666  [Pipeline] writeFile
00:01:12.676  [Pipeline] sh
00:01:12.996  + logger -p user.info -t JENKINS-CI
00:01:13.071  [Pipeline] sh
00:01:13.356  + logger -p user.info -t JENKINS-CI
00:01:13.367  [Pipeline] sh
00:01:13.642  + cat autorun-spdk.conf
00:01:13.642  SPDK_RUN_FUNCTIONAL_TEST=1
00:01:13.642  SPDK_TEST_NVMF=1
00:01:13.642  SPDK_TEST_NVME_CLI=1
00:01:13.642  SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:13.642  SPDK_TEST_NVMF_NICS=e810
00:01:13.642  SPDK_TEST_VFIOUSER=1
00:01:13.642  SPDK_RUN_UBSAN=1
00:01:13.642  NET_TYPE=phy
00:01:13.649  RUN_NIGHTLY=0
00:01:13.653  [Pipeline] readFile
00:01:13.680  [Pipeline] withEnv
00:01:13.682  [Pipeline] {
00:01:13.694  [Pipeline] sh
00:01:13.980  + set -ex
00:01:13.980  + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:13.980  + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:13.980  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:13.980  ++ SPDK_TEST_NVMF=1
00:01:13.980  ++ SPDK_TEST_NVME_CLI=1
00:01:13.980  ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:13.980  ++ SPDK_TEST_NVMF_NICS=e810
00:01:13.980  ++ SPDK_TEST_VFIOUSER=1
00:01:13.980  ++ SPDK_RUN_UBSAN=1
00:01:13.980  ++ NET_TYPE=phy
00:01:13.980  ++ RUN_NIGHTLY=0
00:01:13.980  + case $SPDK_TEST_NVMF_NICS in
00:01:13.980  + DRIVERS=ice
00:01:13.980  + [[ tcp == \r\d\m\a ]]
00:01:13.980  + [[ -n ice ]]
00:01:13.980  + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:13.980  rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:17.269  rmmod: ERROR: Module irdma is not currently loaded
00:01:17.269  rmmod: ERROR: Module i40iw is not currently loaded
00:01:17.269  rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:17.269  + true
00:01:17.269  + for D in $DRIVERS
00:01:17.269  + sudo modprobe ice
00:01:17.269  + exit 0
00:01:17.278  [Pipeline] }
00:01:17.287  [Pipeline] // withEnv
00:01:17.291  [Pipeline] }
00:01:17.299  [Pipeline] // stage
00:01:17.305  [Pipeline] catchError
00:01:17.306  [Pipeline] {
00:01:17.314  [Pipeline] timeout
00:01:17.315  Timeout set to expire in 1 hr 0 min
00:01:17.316  [Pipeline] {
00:01:17.325  [Pipeline] stage
00:01:17.327  [Pipeline] { (Tests)
00:01:17.339  [Pipeline] sh
00:01:17.627  + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:17.627  ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:17.627  + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:17.627  + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:17.627  + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:17.627  + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:17.627  + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:17.627  + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:17.627  + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:17.627  + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:17.627  + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:17.627  + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:17.627  + source /etc/os-release
00:01:17.627  ++ NAME='Fedora Linux'
00:01:17.627  ++ VERSION='39 (Cloud Edition)'
00:01:17.627  ++ ID=fedora
00:01:17.627  ++ VERSION_ID=39
00:01:17.627  ++ VERSION_CODENAME=
00:01:17.627  ++ PLATFORM_ID=platform:f39
00:01:17.627  ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:17.627  ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:17.627  ++ LOGO=fedora-logo-icon
00:01:17.627  ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:17.627  ++ HOME_URL=https://fedoraproject.org/
00:01:17.627  ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:17.627  ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:17.627  ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:17.627  ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:17.627  ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:17.627  ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:17.627  ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:17.627  ++ SUPPORT_END=2024-11-12
00:01:17.627  ++ VARIANT='Cloud Edition'
00:01:17.627  ++ VARIANT_ID=cloud
00:01:17.627  + uname -a
00:01:17.627  Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:17.627  + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:18.565  Hugepages
00:01:18.565  node     hugesize     free /  total
00:01:18.565  node0   1048576kB        0 /      0
00:01:18.565  node0      2048kB        0 /      0
00:01:18.565  node1   1048576kB        0 /      0
00:01:18.565  node1      2048kB        0 /      0
00:01:18.565  
00:01:18.565  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:01:18.565  I/OAT                     0000:00:04.0    8086   0e20   0       ioatdma          -          -
00:01:18.565  I/OAT                     0000:00:04.1    8086   0e21   0       ioatdma          -          -
00:01:18.565  I/OAT                     0000:00:04.2    8086   0e22   0       ioatdma          -          -
00:01:18.565  I/OAT                     0000:00:04.3    8086   0e23   0       ioatdma          -          -
00:01:18.565  I/OAT                     0000:00:04.4    8086   0e24   0       ioatdma          -          -
00:01:18.565  I/OAT                     0000:00:04.5    8086   0e25   0       ioatdma          -          -
00:01:18.565  I/OAT                     0000:00:04.6    8086   0e26   0       ioatdma          -          -
00:01:18.565  I/OAT                     0000:00:04.7    8086   0e27   0       ioatdma          -          -
00:01:18.565  I/OAT                     0000:80:04.0    8086   0e20   1       ioatdma          -          -
00:01:18.565  I/OAT                     0000:80:04.1    8086   0e21   1       ioatdma          -          -
00:01:18.565  I/OAT                     0000:80:04.2    8086   0e22   1       ioatdma          -          -
00:01:18.565  I/OAT                     0000:80:04.3    8086   0e23   1       ioatdma          -          -
00:01:18.565  I/OAT                     0000:80:04.4    8086   0e24   1       ioatdma          -          -
00:01:18.565  I/OAT                     0000:80:04.5    8086   0e25   1       ioatdma          -          -
00:01:18.565  I/OAT                     0000:80:04.6    8086   0e26   1       ioatdma          -          -
00:01:18.565  I/OAT                     0000:80:04.7    8086   0e27   1       ioatdma          -          -
00:01:18.565  NVMe                      0000:88:00.0    8086   0a54   1       nvme             nvme0      nvme0n1
00:01:18.565  + rm -f /tmp/spdk-ld-path
00:01:18.565  + source autorun-spdk.conf
00:01:18.565  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:18.565  ++ SPDK_TEST_NVMF=1
00:01:18.565  ++ SPDK_TEST_NVME_CLI=1
00:01:18.565  ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:18.565  ++ SPDK_TEST_NVMF_NICS=e810
00:01:18.565  ++ SPDK_TEST_VFIOUSER=1
00:01:18.565  ++ SPDK_RUN_UBSAN=1
00:01:18.565  ++ NET_TYPE=phy
00:01:18.565  ++ RUN_NIGHTLY=0
00:01:18.565  + ((  SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1  ))
00:01:18.825  + [[ -n '' ]]
00:01:18.825  + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:18.825  + for M in /var/spdk/build-*-manifest.txt
00:01:18.825  + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:18.825  + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:18.825  + for M in /var/spdk/build-*-manifest.txt
00:01:18.825  + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:18.825  + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:18.825  + for M in /var/spdk/build-*-manifest.txt
00:01:18.825  + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:18.825  + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:18.825  ++ uname
00:01:18.825  + [[ Linux == \L\i\n\u\x ]]
00:01:18.825  + sudo dmesg -T
00:01:18.825  + sudo dmesg --clear
00:01:18.825  + dmesg_pid=30090
00:01:18.825  + [[ Fedora Linux == FreeBSD ]]
00:01:18.825  + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:18.825  + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:18.825  + sudo dmesg -Tw
00:01:18.825  + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:18.825  + [[ -x /usr/src/fio-static/fio ]]
00:01:18.825  + export FIO_BIN=/usr/src/fio-static/fio
00:01:18.825  + FIO_BIN=/usr/src/fio-static/fio
00:01:18.825  + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:18.825  + [[ ! -v VFIO_QEMU_BIN ]]
00:01:18.825  + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:18.825  + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:18.825  + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:18.825  + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:18.825  + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:18.825  + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:18.825  + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:18.825    03:51:47  -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:18.825   03:51:47  -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:18.825    03:51:47  -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:18.825    03:51:47  -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:01:18.825    03:51:47  -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:01:18.825    03:51:47  -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:18.825    03:51:47  -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:01:18.825    03:51:47  -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:01:18.825    03:51:47  -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:01:18.825    03:51:47  -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:01:18.825    03:51:47  -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:01:18.825   03:51:47  -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:18.825   03:51:47  -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:18.825     03:51:47  -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:18.825    03:51:47  -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:18.825     03:51:47  -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:18.825     03:51:47  -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:18.825     03:51:47  -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:18.825     03:51:47  -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:18.825      03:51:47  -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:18.825      03:51:47  -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:18.825      03:51:47  -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:18.825      03:51:47  -- paths/export.sh@5 -- $ export PATH
00:01:18.825      03:51:47  -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:18.825    03:51:47  -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:18.825      03:51:47  -- common/autobuild_common.sh@493 -- $ date +%s
00:01:18.825     03:51:47  -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733712707.XXXXXX
00:01:18.825    03:51:47  -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733712707.HHPJBi
00:01:18.825    03:51:47  -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:18.825    03:51:47  -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:18.825    03:51:47  -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:18.825    03:51:47  -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:18.825    03:51:47  -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:18.825     03:51:47  -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:18.825     03:51:47  -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:18.825     03:51:47  -- common/autotest_common.sh@10 -- $ set +x
00:01:18.825    03:51:47  -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:18.825    03:51:47  -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:18.825    03:51:47  -- pm/common@17 -- $ local monitor
00:01:18.825    03:51:47  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:18.825    03:51:47  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:18.825    03:51:47  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:18.825     03:51:47  -- pm/common@21 -- $ date +%s
00:01:18.825    03:51:47  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:18.825    03:51:47  -- pm/common@25 -- $ sleep 1
00:01:18.825     03:51:47  -- pm/common@21 -- $ date +%s
00:01:18.825     03:51:47  -- pm/common@21 -- $ date +%s
00:01:18.825     03:51:47  -- pm/common@21 -- $ date +%s
00:01:18.825    03:51:47  -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733712707
00:01:18.825    03:51:47  -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733712707
00:01:18.825    03:51:47  -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733712707
00:01:18.826    03:51:47  -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733712707
00:01:18.826  Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733712707_collect-vmstat.pm.log
00:01:18.826  Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733712707_collect-cpu-load.pm.log
00:01:18.826  Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733712707_collect-cpu-temp.pm.log
00:01:18.826  Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733712707_collect-bmc-pm.bmc.pm.log
00:01:20.204    03:51:48  -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:20.204   03:51:48  -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:20.204   03:51:48  -- spdk/autobuild.sh@12 -- $ umask 022
00:01:20.204   03:51:48  -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:20.204   03:51:48  -- spdk/autobuild.sh@16 -- $ date -u
00:01:20.204  Mon Dec  9 02:51:48 AM UTC 2024
00:01:20.204   03:51:48  -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:20.204  v25.01-pre-316-gc4269c6e2
00:01:20.204   03:51:48  -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:20.204   03:51:48  -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:20.204   03:51:48  -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:20.204   03:51:48  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:20.204   03:51:48  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:20.204   03:51:48  -- common/autotest_common.sh@10 -- $ set +x
00:01:20.204  ************************************
00:01:20.204  START TEST ubsan
00:01:20.204  ************************************
00:01:20.204   03:51:48 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:20.204  using ubsan
00:01:20.204  
00:01:20.204  real	0m0.000s
00:01:20.204  user	0m0.000s
00:01:20.204  sys	0m0.000s
00:01:20.204   03:51:48 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:20.204   03:51:48 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:20.204  ************************************
00:01:20.204  END TEST ubsan
00:01:20.204  ************************************
00:01:20.204   03:51:48  -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:20.204   03:51:48  -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:20.204   03:51:48  -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:20.204   03:51:48  -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:20.204   03:51:48  -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:20.204   03:51:48  -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:20.204   03:51:48  -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:20.204   03:51:48  -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:20.204   03:51:48  -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:20.464  Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:20.464  Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:21.402  Using 'verbs' RDMA provider
00:01:34.563  Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:44.541  Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:44.541  Creating mk/config.mk...done.
00:01:44.541  Creating mk/cc.flags.mk...done.
00:01:44.541  Type 'make' to build.
00:01:44.541   03:52:13  -- spdk/autobuild.sh@70 -- $ run_test make make -j48
00:01:44.541   03:52:13  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:44.541   03:52:13  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:44.541   03:52:13  -- common/autotest_common.sh@10 -- $ set +x
00:01:44.541  ************************************
00:01:44.541  START TEST make
00:01:44.541  ************************************
00:01:44.541   03:52:13 make -- common/autotest_common.sh@1129 -- $ make -j48
00:01:44.801  make[1]: Nothing to be done for 'all'.
00:01:47.372  The Meson build system
00:01:47.372  Version: 1.5.0
00:01:47.372  Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:47.372  Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:47.372  Build type: native build
00:01:47.372  Project name: libvfio-user
00:01:47.372  Project version: 0.0.1
00:01:47.372  C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:47.372  C linker for the host machine: cc ld.bfd 2.40-14
00:01:47.372  Host machine cpu family: x86_64
00:01:47.372  Host machine cpu: x86_64
00:01:47.372  Run-time dependency threads found: YES
00:01:47.372  Library dl found: YES
00:01:47.372  Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:47.372  Run-time dependency json-c found: YES 0.17
00:01:47.372  Run-time dependency cmocka found: YES 1.1.7
00:01:47.372  Program pytest-3 found: NO
00:01:47.372  Program flake8 found: NO
00:01:47.372  Program misspell-fixer found: NO
00:01:47.372  Program restructuredtext-lint found: NO
00:01:47.372  Program valgrind found: YES (/usr/bin/valgrind)
00:01:47.372  Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:01:47.372  Compiler for C supports arguments -Wmissing-declarations: YES 
00:01:47.372  Compiler for C supports arguments -Wwrite-strings: YES 
00:01:47.372  ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:47.372  Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:47.372  Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:47.372  ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:47.372  Build targets in project: 8
00:01:47.372  WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:47.372   * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:47.372  
00:01:47.372  libvfio-user 0.0.1
00:01:47.372  
00:01:47.372    User defined options
00:01:47.372      buildtype      : debug
00:01:47.372      default_library: shared
00:01:47.372      libdir         : /usr/local/lib
00:01:47.372  
00:01:47.372  Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:48.323  ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:48.323  [1/37] Compiling C object samples/null.p/null.c.o
00:01:48.589  [2/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:48.589  [3/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:48.589  [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:48.589  [5/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:48.589  [6/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:48.589  [7/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:48.589  [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:48.589  [9/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:48.589  [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:48.589  [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:48.589  [12/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:48.589  [13/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:48.589  [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:48.589  [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:48.589  [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:48.589  [17/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:48.589  [18/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:48.589  [19/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:48.589  [20/37] Compiling C object samples/server.p/server.c.o
00:01:48.589  [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:48.589  [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:48.589  [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:48.589  [24/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:48.589  [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:48.589  [26/37] Compiling C object samples/client.p/client.c.o
00:01:48.589  [27/37] Linking target samples/client
00:01:48.851  [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:48.851  [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:48.851  [30/37] Linking target lib/libvfio-user.so.0.0.1
00:01:48.851  [31/37] Linking target test/unit_tests
00:01:49.116  [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:49.116  [33/37] Linking target samples/lspci
00:01:49.116  [34/37] Linking target samples/shadow_ioeventfd_server
00:01:49.116  [35/37] Linking target samples/gpio-pci-idio-16
00:01:49.116  [36/37] Linking target samples/server
00:01:49.116  [37/37] Linking target samples/null
00:01:49.116  INFO: autodetecting backend as ninja
00:01:49.116  INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:49.382  DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:49.964  ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:49.964  ninja: no work to do.
00:01:54.159  The Meson build system
00:01:54.159  Version: 1.5.0
00:01:54.159  Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:54.159  Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:54.159  Build type: native build
00:01:54.159  Program cat found: YES (/usr/bin/cat)
00:01:54.159  Project name: DPDK
00:01:54.159  Project version: 24.03.0
00:01:54.159  C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:54.159  C linker for the host machine: cc ld.bfd 2.40-14
00:01:54.159  Host machine cpu family: x86_64
00:01:54.159  Host machine cpu: x86_64
00:01:54.159  Message: ## Building in Developer Mode ##
00:01:54.159  Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:54.159  Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:54.159  Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:54.159  Program python3 found: YES (/usr/bin/python3)
00:01:54.159  Program cat found: YES (/usr/bin/cat)
00:01:54.159  Compiler for C supports arguments -march=native: YES 
00:01:54.159  Checking for size of "void *" : 8 
00:01:54.159  Checking for size of "void *" : 8 (cached)
00:01:54.159  Compiler for C supports link arguments -Wl,--undefined-version: YES 
00:01:54.159  Library m found: YES
00:01:54.159  Library numa found: YES
00:01:54.159  Has header "numaif.h" : YES 
00:01:54.159  Library fdt found: NO
00:01:54.159  Library execinfo found: NO
00:01:54.159  Has header "execinfo.h" : YES 
00:01:54.159  Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:54.159  Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:54.159  Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:54.159  Run-time dependency jansson found: NO (tried pkgconfig)
00:01:54.159  Run-time dependency openssl found: YES 3.1.1
00:01:54.159  Run-time dependency libpcap found: YES 1.10.4
00:01:54.159  Has header "pcap.h" with dependency libpcap: YES 
00:01:54.159  Compiler for C supports arguments -Wcast-qual: YES 
00:01:54.159  Compiler for C supports arguments -Wdeprecated: YES 
00:01:54.159  Compiler for C supports arguments -Wformat: YES 
00:01:54.159  Compiler for C supports arguments -Wformat-nonliteral: NO 
00:01:54.159  Compiler for C supports arguments -Wformat-security: NO 
00:01:54.159  Compiler for C supports arguments -Wmissing-declarations: YES 
00:01:54.159  Compiler for C supports arguments -Wmissing-prototypes: YES 
00:01:54.159  Compiler for C supports arguments -Wnested-externs: YES 
00:01:54.159  Compiler for C supports arguments -Wold-style-definition: YES 
00:01:54.159  Compiler for C supports arguments -Wpointer-arith: YES 
00:01:54.159  Compiler for C supports arguments -Wsign-compare: YES 
00:01:54.159  Compiler for C supports arguments -Wstrict-prototypes: YES 
00:01:54.159  Compiler for C supports arguments -Wundef: YES 
00:01:54.159  Compiler for C supports arguments -Wwrite-strings: YES 
00:01:54.159  Compiler for C supports arguments -Wno-address-of-packed-member: YES 
00:01:54.159  Compiler for C supports arguments -Wno-packed-not-aligned: YES 
00:01:54.159  Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:01:54.159  Compiler for C supports arguments -Wno-zero-length-bounds: YES 
00:01:54.159  Program objdump found: YES (/usr/bin/objdump)
00:01:54.159  Compiler for C supports arguments -mavx512f: YES 
00:01:54.159  Checking if "AVX512 checking" compiles: YES 
00:01:54.159  Fetching value of define "__SSE4_2__" : 1 
00:01:54.159  Fetching value of define "__AES__" : 1 
00:01:54.159  Fetching value of define "__AVX__" : 1 
00:01:54.159  Fetching value of define "__AVX2__" : (undefined) 
00:01:54.159  Fetching value of define "__AVX512BW__" : (undefined) 
00:01:54.159  Fetching value of define "__AVX512CD__" : (undefined) 
00:01:54.159  Fetching value of define "__AVX512DQ__" : (undefined) 
00:01:54.159  Fetching value of define "__AVX512F__" : (undefined) 
00:01:54.159  Fetching value of define "__AVX512VL__" : (undefined) 
00:01:54.159  Fetching value of define "__PCLMUL__" : 1 
00:01:54.159  Fetching value of define "__RDRND__" : 1 
00:01:54.159  Fetching value of define "__RDSEED__" : (undefined) 
00:01:54.159  Fetching value of define "__VPCLMULQDQ__" : (undefined) 
00:01:54.159  Fetching value of define "__znver1__" : (undefined) 
00:01:54.159  Fetching value of define "__znver2__" : (undefined) 
00:01:54.159  Fetching value of define "__znver3__" : (undefined) 
00:01:54.159  Fetching value of define "__znver4__" : (undefined) 
00:01:54.159  Compiler for C supports arguments -Wno-format-truncation: YES 
00:01:54.159  Message: lib/log: Defining dependency "log"
00:01:54.159  Message: lib/kvargs: Defining dependency "kvargs"
00:01:54.159  Message: lib/telemetry: Defining dependency "telemetry"
00:01:54.159  Checking for function "getentropy" : NO 
00:01:54.159  Message: lib/eal: Defining dependency "eal"
00:01:54.159  Message: lib/ring: Defining dependency "ring"
00:01:54.159  Message: lib/rcu: Defining dependency "rcu"
00:01:54.159  Message: lib/mempool: Defining dependency "mempool"
00:01:54.159  Message: lib/mbuf: Defining dependency "mbuf"
00:01:54.159  Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:54.159  Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:54.159  Compiler for C supports arguments -mpclmul: YES 
00:01:54.159  Compiler for C supports arguments -maes: YES 
00:01:54.159  Compiler for C supports arguments -mavx512f: YES (cached)
00:01:54.159  Compiler for C supports arguments -mavx512bw: YES 
00:01:54.159  Compiler for C supports arguments -mavx512dq: YES 
00:01:54.159  Compiler for C supports arguments -mavx512vl: YES 
00:01:54.159  Compiler for C supports arguments -mvpclmulqdq: YES 
00:01:54.159  Compiler for C supports arguments -mavx2: YES 
00:01:54.159  Compiler for C supports arguments -mavx: YES 
00:01:54.159  Message: lib/net: Defining dependency "net"
00:01:54.160  Message: lib/meter: Defining dependency "meter"
00:01:54.160  Message: lib/ethdev: Defining dependency "ethdev"
00:01:54.160  Message: lib/pci: Defining dependency "pci"
00:01:54.160  Message: lib/cmdline: Defining dependency "cmdline"
00:01:54.160  Message: lib/hash: Defining dependency "hash"
00:01:54.160  Message: lib/timer: Defining dependency "timer"
00:01:54.160  Message: lib/compressdev: Defining dependency "compressdev"
00:01:54.160  Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:54.160  Message: lib/dmadev: Defining dependency "dmadev"
00:01:54.160  Compiler for C supports arguments -Wno-cast-qual: YES 
00:01:54.160  Message: lib/power: Defining dependency "power"
00:01:54.160  Message: lib/reorder: Defining dependency "reorder"
00:01:54.160  Message: lib/security: Defining dependency "security"
00:01:54.160  Has header "linux/userfaultfd.h" : YES 
00:01:54.160  Has header "linux/vduse.h" : YES 
00:01:54.160  Message: lib/vhost: Defining dependency "vhost"
00:01:54.160  Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:54.160  Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:54.160  Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:54.160  Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:54.160  Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:54.160  Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:54.160  Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:54.160  Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:54.160  Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:54.160  Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:54.160  Program doxygen found: YES (/usr/local/bin/doxygen)
00:01:54.160  Configuring doxy-api-html.conf using configuration
00:01:54.160  Configuring doxy-api-man.conf using configuration
00:01:54.160  Program mandb found: YES (/usr/bin/mandb)
00:01:54.160  Program sphinx-build found: NO
00:01:54.160  Configuring rte_build_config.h using configuration
00:01:54.160  Message: 
00:01:54.160  =================
00:01:54.160  Applications Enabled
00:01:54.160  =================
00:01:54.160  
00:01:54.160  apps:
00:01:54.160  	
00:01:54.160  
00:01:54.160  Message: 
00:01:54.160  =================
00:01:54.160  Libraries Enabled
00:01:54.160  =================
00:01:54.160  
00:01:54.160  libs:
00:01:54.160  	log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 
00:01:54.160  	net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 
00:01:54.160  	cryptodev, dmadev, power, reorder, security, vhost, 
00:01:54.160  
00:01:54.160  Message: 
00:01:54.160  ===============
00:01:54.160  Drivers Enabled
00:01:54.160  ===============
00:01:54.160  
00:01:54.160  common:
00:01:54.160  	
00:01:54.160  bus:
00:01:54.160  	pci, vdev, 
00:01:54.160  mempool:
00:01:54.160  	ring, 
00:01:54.160  dma:
00:01:54.160  	
00:01:54.160  net:
00:01:54.160  	
00:01:54.160  crypto:
00:01:54.160  	
00:01:54.160  compress:
00:01:54.160  	
00:01:54.160  vdpa:
00:01:54.160  	
00:01:54.160  
00:01:54.160  Message: 
00:01:54.160  =================
00:01:54.160  Content Skipped
00:01:54.160  =================
00:01:54.160  
00:01:54.160  apps:
00:01:54.160  	dumpcap:	explicitly disabled via build config
00:01:54.160  	graph:	explicitly disabled via build config
00:01:54.160  	pdump:	explicitly disabled via build config
00:01:54.160  	proc-info:	explicitly disabled via build config
00:01:54.160  	test-acl:	explicitly disabled via build config
00:01:54.160  	test-bbdev:	explicitly disabled via build config
00:01:54.160  	test-cmdline:	explicitly disabled via build config
00:01:54.160  	test-compress-perf:	explicitly disabled via build config
00:01:54.160  	test-crypto-perf:	explicitly disabled via build config
00:01:54.160  	test-dma-perf:	explicitly disabled via build config
00:01:54.160  	test-eventdev:	explicitly disabled via build config
00:01:54.160  	test-fib:	explicitly disabled via build config
00:01:54.160  	test-flow-perf:	explicitly disabled via build config
00:01:54.160  	test-gpudev:	explicitly disabled via build config
00:01:54.160  	test-mldev:	explicitly disabled via build config
00:01:54.160  	test-pipeline:	explicitly disabled via build config
00:01:54.160  	test-pmd:	explicitly disabled via build config
00:01:54.160  	test-regex:	explicitly disabled via build config
00:01:54.160  	test-sad:	explicitly disabled via build config
00:01:54.160  	test-security-perf:	explicitly disabled via build config
00:01:54.160  	
00:01:54.160  libs:
00:01:54.160  	argparse:	explicitly disabled via build config
00:01:54.160  	metrics:	explicitly disabled via build config
00:01:54.160  	acl:	explicitly disabled via build config
00:01:54.160  	bbdev:	explicitly disabled via build config
00:01:54.160  	bitratestats:	explicitly disabled via build config
00:01:54.160  	bpf:	explicitly disabled via build config
00:01:54.160  	cfgfile:	explicitly disabled via build config
00:01:54.160  	distributor:	explicitly disabled via build config
00:01:54.160  	efd:	explicitly disabled via build config
00:01:54.160  	eventdev:	explicitly disabled via build config
00:01:54.160  	dispatcher:	explicitly disabled via build config
00:01:54.160  	gpudev:	explicitly disabled via build config
00:01:54.160  	gro:	explicitly disabled via build config
00:01:54.160  	gso:	explicitly disabled via build config
00:01:54.160  	ip_frag:	explicitly disabled via build config
00:01:54.160  	jobstats:	explicitly disabled via build config
00:01:54.160  	latencystats:	explicitly disabled via build config
00:01:54.160  	lpm:	explicitly disabled via build config
00:01:54.160  	member:	explicitly disabled via build config
00:01:54.160  	pcapng:	explicitly disabled via build config
00:01:54.160  	rawdev:	explicitly disabled via build config
00:01:54.160  	regexdev:	explicitly disabled via build config
00:01:54.160  	mldev:	explicitly disabled via build config
00:01:54.160  	rib:	explicitly disabled via build config
00:01:54.160  	sched:	explicitly disabled via build config
00:01:54.160  	stack:	explicitly disabled via build config
00:01:54.160  	ipsec:	explicitly disabled via build config
00:01:54.160  	pdcp:	explicitly disabled via build config
00:01:54.160  	fib:	explicitly disabled via build config
00:01:54.160  	port:	explicitly disabled via build config
00:01:54.160  	pdump:	explicitly disabled via build config
00:01:54.160  	table:	explicitly disabled via build config
00:01:54.160  	pipeline:	explicitly disabled via build config
00:01:54.160  	graph:	explicitly disabled via build config
00:01:54.160  	node:	explicitly disabled via build config
00:01:54.160  	
00:01:54.160  drivers:
00:01:54.160  	common/cpt:	not in enabled drivers build config
00:01:54.160  	common/dpaax:	not in enabled drivers build config
00:01:54.160  	common/iavf:	not in enabled drivers build config
00:01:54.160  	common/idpf:	not in enabled drivers build config
00:01:54.160  	common/ionic:	not in enabled drivers build config
00:01:54.160  	common/mvep:	not in enabled drivers build config
00:01:54.160  	common/octeontx:	not in enabled drivers build config
00:01:54.160  	bus/auxiliary:	not in enabled drivers build config
00:01:54.160  	bus/cdx:	not in enabled drivers build config
00:01:54.160  	bus/dpaa:	not in enabled drivers build config
00:01:54.160  	bus/fslmc:	not in enabled drivers build config
00:01:54.160  	bus/ifpga:	not in enabled drivers build config
00:01:54.160  	bus/platform:	not in enabled drivers build config
00:01:54.160  	bus/uacce:	not in enabled drivers build config
00:01:54.160  	bus/vmbus:	not in enabled drivers build config
00:01:54.160  	common/cnxk:	not in enabled drivers build config
00:01:54.160  	common/mlx5:	not in enabled drivers build config
00:01:54.160  	common/nfp:	not in enabled drivers build config
00:01:54.160  	common/nitrox:	not in enabled drivers build config
00:01:54.160  	common/qat:	not in enabled drivers build config
00:01:54.160  	common/sfc_efx:	not in enabled drivers build config
00:01:54.160  	mempool/bucket:	not in enabled drivers build config
00:01:54.160  	mempool/cnxk:	not in enabled drivers build config
00:01:54.160  	mempool/dpaa:	not in enabled drivers build config
00:01:54.160  	mempool/dpaa2:	not in enabled drivers build config
00:01:54.160  	mempool/octeontx:	not in enabled drivers build config
00:01:54.160  	mempool/stack:	not in enabled drivers build config
00:01:54.160  	dma/cnxk:	not in enabled drivers build config
00:01:54.160  	dma/dpaa:	not in enabled drivers build config
00:01:54.160  	dma/dpaa2:	not in enabled drivers build config
00:01:54.160  	dma/hisilicon:	not in enabled drivers build config
00:01:54.160  	dma/idxd:	not in enabled drivers build config
00:01:54.160  	dma/ioat:	not in enabled drivers build config
00:01:54.160  	dma/skeleton:	not in enabled drivers build config
00:01:54.160  	net/af_packet:	not in enabled drivers build config
00:01:54.160  	net/af_xdp:	not in enabled drivers build config
00:01:54.160  	net/ark:	not in enabled drivers build config
00:01:54.160  	net/atlantic:	not in enabled drivers build config
00:01:54.160  	net/avp:	not in enabled drivers build config
00:01:54.160  	net/axgbe:	not in enabled drivers build config
00:01:54.160  	net/bnx2x:	not in enabled drivers build config
00:01:54.160  	net/bnxt:	not in enabled drivers build config
00:01:54.160  	net/bonding:	not in enabled drivers build config
00:01:54.160  	net/cnxk:	not in enabled drivers build config
00:01:54.160  	net/cpfl:	not in enabled drivers build config
00:01:54.160  	net/cxgbe:	not in enabled drivers build config
00:01:54.160  	net/dpaa:	not in enabled drivers build config
00:01:54.160  	net/dpaa2:	not in enabled drivers build config
00:01:54.160  	net/e1000:	not in enabled drivers build config
00:01:54.160  	net/ena:	not in enabled drivers build config
00:01:54.160  	net/enetc:	not in enabled drivers build config
00:01:54.160  	net/enetfec:	not in enabled drivers build config
00:01:54.160  	net/enic:	not in enabled drivers build config
00:01:54.160  	net/failsafe:	not in enabled drivers build config
00:01:54.160  	net/fm10k:	not in enabled drivers build config
00:01:54.160  	net/gve:	not in enabled drivers build config
00:01:54.160  	net/hinic:	not in enabled drivers build config
00:01:54.160  	net/hns3:	not in enabled drivers build config
00:01:54.160  	net/i40e:	not in enabled drivers build config
00:01:54.160  	net/iavf:	not in enabled drivers build config
00:01:54.160  	net/ice:	not in enabled drivers build config
00:01:54.160  	net/idpf:	not in enabled drivers build config
00:01:54.160  	net/igc:	not in enabled drivers build config
00:01:54.160  	net/ionic:	not in enabled drivers build config
00:01:54.160  	net/ipn3ke:	not in enabled drivers build config
00:01:54.161  	net/ixgbe:	not in enabled drivers build config
00:01:54.161  	net/mana:	not in enabled drivers build config
00:01:54.161  	net/memif:	not in enabled drivers build config
00:01:54.161  	net/mlx4:	not in enabled drivers build config
00:01:54.161  	net/mlx5:	not in enabled drivers build config
00:01:54.161  	net/mvneta:	not in enabled drivers build config
00:01:54.161  	net/mvpp2:	not in enabled drivers build config
00:01:54.161  	net/netvsc:	not in enabled drivers build config
00:01:54.161  	net/nfb:	not in enabled drivers build config
00:01:54.161  	net/nfp:	not in enabled drivers build config
00:01:54.161  	net/ngbe:	not in enabled drivers build config
00:01:54.161  	net/null:	not in enabled drivers build config
00:01:54.161  	net/octeontx:	not in enabled drivers build config
00:01:54.161  	net/octeon_ep:	not in enabled drivers build config
00:01:54.161  	net/pcap:	not in enabled drivers build config
00:01:54.161  	net/pfe:	not in enabled drivers build config
00:01:54.161  	net/qede:	not in enabled drivers build config
00:01:54.161  	net/ring:	not in enabled drivers build config
00:01:54.161  	net/sfc:	not in enabled drivers build config
00:01:54.161  	net/softnic:	not in enabled drivers build config
00:01:54.161  	net/tap:	not in enabled drivers build config
00:01:54.161  	net/thunderx:	not in enabled drivers build config
00:01:54.161  	net/txgbe:	not in enabled drivers build config
00:01:54.161  	net/vdev_netvsc:	not in enabled drivers build config
00:01:54.161  	net/vhost:	not in enabled drivers build config
00:01:54.161  	net/virtio:	not in enabled drivers build config
00:01:54.161  	net/vmxnet3:	not in enabled drivers build config
00:01:54.161  	raw/*:	missing internal dependency, "rawdev"
00:01:54.161  	crypto/armv8:	not in enabled drivers build config
00:01:54.161  	crypto/bcmfs:	not in enabled drivers build config
00:01:54.161  	crypto/caam_jr:	not in enabled drivers build config
00:01:54.161  	crypto/ccp:	not in enabled drivers build config
00:01:54.161  	crypto/cnxk:	not in enabled drivers build config
00:01:54.161  	crypto/dpaa_sec:	not in enabled drivers build config
00:01:54.161  	crypto/dpaa2_sec:	not in enabled drivers build config
00:01:54.161  	crypto/ipsec_mb:	not in enabled drivers build config
00:01:54.161  	crypto/mlx5:	not in enabled drivers build config
00:01:54.161  	crypto/mvsam:	not in enabled drivers build config
00:01:54.161  	crypto/nitrox:	not in enabled drivers build config
00:01:54.161  	crypto/null:	not in enabled drivers build config
00:01:54.161  	crypto/octeontx:	not in enabled drivers build config
00:01:54.161  	crypto/openssl:	not in enabled drivers build config
00:01:54.161  	crypto/scheduler:	not in enabled drivers build config
00:01:54.161  	crypto/uadk:	not in enabled drivers build config
00:01:54.161  	crypto/virtio:	not in enabled drivers build config
00:01:54.161  	compress/isal:	not in enabled drivers build config
00:01:54.161  	compress/mlx5:	not in enabled drivers build config
00:01:54.161  	compress/nitrox:	not in enabled drivers build config
00:01:54.161  	compress/octeontx:	not in enabled drivers build config
00:01:54.161  	compress/zlib:	not in enabled drivers build config
00:01:54.161  	regex/*:	missing internal dependency, "regexdev"
00:01:54.161  	ml/*:	missing internal dependency, "mldev"
00:01:54.161  	vdpa/ifc:	not in enabled drivers build config
00:01:54.161  	vdpa/mlx5:	not in enabled drivers build config
00:01:54.161  	vdpa/nfp:	not in enabled drivers build config
00:01:54.161  	vdpa/sfc:	not in enabled drivers build config
00:01:54.161  	event/*:	missing internal dependency, "eventdev"
00:01:54.161  	baseband/*:	missing internal dependency, "bbdev"
00:01:54.161  	gpu/*:	missing internal dependency, "gpudev"
00:01:54.161  	
00:01:54.161  
00:01:54.421  Build targets in project: 85
00:01:54.421  
00:01:54.421  DPDK 24.03.0
00:01:54.421  
00:01:54.421    User defined options
00:01:54.421      buildtype          : debug
00:01:54.421      default_library    : shared
00:01:54.421      libdir             : lib
00:01:54.421      prefix             : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:54.421      c_args             : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 
00:01:54.421      c_link_args        : 
00:01:54.421      cpu_instruction_set: native
00:01:54.421      disable_apps       : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:01:54.421      disable_libs       : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:01:54.421      enable_docs        : false
00:01:54.421      enable_drivers     : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:01:54.421      enable_kmods       : false
00:01:54.421      max_lcores         : 128
00:01:54.421      tests              : false
00:01:54.421  
00:01:54.421  Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:54.998  ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:54.998  [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:54.998  [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:54.998  [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:54.998  [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:54.998  [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:54.998  [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:54.998  [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:54.998  [8/268] Linking static target lib/librte_kvargs.a
00:01:54.998  [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:54.998  [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:54.998  [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:54.998  [12/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:54.998  [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:54.998  [14/268] Linking static target lib/librte_log.a
00:01:54.998  [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:55.261  [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:55.838  [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:55.838  [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:55.838  [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:55.838  [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:55.838  [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:55.838  [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:55.838  [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:55.838  [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:55.838  [25/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:55.838  [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:55.838  [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:55.838  [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:55.838  [29/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:55.838  [30/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:55.838  [31/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:55.838  [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:55.838  [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:55.838  [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:55.838  [35/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:55.838  [36/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:55.838  [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:55.838  [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:55.838  [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:55.838  [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:55.838  [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:55.838  [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:55.838  [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:55.838  [44/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:55.838  [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:56.104  [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:56.104  [47/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:56.104  [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:56.104  [49/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:56.104  [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:56.104  [51/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:56.104  [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:56.105  [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:56.105  [54/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:56.105  [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:56.105  [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:56.105  [57/268] Linking static target lib/librte_telemetry.a
00:01:56.105  [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:56.105  [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:56.105  [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:56.105  [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:56.105  [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:56.105  [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:56.374  [64/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:56.374  [65/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:56.374  [66/268] Linking target lib/librte_log.so.24.1
00:01:56.374  [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:56.635  [68/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:56.635  [69/268] Linking static target lib/librte_pci.a
00:01:56.635  [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:56.635  [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:56.635  [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:56.635  [73/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:56.901  [74/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:56.901  [75/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:01:56.901  [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:56.901  [77/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:56.901  [78/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:56.901  [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:56.901  [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:56.901  [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:56.901  [82/268] Linking target lib/librte_kvargs.so.24.1
00:01:56.901  [83/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:56.901  [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:56.901  [85/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:56.901  [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:56.901  [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:56.901  [88/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:56.901  [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:56.901  [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:56.901  [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:56.901  [92/268] Linking static target lib/librte_meter.a
00:01:56.901  [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:56.901  [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:56.902  [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:56.902  [96/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:56.902  [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:56.902  [98/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:01:56.902  [99/268] Linking static target lib/librte_ring.a
00:01:56.902  [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:56.902  [101/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:56.902  [102/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:57.166  [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:57.166  [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:57.166  [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:57.166  [106/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:57.166  [107/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:57.166  [108/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:57.167  [109/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:57.167  [110/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:57.167  [111/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:57.167  [112/268] Linking static target lib/librte_rcu.a
00:01:57.167  [113/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:57.167  [114/268] Linking static target lib/librte_mempool.a
00:01:57.167  [115/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:01:57.167  [116/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:57.167  [117/268] Linking target lib/librte_telemetry.so.24.1
00:01:57.167  [118/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:57.167  [119/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:57.167  [120/268] Linking static target lib/librte_eal.a
00:01:57.167  [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:57.167  [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:57.167  [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:57.167  [124/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:57.167  [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:57.432  [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:57.432  [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:57.432  [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:57.432  [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:57.432  [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:57.432  [131/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:57.432  [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:57.432  [133/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:01:57.432  [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:01:57.432  [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:57.432  [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:57.432  [137/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:57.696  [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:57.696  [139/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:57.696  [140/268] Linking static target lib/librte_net.a
00:01:57.696  [141/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:57.696  [142/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:57.696  [143/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:57.696  [144/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:57.956  [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:57.956  [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:01:57.956  [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:57.956  [148/268] Linking static target lib/librte_cmdline.a
00:01:57.956  [149/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:57.956  [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:57.956  [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:57.956  [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:57.956  [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:57.956  [154/268] Linking static target lib/librte_timer.a
00:01:57.956  [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:58.214  [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:58.214  [157/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:01:58.214  [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:58.214  [159/268] Linking static target lib/librte_dmadev.a
00:01:58.214  [160/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:58.214  [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:01:58.214  [162/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:58.214  [163/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:58.214  [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:01:58.214  [165/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:58.214  [166/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:58.214  [167/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:01:58.214  [168/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:01:58.214  [169/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:01:58.472  [170/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:01:58.472  [171/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:01:58.472  [172/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:01:58.472  [173/268] Linking static target lib/librte_compressdev.a
00:01:58.472  [174/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:01:58.472  [175/268] Linking static target lib/librte_power.a
00:01:58.472  [176/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:01:58.472  [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:01:58.472  [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:01:58.472  [179/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:01:58.472  [180/268] Linking static target lib/librte_hash.a
00:01:58.472  [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:01:58.472  [182/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:01:58.472  [183/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:01:58.472  [184/268] Linking static target lib/librte_reorder.a
00:01:58.472  [185/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:01:58.730  [186/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:01:58.730  [187/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:58.730  [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:01:58.730  [189/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:58.730  [190/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:01:58.730  [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:01:58.730  [192/268] Linking static target lib/librte_mbuf.a
00:01:58.730  [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:01:58.730  [194/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:01:58.730  [195/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:01:58.730  [196/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:01:58.730  [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:01:58.989  [198/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:01:58.989  [199/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:01:58.989  [200/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:01:58.989  [201/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:58.989  [202/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:01:58.989  [203/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:58.989  [204/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:58.989  [205/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:01:58.989  [206/268] Linking static target drivers/librte_bus_vdev.a
00:01:58.989  [207/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:58.989  [208/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:58.989  [209/268] Linking static target drivers/librte_bus_pci.a
00:01:58.989  [210/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:01:58.989  [211/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:01:58.989  [212/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:01:58.989  [213/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:01:58.989  [214/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:58.989  [215/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:58.989  [216/268] Linking static target drivers/librte_mempool_ring.a
00:01:59.247  [217/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:01:59.247  [218/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:01:59.247  [219/268] Linking static target lib/librte_security.a
00:01:59.247  [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:59.247  [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:01:59.247  [222/268] Linking static target lib/librte_ethdev.a
00:01:59.247  [223/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:01:59.247  [224/268] Linking static target lib/librte_cryptodev.a
00:01:59.505  [225/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:59.505  [226/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:00.440  [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:01.814  [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:03.188  [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:03.188  [230/268] Linking target lib/librte_eal.so.24.1
00:02:03.447  [231/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:03.447  [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:02:03.447  [233/268] Linking target lib/librte_ring.so.24.1
00:02:03.447  [234/268] Linking target lib/librte_meter.so.24.1
00:02:03.447  [235/268] Linking target lib/librte_pci.so.24.1
00:02:03.447  [236/268] Linking target lib/librte_timer.so.24.1
00:02:03.447  [237/268] Linking target lib/librte_dmadev.so.24.1
00:02:03.447  [238/268] Linking target drivers/librte_bus_vdev.so.24.1
00:02:03.447  [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:02:03.447  [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:02:03.447  [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:02:03.447  [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:02:03.447  [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:02:03.705  [244/268] Linking target lib/librte_rcu.so.24.1
00:02:03.705  [245/268] Linking target lib/librte_mempool.so.24.1
00:02:03.705  [246/268] Linking target drivers/librte_bus_pci.so.24.1
00:02:03.705  [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:02:03.705  [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:02:03.705  [249/268] Linking target drivers/librte_mempool_ring.so.24.1
00:02:03.705  [250/268] Linking target lib/librte_mbuf.so.24.1
00:02:03.964  [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:02:03.964  [252/268] Linking target lib/librte_reorder.so.24.1
00:02:03.964  [253/268] Linking target lib/librte_compressdev.so.24.1
00:02:03.964  [254/268] Linking target lib/librte_net.so.24.1
00:02:03.964  [255/268] Linking target lib/librte_cryptodev.so.24.1
00:02:03.964  [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:02:03.964  [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:02:03.964  [258/268] Linking target lib/librte_hash.so.24.1
00:02:03.964  [259/268] Linking target lib/librte_cmdline.so.24.1
00:02:03.964  [260/268] Linking target lib/librte_security.so.24.1
00:02:04.221  [261/268] Linking target lib/librte_ethdev.so.24.1
00:02:04.221  [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:02:04.221  [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:02:04.221  [264/268] Linking target lib/librte_power.so.24.1
00:02:08.406  [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:08.406  [266/268] Linking static target lib/librte_vhost.a
00:02:08.673  [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.673  [268/268] Linking target lib/librte_vhost.so.24.1
00:02:08.673  INFO: autodetecting backend as ninja
00:02:08.673  INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48
00:02:30.601    CC lib/ut_mock/mock.o
00:02:30.601    CC lib/ut/ut.o
00:02:30.601    CC lib/log/log.o
00:02:30.601    CC lib/log/log_flags.o
00:02:30.601    CC lib/log/log_deprecated.o
00:02:30.601    LIB libspdk_ut.a
00:02:30.601    LIB libspdk_ut_mock.a
00:02:30.601    LIB libspdk_log.a
00:02:30.601    SO libspdk_ut.so.2.0
00:02:30.601    SO libspdk_ut_mock.so.6.0
00:02:30.601    SO libspdk_log.so.7.1
00:02:30.601    SYMLINK libspdk_ut_mock.so
00:02:30.601    SYMLINK libspdk_ut.so
00:02:30.601    SYMLINK libspdk_log.so
00:02:30.601    CC lib/ioat/ioat.o
00:02:30.601    CC lib/dma/dma.o
00:02:30.601    CXX lib/trace_parser/trace.o
00:02:30.601    CC lib/util/base64.o
00:02:30.602    CC lib/util/bit_array.o
00:02:30.602    CC lib/util/cpuset.o
00:02:30.602    CC lib/util/crc16.o
00:02:30.602    CC lib/util/crc32.o
00:02:30.602    CC lib/util/crc32c.o
00:02:30.602    CC lib/util/crc32_ieee.o
00:02:30.602    CC lib/util/crc64.o
00:02:30.602    CC lib/util/dif.o
00:02:30.602    CC lib/util/fd.o
00:02:30.602    CC lib/util/fd_group.o
00:02:30.602    CC lib/util/file.o
00:02:30.602    CC lib/util/hexlify.o
00:02:30.602    CC lib/util/iov.o
00:02:30.602    CC lib/util/math.o
00:02:30.602    CC lib/util/net.o
00:02:30.602    CC lib/util/pipe.o
00:02:30.602    CC lib/util/string.o
00:02:30.602    CC lib/util/strerror_tls.o
00:02:30.602    CC lib/util/uuid.o
00:02:30.602    CC lib/util/xor.o
00:02:30.602    CC lib/util/zipf.o
00:02:30.602    CC lib/util/md5.o
00:02:30.602    CC lib/vfio_user/host/vfio_user.o
00:02:30.602    CC lib/vfio_user/host/vfio_user_pci.o
00:02:30.602    LIB libspdk_dma.a
00:02:30.602    SO libspdk_dma.so.5.0
00:02:30.602    SYMLINK libspdk_dma.so
00:02:30.602    LIB libspdk_ioat.a
00:02:30.602    SO libspdk_ioat.so.7.0
00:02:30.602    SYMLINK libspdk_ioat.so
00:02:30.602    LIB libspdk_vfio_user.a
00:02:30.602    SO libspdk_vfio_user.so.5.0
00:02:30.602    SYMLINK libspdk_vfio_user.so
00:02:30.602    LIB libspdk_util.a
00:02:30.602    SO libspdk_util.so.10.1
00:02:30.602    SYMLINK libspdk_util.so
00:02:30.602    CC lib/conf/conf.o
00:02:30.602    CC lib/rdma_utils/rdma_utils.o
00:02:30.602    CC lib/json/json_parse.o
00:02:30.602    CC lib/idxd/idxd.o
00:02:30.602    CC lib/json/json_util.o
00:02:30.602    CC lib/env_dpdk/env.o
00:02:30.602    CC lib/json/json_write.o
00:02:30.602    CC lib/idxd/idxd_user.o
00:02:30.602    CC lib/vmd/vmd.o
00:02:30.602    CC lib/env_dpdk/memory.o
00:02:30.602    CC lib/idxd/idxd_kernel.o
00:02:30.602    CC lib/vmd/led.o
00:02:30.602    CC lib/env_dpdk/pci.o
00:02:30.602    CC lib/env_dpdk/init.o
00:02:30.602    CC lib/env_dpdk/threads.o
00:02:30.602    CC lib/env_dpdk/pci_ioat.o
00:02:30.602    CC lib/env_dpdk/pci_virtio.o
00:02:30.602    CC lib/env_dpdk/pci_vmd.o
00:02:30.602    CC lib/env_dpdk/pci_idxd.o
00:02:30.602    CC lib/env_dpdk/pci_event.o
00:02:30.602    CC lib/env_dpdk/sigbus_handler.o
00:02:30.602    CC lib/env_dpdk/pci_dpdk.o
00:02:30.602    CC lib/env_dpdk/pci_dpdk_2211.o
00:02:30.602    CC lib/env_dpdk/pci_dpdk_2207.o
00:02:30.602    LIB libspdk_conf.a
00:02:30.602    SO libspdk_conf.so.6.0
00:02:30.602    LIB libspdk_rdma_utils.a
00:02:30.602    LIB libspdk_json.a
00:02:30.602    SYMLINK libspdk_conf.so
00:02:30.602    SO libspdk_rdma_utils.so.1.0
00:02:30.602    SO libspdk_json.so.6.0
00:02:30.602    SYMLINK libspdk_rdma_utils.so
00:02:30.602    SYMLINK libspdk_json.so
00:02:30.602    CC lib/rdma_provider/common.o
00:02:30.602    CC lib/rdma_provider/rdma_provider_verbs.o
00:02:30.602    CC lib/jsonrpc/jsonrpc_server.o
00:02:30.602    CC lib/jsonrpc/jsonrpc_server_tcp.o
00:02:30.602    CC lib/jsonrpc/jsonrpc_client.o
00:02:30.602    CC lib/jsonrpc/jsonrpc_client_tcp.o
00:02:30.602    LIB libspdk_idxd.a
00:02:30.602    SO libspdk_idxd.so.12.1
00:02:30.602    SYMLINK libspdk_idxd.so
00:02:30.602    LIB libspdk_vmd.a
00:02:30.602    LIB libspdk_rdma_provider.a
00:02:30.602    SO libspdk_vmd.so.6.0
00:02:30.602    SO libspdk_rdma_provider.so.7.0
00:02:30.602    LIB libspdk_jsonrpc.a
00:02:30.602    SYMLINK libspdk_vmd.so
00:02:30.602    SO libspdk_jsonrpc.so.6.0
00:02:30.602    SYMLINK libspdk_rdma_provider.so
00:02:30.602    SYMLINK libspdk_jsonrpc.so
00:02:30.602    LIB libspdk_trace_parser.a
00:02:30.602    SO libspdk_trace_parser.so.6.0
00:02:30.602    SYMLINK libspdk_trace_parser.so
00:02:30.602    CC lib/rpc/rpc.o
00:02:30.860    LIB libspdk_rpc.a
00:02:30.860    SO libspdk_rpc.so.6.0
00:02:30.860    SYMLINK libspdk_rpc.so
00:02:31.118    CC lib/keyring/keyring.o
00:02:31.118    CC lib/keyring/keyring_rpc.o
00:02:31.118    CC lib/trace/trace.o
00:02:31.118    CC lib/notify/notify.o
00:02:31.118    CC lib/trace/trace_flags.o
00:02:31.118    CC lib/notify/notify_rpc.o
00:02:31.118    CC lib/trace/trace_rpc.o
00:02:31.377    LIB libspdk_notify.a
00:02:31.377    SO libspdk_notify.so.6.0
00:02:31.377    SYMLINK libspdk_notify.so
00:02:31.377    LIB libspdk_keyring.a
00:02:31.377    LIB libspdk_trace.a
00:02:31.377    SO libspdk_keyring.so.2.0
00:02:31.377    SO libspdk_trace.so.11.0
00:02:31.377    SYMLINK libspdk_keyring.so
00:02:31.377    SYMLINK libspdk_trace.so
00:02:31.635    CC lib/thread/thread.o
00:02:31.635    CC lib/thread/iobuf.o
00:02:31.635    CC lib/sock/sock.o
00:02:31.635    CC lib/sock/sock_rpc.o
00:02:31.635    LIB libspdk_env_dpdk.a
00:02:31.635    SO libspdk_env_dpdk.so.15.1
00:02:31.893    SYMLINK libspdk_env_dpdk.so
00:02:32.152    LIB libspdk_sock.a
00:02:32.152    SO libspdk_sock.so.10.0
00:02:32.152    SYMLINK libspdk_sock.so
00:02:32.152    CC lib/nvme/nvme_ctrlr_cmd.o
00:02:32.152    CC lib/nvme/nvme_ctrlr.o
00:02:32.411    CC lib/nvme/nvme_fabric.o
00:02:32.411    CC lib/nvme/nvme_ns_cmd.o
00:02:32.411    CC lib/nvme/nvme_ns.o
00:02:32.411    CC lib/nvme/nvme_pcie_common.o
00:02:32.411    CC lib/nvme/nvme_pcie.o
00:02:32.411    CC lib/nvme/nvme_qpair.o
00:02:32.411    CC lib/nvme/nvme.o
00:02:32.411    CC lib/nvme/nvme_quirks.o
00:02:32.411    CC lib/nvme/nvme_transport.o
00:02:32.411    CC lib/nvme/nvme_discovery.o
00:02:32.411    CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:02:32.411    CC lib/nvme/nvme_ns_ocssd_cmd.o
00:02:32.411    CC lib/nvme/nvme_tcp.o
00:02:32.411    CC lib/nvme/nvme_opal.o
00:02:32.411    CC lib/nvme/nvme_io_msg.o
00:02:32.411    CC lib/nvme/nvme_poll_group.o
00:02:32.411    CC lib/nvme/nvme_zns.o
00:02:32.411    CC lib/nvme/nvme_stubs.o
00:02:32.411    CC lib/nvme/nvme_auth.o
00:02:32.411    CC lib/nvme/nvme_cuse.o
00:02:32.411    CC lib/nvme/nvme_vfio_user.o
00:02:32.411    CC lib/nvme/nvme_rdma.o
00:02:33.347    LIB libspdk_thread.a
00:02:33.347    SO libspdk_thread.so.11.0
00:02:33.347    SYMLINK libspdk_thread.so
00:02:33.605    CC lib/accel/accel.o
00:02:33.605    CC lib/accel/accel_rpc.o
00:02:33.605    CC lib/fsdev/fsdev.o
00:02:33.605    CC lib/fsdev/fsdev_io.o
00:02:33.605    CC lib/blob/blobstore.o
00:02:33.605    CC lib/accel/accel_sw.o
00:02:33.605    CC lib/fsdev/fsdev_rpc.o
00:02:33.605    CC lib/blob/request.o
00:02:33.605    CC lib/vfu_tgt/tgt_endpoint.o
00:02:33.605    CC lib/virtio/virtio.o
00:02:33.605    CC lib/blob/zeroes.o
00:02:33.605    CC lib/virtio/virtio_vhost_user.o
00:02:33.605    CC lib/blob/blob_bs_dev.o
00:02:33.605    CC lib/vfu_tgt/tgt_rpc.o
00:02:33.605    CC lib/init/json_config.o
00:02:33.605    CC lib/virtio/virtio_vfio_user.o
00:02:33.605    CC lib/init/subsystem.o
00:02:33.605    CC lib/virtio/virtio_pci.o
00:02:33.605    CC lib/init/subsystem_rpc.o
00:02:33.605    CC lib/init/rpc.o
00:02:33.863    LIB libspdk_init.a
00:02:33.863    SO libspdk_init.so.6.0
00:02:33.863    LIB libspdk_virtio.a
00:02:33.863    SYMLINK libspdk_init.so
00:02:33.863    LIB libspdk_vfu_tgt.a
00:02:34.121    SO libspdk_vfu_tgt.so.3.0
00:02:34.121    SO libspdk_virtio.so.7.0
00:02:34.121    SYMLINK libspdk_vfu_tgt.so
00:02:34.121    SYMLINK libspdk_virtio.so
00:02:34.121    CC lib/event/app.o
00:02:34.121    CC lib/event/reactor.o
00:02:34.121    CC lib/event/log_rpc.o
00:02:34.121    CC lib/event/app_rpc.o
00:02:34.121    CC lib/event/scheduler_static.o
00:02:34.379    LIB libspdk_fsdev.a
00:02:34.379    SO libspdk_fsdev.so.2.0
00:02:34.379    SYMLINK libspdk_fsdev.so
00:02:34.638    CC lib/fuse_dispatcher/fuse_dispatcher.o
00:02:34.638    LIB libspdk_event.a
00:02:34.638    SO libspdk_event.so.14.0
00:02:34.638    SYMLINK libspdk_event.so
00:02:34.638    LIB libspdk_nvme.a
00:02:34.638    LIB libspdk_accel.a
00:02:34.897    SO libspdk_accel.so.16.0
00:02:34.897    SYMLINK libspdk_accel.so
00:02:34.897    SO libspdk_nvme.so.15.0
00:02:34.897    CC lib/bdev/bdev.o
00:02:34.897    CC lib/bdev/bdev_rpc.o
00:02:34.897    CC lib/bdev/bdev_zone.o
00:02:34.897    CC lib/bdev/part.o
00:02:34.897    CC lib/bdev/scsi_nvme.o
00:02:35.156    SYMLINK libspdk_nvme.so
00:02:35.157    LIB libspdk_fuse_dispatcher.a
00:02:35.157    SO libspdk_fuse_dispatcher.so.1.0
00:02:35.416    SYMLINK libspdk_fuse_dispatcher.so
00:02:36.794    LIB libspdk_blob.a
00:02:36.794    SO libspdk_blob.so.12.0
00:02:36.794    SYMLINK libspdk_blob.so
00:02:37.051    CC lib/blobfs/blobfs.o
00:02:37.051    CC lib/blobfs/tree.o
00:02:37.051    CC lib/lvol/lvol.o
00:02:37.617    LIB libspdk_bdev.a
00:02:37.876    SO libspdk_bdev.so.17.0
00:02:37.876    SYMLINK libspdk_bdev.so
00:02:37.876    LIB libspdk_blobfs.a
00:02:37.876    SO libspdk_blobfs.so.11.0
00:02:37.876    SYMLINK libspdk_blobfs.so
00:02:37.876    LIB libspdk_lvol.a
00:02:37.876    SO libspdk_lvol.so.11.0
00:02:37.876    CC lib/ublk/ublk.o
00:02:37.876    CC lib/nbd/nbd.o
00:02:37.876    CC lib/ublk/ublk_rpc.o
00:02:37.876    CC lib/nbd/nbd_rpc.o
00:02:37.876    CC lib/scsi/dev.o
00:02:37.876    CC lib/scsi/lun.o
00:02:37.876    CC lib/scsi/port.o
00:02:37.876    CC lib/nvmf/ctrlr.o
00:02:37.876    CC lib/nvmf/ctrlr_discovery.o
00:02:37.876    CC lib/scsi/scsi.o
00:02:37.876    CC lib/ftl/ftl_core.o
00:02:37.876    CC lib/nvmf/ctrlr_bdev.o
00:02:37.876    CC lib/scsi/scsi_bdev.o
00:02:37.876    CC lib/ftl/ftl_init.o
00:02:37.876    CC lib/scsi/scsi_pr.o
00:02:37.876    CC lib/nvmf/subsystem.o
00:02:37.876    CC lib/ftl/ftl_layout.o
00:02:37.876    CC lib/nvmf/nvmf.o
00:02:37.876    CC lib/scsi/scsi_rpc.o
00:02:37.876    CC lib/ftl/ftl_debug.o
00:02:37.876    CC lib/nvmf/nvmf_rpc.o
00:02:37.876    CC lib/ftl/ftl_io.o
00:02:37.876    CC lib/scsi/task.o
00:02:37.876    CC lib/nvmf/transport.o
00:02:37.876    CC lib/ftl/ftl_l2p.o
00:02:37.876    CC lib/nvmf/tcp.o
00:02:37.876    CC lib/ftl/ftl_sb.o
00:02:37.876    CC lib/nvmf/stubs.o
00:02:37.876    CC lib/ftl/ftl_l2p_flat.o
00:02:37.876    CC lib/nvmf/mdns_server.o
00:02:37.876    CC lib/ftl/ftl_nv_cache.o
00:02:37.876    CC lib/nvmf/vfio_user.o
00:02:37.876    CC lib/ftl/ftl_band.o
00:02:37.876    CC lib/nvmf/rdma.o
00:02:37.876    CC lib/ftl/ftl_band_ops.o
00:02:37.876    CC lib/nvmf/auth.o
00:02:37.876    CC lib/ftl/ftl_rq.o
00:02:37.876    CC lib/ftl/ftl_writer.o
00:02:37.876    CC lib/ftl/ftl_reloc.o
00:02:37.876    CC lib/ftl/ftl_l2p_cache.o
00:02:38.142    CC lib/ftl/ftl_p2l.o
00:02:38.142    CC lib/ftl/ftl_p2l_log.o
00:02:38.142    CC lib/ftl/mngt/ftl_mngt.o
00:02:38.142    CC lib/ftl/mngt/ftl_mngt_bdev.o
00:02:38.142    CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:02:38.142    CC lib/ftl/mngt/ftl_mngt_startup.o
00:02:38.142    CC lib/ftl/mngt/ftl_mngt_md.o
00:02:38.142    SYMLINK libspdk_lvol.so
00:02:38.142    CC lib/ftl/mngt/ftl_mngt_misc.o
00:02:38.404    CC lib/ftl/mngt/ftl_mngt_ioch.o
00:02:38.404    CC lib/ftl/mngt/ftl_mngt_l2p.o
00:02:38.404    CC lib/ftl/mngt/ftl_mngt_band.o
00:02:38.404    CC lib/ftl/mngt/ftl_mngt_self_test.o
00:02:38.405    CC lib/ftl/mngt/ftl_mngt_p2l.o
00:02:38.405    CC lib/ftl/mngt/ftl_mngt_recovery.o
00:02:38.405    CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:02:38.405    CC lib/ftl/utils/ftl_conf.o
00:02:38.405    CC lib/ftl/utils/ftl_md.o
00:02:38.405    CC lib/ftl/utils/ftl_mempool.o
00:02:38.405    CC lib/ftl/utils/ftl_bitmap.o
00:02:38.405    CC lib/ftl/utils/ftl_property.o
00:02:38.405    CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:02:38.405    CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:02:38.405    CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:02:38.671    CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:02:38.671    CC lib/ftl/upgrade/ftl_band_upgrade.o
00:02:38.671    CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:02:38.671    CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:02:38.671    CC lib/ftl/upgrade/ftl_sb_v3.o
00:02:38.671    CC lib/ftl/upgrade/ftl_sb_v5.o
00:02:38.671    CC lib/ftl/nvc/ftl_nvc_dev.o
00:02:38.671    CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:02:38.671    CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:02:38.671    CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:02:38.671    CC lib/ftl/base/ftl_base_dev.o
00:02:38.671    CC lib/ftl/base/ftl_base_bdev.o
00:02:38.671    CC lib/ftl/ftl_trace.o
00:02:38.931    LIB libspdk_nbd.a
00:02:38.931    SO libspdk_nbd.so.7.0
00:02:38.931    LIB libspdk_scsi.a
00:02:38.931    SYMLINK libspdk_nbd.so
00:02:38.931    SO libspdk_scsi.so.9.0
00:02:39.190    SYMLINK libspdk_scsi.so
00:02:39.190    LIB libspdk_ublk.a
00:02:39.190    SO libspdk_ublk.so.3.0
00:02:39.190    SYMLINK libspdk_ublk.so
00:02:39.190    CC lib/iscsi/conn.o
00:02:39.190    CC lib/vhost/vhost.o
00:02:39.190    CC lib/iscsi/init_grp.o
00:02:39.190    CC lib/iscsi/iscsi.o
00:02:39.190    CC lib/vhost/vhost_rpc.o
00:02:39.190    CC lib/iscsi/param.o
00:02:39.190    CC lib/vhost/vhost_scsi.o
00:02:39.190    CC lib/vhost/vhost_blk.o
00:02:39.190    CC lib/iscsi/portal_grp.o
00:02:39.190    CC lib/iscsi/tgt_node.o
00:02:39.190    CC lib/vhost/rte_vhost_user.o
00:02:39.190    CC lib/iscsi/iscsi_subsystem.o
00:02:39.190    CC lib/iscsi/iscsi_rpc.o
00:02:39.190    CC lib/iscsi/task.o
00:02:39.449    LIB libspdk_ftl.a
00:02:39.708    SO libspdk_ftl.so.9.0
00:02:39.966    SYMLINK libspdk_ftl.so
00:02:40.532    LIB libspdk_vhost.a
00:02:40.532    SO libspdk_vhost.so.8.0
00:02:40.532    SYMLINK libspdk_vhost.so
00:02:40.791    LIB libspdk_iscsi.a
00:02:40.791    LIB libspdk_nvmf.a
00:02:40.791    SO libspdk_iscsi.so.8.0
00:02:40.791    SO libspdk_nvmf.so.20.0
00:02:40.791    SYMLINK libspdk_iscsi.so
00:02:41.049    SYMLINK libspdk_nvmf.so
00:02:41.307    CC module/env_dpdk/env_dpdk_rpc.o
00:02:41.307    CC module/vfu_device/vfu_virtio.o
00:02:41.307    CC module/vfu_device/vfu_virtio_blk.o
00:02:41.307    CC module/vfu_device/vfu_virtio_scsi.o
00:02:41.307    CC module/vfu_device/vfu_virtio_rpc.o
00:02:41.307    CC module/vfu_device/vfu_virtio_fs.o
00:02:41.307    CC module/accel/error/accel_error.o
00:02:41.307    CC module/blob/bdev/blob_bdev.o
00:02:41.307    CC module/accel/ioat/accel_ioat.o
00:02:41.307    CC module/accel/error/accel_error_rpc.o
00:02:41.307    CC module/keyring/linux/keyring.o
00:02:41.307    CC module/accel/ioat/accel_ioat_rpc.o
00:02:41.307    CC module/accel/dsa/accel_dsa.o
00:02:41.307    CC module/keyring/file/keyring.o
00:02:41.307    CC module/keyring/linux/keyring_rpc.o
00:02:41.307    CC module/keyring/file/keyring_rpc.o
00:02:41.307    CC module/accel/dsa/accel_dsa_rpc.o
00:02:41.307    CC module/scheduler/dynamic/scheduler_dynamic.o
00:02:41.307    CC module/sock/posix/posix.o
00:02:41.307    CC module/fsdev/aio/fsdev_aio.o
00:02:41.307    CC module/scheduler/dpdk_governor/dpdk_governor.o
00:02:41.307    CC module/fsdev/aio/fsdev_aio_rpc.o
00:02:41.307    CC module/fsdev/aio/linux_aio_mgr.o
00:02:41.307    CC module/scheduler/gscheduler/gscheduler.o
00:02:41.307    CC module/accel/iaa/accel_iaa_rpc.o
00:02:41.307    CC module/accel/iaa/accel_iaa.o
00:02:41.307    LIB libspdk_env_dpdk_rpc.a
00:02:41.307    SO libspdk_env_dpdk_rpc.so.6.0
00:02:41.565    SYMLINK libspdk_env_dpdk_rpc.so
00:02:41.565    LIB libspdk_keyring_file.a
00:02:41.565    LIB libspdk_scheduler_gscheduler.a
00:02:41.565    LIB libspdk_scheduler_dpdk_governor.a
00:02:41.565    SO libspdk_keyring_file.so.2.0
00:02:41.565    SO libspdk_scheduler_gscheduler.so.4.0
00:02:41.565    SO libspdk_scheduler_dpdk_governor.so.4.0
00:02:41.565    LIB libspdk_accel_error.a
00:02:41.565    LIB libspdk_keyring_linux.a
00:02:41.565    SYMLINK libspdk_scheduler_gscheduler.so
00:02:41.565    SYMLINK libspdk_keyring_file.so
00:02:41.565    SO libspdk_accel_error.so.2.0
00:02:41.565    SYMLINK libspdk_scheduler_dpdk_governor.so
00:02:41.565    SO libspdk_keyring_linux.so.1.0
00:02:41.565    LIB libspdk_accel_ioat.a
00:02:41.565    LIB libspdk_blob_bdev.a
00:02:41.565    SYMLINK libspdk_accel_error.so
00:02:41.565    LIB libspdk_scheduler_dynamic.a
00:02:41.565    LIB libspdk_accel_iaa.a
00:02:41.565    SO libspdk_blob_bdev.so.12.0
00:02:41.565    LIB libspdk_accel_dsa.a
00:02:41.565    SO libspdk_accel_ioat.so.6.0
00:02:41.565    SYMLINK libspdk_keyring_linux.so
00:02:41.565    SO libspdk_scheduler_dynamic.so.4.0
00:02:41.824    SO libspdk_accel_dsa.so.5.0
00:02:41.824    SO libspdk_accel_iaa.so.3.0
00:02:41.824    SYMLINK libspdk_blob_bdev.so
00:02:41.824    SYMLINK libspdk_accel_ioat.so
00:02:41.824    SYMLINK libspdk_scheduler_dynamic.so
00:02:41.824    SYMLINK libspdk_accel_iaa.so
00:02:41.824    SYMLINK libspdk_accel_dsa.so
00:02:41.824    LIB libspdk_vfu_device.a
00:02:42.140    SO libspdk_vfu_device.so.3.0
00:02:42.140    CC module/bdev/gpt/gpt.o
00:02:42.140    CC module/bdev/gpt/vbdev_gpt.o
00:02:42.140    CC module/bdev/delay/vbdev_delay.o
00:02:42.140    CC module/blobfs/bdev/blobfs_bdev.o
00:02:42.140    CC module/bdev/delay/vbdev_delay_rpc.o
00:02:42.140    CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:02:42.140    CC module/bdev/malloc/bdev_malloc.o
00:02:42.140    CC module/bdev/nvme/bdev_nvme.o
00:02:42.140    CC module/bdev/zone_block/vbdev_zone_block.o
00:02:42.140    CC module/bdev/passthru/vbdev_passthru.o
00:02:42.140    CC module/bdev/malloc/bdev_malloc_rpc.o
00:02:42.140    CC module/bdev/error/vbdev_error.o
00:02:42.140    CC module/bdev/error/vbdev_error_rpc.o
00:02:42.140    CC module/bdev/nvme/bdev_nvme_rpc.o
00:02:42.140    CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:02:42.140    CC module/bdev/aio/bdev_aio.o
00:02:42.140    CC module/bdev/passthru/vbdev_passthru_rpc.o
00:02:42.140    CC module/bdev/nvme/nvme_rpc.o
00:02:42.140    CC module/bdev/aio/bdev_aio_rpc.o
00:02:42.140    CC module/bdev/nvme/bdev_mdns_client.o
00:02:42.140    CC module/bdev/split/vbdev_split.o
00:02:42.140    CC module/bdev/lvol/vbdev_lvol.o
00:02:42.140    CC module/bdev/nvme/vbdev_opal.o
00:02:42.140    CC module/bdev/null/bdev_null.o
00:02:42.140    CC module/bdev/raid/bdev_raid.o
00:02:42.140    CC module/bdev/null/bdev_null_rpc.o
00:02:42.140    CC module/bdev/raid/bdev_raid_rpc.o
00:02:42.140    CC module/bdev/split/vbdev_split_rpc.o
00:02:42.140    CC module/bdev/nvme/vbdev_opal_rpc.o
00:02:42.140    CC module/bdev/raid/bdev_raid_sb.o
00:02:42.140    CC module/bdev/lvol/vbdev_lvol_rpc.o
00:02:42.140    CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:02:42.140    CC module/bdev/raid/raid0.o
00:02:42.140    CC module/bdev/iscsi/bdev_iscsi.o
00:02:42.140    CC module/bdev/raid/raid1.o
00:02:42.140    CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:02:42.140    CC module/bdev/raid/concat.o
00:02:42.140    CC module/bdev/virtio/bdev_virtio_scsi.o
00:02:42.140    CC module/bdev/virtio/bdev_virtio_blk.o
00:02:42.140    CC module/bdev/ftl/bdev_ftl.o
00:02:42.140    CC module/bdev/ftl/bdev_ftl_rpc.o
00:02:42.140    CC module/bdev/virtio/bdev_virtio_rpc.o
00:02:42.140    SYMLINK libspdk_vfu_device.so
00:02:42.140    LIB libspdk_fsdev_aio.a
00:02:42.140    SO libspdk_fsdev_aio.so.1.0
00:02:42.398    LIB libspdk_sock_posix.a
00:02:42.398    SYMLINK libspdk_fsdev_aio.so
00:02:42.398    SO libspdk_sock_posix.so.6.0
00:02:42.398    LIB libspdk_blobfs_bdev.a
00:02:42.398    SYMLINK libspdk_sock_posix.so
00:02:42.398    SO libspdk_blobfs_bdev.so.6.0
00:02:42.398    LIB libspdk_bdev_split.a
00:02:42.398    SO libspdk_bdev_split.so.6.0
00:02:42.398    LIB libspdk_bdev_iscsi.a
00:02:42.398    LIB libspdk_bdev_gpt.a
00:02:42.398    LIB libspdk_bdev_error.a
00:02:42.398    SYMLINK libspdk_blobfs_bdev.so
00:02:42.398    SO libspdk_bdev_gpt.so.6.0
00:02:42.398    LIB libspdk_bdev_ftl.a
00:02:42.398    SO libspdk_bdev_iscsi.so.6.0
00:02:42.398    SO libspdk_bdev_error.so.6.0
00:02:42.398    LIB libspdk_bdev_null.a
00:02:42.398    SYMLINK libspdk_bdev_split.so
00:02:42.398    LIB libspdk_bdev_passthru.a
00:02:42.665    SO libspdk_bdev_ftl.so.6.0
00:02:42.665    SO libspdk_bdev_null.so.6.0
00:02:42.665    SO libspdk_bdev_passthru.so.6.0
00:02:42.665    SYMLINK libspdk_bdev_gpt.so
00:02:42.665    SYMLINK libspdk_bdev_iscsi.so
00:02:42.665    SYMLINK libspdk_bdev_error.so
00:02:42.665    LIB libspdk_bdev_aio.a
00:02:42.665    LIB libspdk_bdev_zone_block.a
00:02:42.665    SYMLINK libspdk_bdev_ftl.so
00:02:42.665    LIB libspdk_bdev_malloc.a
00:02:42.665    SO libspdk_bdev_aio.so.6.0
00:02:42.665    SYMLINK libspdk_bdev_null.so
00:02:42.665    SYMLINK libspdk_bdev_passthru.so
00:02:42.665    SO libspdk_bdev_zone_block.so.6.0
00:02:42.665    SO libspdk_bdev_malloc.so.6.0
00:02:42.665    LIB libspdk_bdev_delay.a
00:02:42.665    SYMLINK libspdk_bdev_aio.so
00:02:42.665    SO libspdk_bdev_delay.so.6.0
00:02:42.665    SYMLINK libspdk_bdev_zone_block.so
00:02:42.665    SYMLINK libspdk_bdev_malloc.so
00:02:42.665    SYMLINK libspdk_bdev_delay.so
00:02:42.665    LIB libspdk_bdev_lvol.a
00:02:42.665    LIB libspdk_bdev_virtio.a
00:02:42.665    SO libspdk_bdev_lvol.so.6.0
00:02:42.665    SO libspdk_bdev_virtio.so.6.0
00:02:42.923    SYMLINK libspdk_bdev_lvol.so
00:02:42.923    SYMLINK libspdk_bdev_virtio.so
00:02:43.180    LIB libspdk_bdev_raid.a
00:02:43.180    SO libspdk_bdev_raid.so.6.0
00:02:43.437    SYMLINK libspdk_bdev_raid.so
00:02:44.815    LIB libspdk_bdev_nvme.a
00:02:44.815    SO libspdk_bdev_nvme.so.7.1
00:02:44.815    SYMLINK libspdk_bdev_nvme.so
00:02:45.383    CC module/event/subsystems/iobuf/iobuf.o
00:02:45.383    CC module/event/subsystems/iobuf/iobuf_rpc.o
00:02:45.383    CC module/event/subsystems/vmd/vmd.o
00:02:45.383    CC module/event/subsystems/keyring/keyring.o
00:02:45.383    CC module/event/subsystems/vmd/vmd_rpc.o
00:02:45.383    CC module/event/subsystems/fsdev/fsdev.o
00:02:45.383    CC module/event/subsystems/vfu_tgt/vfu_tgt.o
00:02:45.383    CC module/event/subsystems/scheduler/scheduler.o
00:02:45.383    CC module/event/subsystems/sock/sock.o
00:02:45.383    CC module/event/subsystems/vhost_blk/vhost_blk.o
00:02:45.383    LIB libspdk_event_keyring.a
00:02:45.383    LIB libspdk_event_vhost_blk.a
00:02:45.383    LIB libspdk_event_vmd.a
00:02:45.383    LIB libspdk_event_fsdev.a
00:02:45.383    LIB libspdk_event_vfu_tgt.a
00:02:45.383    LIB libspdk_event_scheduler.a
00:02:45.383    LIB libspdk_event_sock.a
00:02:45.383    SO libspdk_event_keyring.so.1.0
00:02:45.383    LIB libspdk_event_iobuf.a
00:02:45.383    SO libspdk_event_vhost_blk.so.3.0
00:02:45.383    SO libspdk_event_fsdev.so.1.0
00:02:45.383    SO libspdk_event_vfu_tgt.so.3.0
00:02:45.383    SO libspdk_event_vmd.so.6.0
00:02:45.383    SO libspdk_event_sock.so.5.0
00:02:45.383    SO libspdk_event_scheduler.so.4.0
00:02:45.383    SO libspdk_event_iobuf.so.3.0
00:02:45.383    SYMLINK libspdk_event_keyring.so
00:02:45.642    SYMLINK libspdk_event_vhost_blk.so
00:02:45.642    SYMLINK libspdk_event_fsdev.so
00:02:45.642    SYMLINK libspdk_event_vfu_tgt.so
00:02:45.642    SYMLINK libspdk_event_sock.so
00:02:45.642    SYMLINK libspdk_event_scheduler.so
00:02:45.642    SYMLINK libspdk_event_vmd.so
00:02:45.642    SYMLINK libspdk_event_iobuf.so
00:02:45.642    CC module/event/subsystems/accel/accel.o
00:02:45.901    LIB libspdk_event_accel.a
00:02:45.901    SO libspdk_event_accel.so.6.0
00:02:45.901    SYMLINK libspdk_event_accel.so
00:02:46.162    CC module/event/subsystems/bdev/bdev.o
00:02:46.162    LIB libspdk_event_bdev.a
00:02:46.421    SO libspdk_event_bdev.so.6.0
00:02:46.421    SYMLINK libspdk_event_bdev.so
00:02:46.421    CC module/event/subsystems/nbd/nbd.o
00:02:46.421    CC module/event/subsystems/scsi/scsi.o
00:02:46.421    CC module/event/subsystems/ublk/ublk.o
00:02:46.421    CC module/event/subsystems/nvmf/nvmf_rpc.o
00:02:46.421    CC module/event/subsystems/nvmf/nvmf_tgt.o
00:02:46.679    LIB libspdk_event_nbd.a
00:02:46.679    LIB libspdk_event_ublk.a
00:02:46.679    LIB libspdk_event_scsi.a
00:02:46.679    SO libspdk_event_nbd.so.6.0
00:02:46.679    SO libspdk_event_ublk.so.3.0
00:02:46.679    SO libspdk_event_scsi.so.6.0
00:02:46.679    SYMLINK libspdk_event_nbd.so
00:02:46.679    SYMLINK libspdk_event_ublk.so
00:02:46.679    SYMLINK libspdk_event_scsi.so
00:02:46.679    LIB libspdk_event_nvmf.a
00:02:46.679    SO libspdk_event_nvmf.so.6.0
00:02:46.937    SYMLINK libspdk_event_nvmf.so
00:02:46.937    CC module/event/subsystems/iscsi/iscsi.o
00:02:46.937    CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:02:47.195    LIB libspdk_event_vhost_scsi.a
00:02:47.195    LIB libspdk_event_iscsi.a
00:02:47.195    SO libspdk_event_vhost_scsi.so.3.0
00:02:47.195    SO libspdk_event_iscsi.so.6.0
00:02:47.195    SYMLINK libspdk_event_vhost_scsi.so
00:02:47.195    SYMLINK libspdk_event_iscsi.so
00:02:47.195    SO libspdk.so.6.0
00:02:47.195    SYMLINK libspdk.so
00:02:47.458    CC app/trace_record/trace_record.o
00:02:47.458    CXX app/trace/trace.o
00:02:47.458    CC app/spdk_lspci/spdk_lspci.o
00:02:47.458    CC test/rpc_client/rpc_client_test.o
00:02:47.458    CC app/spdk_nvme_discover/discovery_aer.o
00:02:47.458    CC app/spdk_top/spdk_top.o
00:02:47.458    CC app/spdk_nvme_identify/identify.o
00:02:47.458    CC app/spdk_nvme_perf/perf.o
00:02:47.458    TEST_HEADER include/spdk/accel.h
00:02:47.458    TEST_HEADER include/spdk/accel_module.h
00:02:47.458    TEST_HEADER include/spdk/assert.h
00:02:47.458    TEST_HEADER include/spdk/barrier.h
00:02:47.458    TEST_HEADER include/spdk/base64.h
00:02:47.458    TEST_HEADER include/spdk/bdev.h
00:02:47.458    TEST_HEADER include/spdk/bdev_module.h
00:02:47.458    TEST_HEADER include/spdk/bdev_zone.h
00:02:47.458    TEST_HEADER include/spdk/bit_array.h
00:02:47.458    TEST_HEADER include/spdk/bit_pool.h
00:02:47.458    TEST_HEADER include/spdk/blob_bdev.h
00:02:47.458    TEST_HEADER include/spdk/blobfs_bdev.h
00:02:47.458    TEST_HEADER include/spdk/blobfs.h
00:02:47.459    TEST_HEADER include/spdk/blob.h
00:02:47.459    TEST_HEADER include/spdk/conf.h
00:02:47.459    TEST_HEADER include/spdk/cpuset.h
00:02:47.459    TEST_HEADER include/spdk/config.h
00:02:47.459    TEST_HEADER include/spdk/crc16.h
00:02:47.459    TEST_HEADER include/spdk/crc32.h
00:02:47.459    TEST_HEADER include/spdk/crc64.h
00:02:47.459    TEST_HEADER include/spdk/dif.h
00:02:47.459    TEST_HEADER include/spdk/dma.h
00:02:47.459    TEST_HEADER include/spdk/endian.h
00:02:47.459    TEST_HEADER include/spdk/env_dpdk.h
00:02:47.459    TEST_HEADER include/spdk/env.h
00:02:47.459    TEST_HEADER include/spdk/event.h
00:02:47.459    TEST_HEADER include/spdk/fd_group.h
00:02:47.459    TEST_HEADER include/spdk/fd.h
00:02:47.459    TEST_HEADER include/spdk/file.h
00:02:47.459    TEST_HEADER include/spdk/fsdev_module.h
00:02:47.459    TEST_HEADER include/spdk/fsdev.h
00:02:47.459    TEST_HEADER include/spdk/ftl.h
00:02:47.459    TEST_HEADER include/spdk/fuse_dispatcher.h
00:02:47.459    TEST_HEADER include/spdk/gpt_spec.h
00:02:47.459    TEST_HEADER include/spdk/hexlify.h
00:02:47.459    TEST_HEADER include/spdk/histogram_data.h
00:02:47.459    TEST_HEADER include/spdk/idxd.h
00:02:47.459    TEST_HEADER include/spdk/idxd_spec.h
00:02:47.459    TEST_HEADER include/spdk/init.h
00:02:47.459    TEST_HEADER include/spdk/ioat.h
00:02:47.459    TEST_HEADER include/spdk/iscsi_spec.h
00:02:47.459    TEST_HEADER include/spdk/ioat_spec.h
00:02:47.459    TEST_HEADER include/spdk/json.h
00:02:47.459    TEST_HEADER include/spdk/jsonrpc.h
00:02:47.459    TEST_HEADER include/spdk/keyring_module.h
00:02:47.459    TEST_HEADER include/spdk/keyring.h
00:02:47.459    TEST_HEADER include/spdk/likely.h
00:02:47.459    TEST_HEADER include/spdk/log.h
00:02:47.459    TEST_HEADER include/spdk/md5.h
00:02:47.459    TEST_HEADER include/spdk/lvol.h
00:02:47.459    TEST_HEADER include/spdk/memory.h
00:02:47.459    TEST_HEADER include/spdk/mmio.h
00:02:47.459    TEST_HEADER include/spdk/nbd.h
00:02:47.459    TEST_HEADER include/spdk/net.h
00:02:47.459    TEST_HEADER include/spdk/notify.h
00:02:47.459    TEST_HEADER include/spdk/nvme.h
00:02:47.459    TEST_HEADER include/spdk/nvme_ocssd.h
00:02:47.459    TEST_HEADER include/spdk/nvme_intel.h
00:02:47.459    TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:02:47.459    TEST_HEADER include/spdk/nvme_zns.h
00:02:47.459    TEST_HEADER include/spdk/nvme_spec.h
00:02:47.459    TEST_HEADER include/spdk/nvmf_cmd.h
00:02:47.459    TEST_HEADER include/spdk/nvmf_fc_spec.h
00:02:47.459    TEST_HEADER include/spdk/nvmf.h
00:02:47.459    TEST_HEADER include/spdk/nvmf_spec.h
00:02:47.459    TEST_HEADER include/spdk/nvmf_transport.h
00:02:47.459    TEST_HEADER include/spdk/opal.h
00:02:47.459    TEST_HEADER include/spdk/pci_ids.h
00:02:47.459    TEST_HEADER include/spdk/opal_spec.h
00:02:47.459    TEST_HEADER include/spdk/pipe.h
00:02:47.459    TEST_HEADER include/spdk/queue.h
00:02:47.459    TEST_HEADER include/spdk/reduce.h
00:02:47.459    TEST_HEADER include/spdk/scheduler.h
00:02:47.459    TEST_HEADER include/spdk/rpc.h
00:02:47.459    TEST_HEADER include/spdk/scsi.h
00:02:47.459    TEST_HEADER include/spdk/scsi_spec.h
00:02:47.459    TEST_HEADER include/spdk/sock.h
00:02:47.459    TEST_HEADER include/spdk/stdinc.h
00:02:47.459    CC examples/interrupt_tgt/interrupt_tgt.o
00:02:47.459    TEST_HEADER include/spdk/string.h
00:02:47.459    TEST_HEADER include/spdk/thread.h
00:02:47.459    TEST_HEADER include/spdk/trace.h
00:02:47.459    TEST_HEADER include/spdk/trace_parser.h
00:02:47.459    TEST_HEADER include/spdk/ublk.h
00:02:47.459    TEST_HEADER include/spdk/tree.h
00:02:47.459    TEST_HEADER include/spdk/util.h
00:02:47.459    TEST_HEADER include/spdk/uuid.h
00:02:47.459    TEST_HEADER include/spdk/version.h
00:02:47.459    TEST_HEADER include/spdk/vfio_user_pci.h
00:02:47.459    CC app/spdk_dd/spdk_dd.o
00:02:47.459    TEST_HEADER include/spdk/vfio_user_spec.h
00:02:47.459    TEST_HEADER include/spdk/vhost.h
00:02:47.459    TEST_HEADER include/spdk/xor.h
00:02:47.459    TEST_HEADER include/spdk/vmd.h
00:02:47.459    TEST_HEADER include/spdk/zipf.h
00:02:47.459    CXX test/cpp_headers/accel.o
00:02:47.459    CXX test/cpp_headers/accel_module.o
00:02:47.459    CXX test/cpp_headers/assert.o
00:02:47.459    CXX test/cpp_headers/barrier.o
00:02:47.459    CXX test/cpp_headers/base64.o
00:02:47.459    CXX test/cpp_headers/bdev.o
00:02:47.459    CXX test/cpp_headers/bdev_module.o
00:02:47.459    CXX test/cpp_headers/bdev_zone.o
00:02:47.459    CXX test/cpp_headers/bit_array.o
00:02:47.459    CXX test/cpp_headers/bit_pool.o
00:02:47.459    CXX test/cpp_headers/blob_bdev.o
00:02:47.459    CXX test/cpp_headers/blobfs_bdev.o
00:02:47.459    CXX test/cpp_headers/blobfs.o
00:02:47.459    CXX test/cpp_headers/blob.o
00:02:47.459    CC app/iscsi_tgt/iscsi_tgt.o
00:02:47.459    CXX test/cpp_headers/conf.o
00:02:47.459    CXX test/cpp_headers/config.o
00:02:47.459    CXX test/cpp_headers/cpuset.o
00:02:47.459    CXX test/cpp_headers/crc16.o
00:02:47.459    CC app/nvmf_tgt/nvmf_main.o
00:02:47.726    CXX test/cpp_headers/crc32.o
00:02:47.726    CC examples/ioat/verify/verify.o
00:02:47.726    CC app/spdk_tgt/spdk_tgt.o
00:02:47.726    CC test/app/histogram_perf/histogram_perf.o
00:02:47.726    CC examples/util/zipf/zipf.o
00:02:47.726    CC examples/ioat/perf/perf.o
00:02:47.726    CC test/env/vtophys/vtophys.o
00:02:47.726    CC test/app/jsoncat/jsoncat.o
00:02:47.726    CC test/app/stub/stub.o
00:02:47.726    CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:02:47.726    CC test/env/pci/pci_ut.o
00:02:47.726    CC test/env/memory/memory_ut.o
00:02:47.726    CC test/thread/poller_perf/poller_perf.o
00:02:47.726    CC app/fio/nvme/fio_plugin.o
00:02:47.726    CC test/dma/test_dma/test_dma.o
00:02:47.726    CC test/app/bdev_svc/bdev_svc.o
00:02:47.726    CC app/fio/bdev/fio_plugin.o
00:02:47.726    LINK spdk_lspci
00:02:47.726    CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:02:47.987    CC test/env/mem_callbacks/mem_callbacks.o
00:02:47.987    LINK rpc_client_test
00:02:47.987    LINK spdk_nvme_discover
00:02:47.987    LINK jsoncat
00:02:47.987    LINK histogram_perf
00:02:47.987    LINK interrupt_tgt
00:02:47.987    LINK zipf
00:02:47.987    LINK poller_perf
00:02:47.987    LINK vtophys
00:02:47.987    CXX test/cpp_headers/crc64.o
00:02:47.987    CXX test/cpp_headers/dif.o
00:02:47.987    CXX test/cpp_headers/dma.o
00:02:47.987    CXX test/cpp_headers/endian.o
00:02:47.987    CXX test/cpp_headers/env_dpdk.o
00:02:47.987    CXX test/cpp_headers/env.o
00:02:47.987    LINK nvmf_tgt
00:02:47.987    CXX test/cpp_headers/event.o
00:02:47.987    CXX test/cpp_headers/fd_group.o
00:02:47.987    CXX test/cpp_headers/fd.o
00:02:47.987    CXX test/cpp_headers/file.o
00:02:47.987    LINK env_dpdk_post_init
00:02:47.987    CXX test/cpp_headers/fsdev.o
00:02:47.987    LINK iscsi_tgt
00:02:47.987    LINK spdk_trace_record
00:02:47.987    LINK stub
00:02:47.987    CXX test/cpp_headers/fsdev_module.o
00:02:47.987    CXX test/cpp_headers/ftl.o
00:02:47.987    CXX test/cpp_headers/fuse_dispatcher.o
00:02:47.987    LINK verify
00:02:47.987    CXX test/cpp_headers/gpt_spec.o
00:02:47.987    LINK ioat_perf
00:02:47.987    LINK bdev_svc
00:02:48.252    LINK spdk_tgt
00:02:48.252    CXX test/cpp_headers/hexlify.o
00:02:48.252    CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:02:48.252    CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:02:48.252    CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:02:48.252    CXX test/cpp_headers/histogram_data.o
00:02:48.252    CXX test/cpp_headers/idxd.o
00:02:48.252    CXX test/cpp_headers/idxd_spec.o
00:02:48.252    CXX test/cpp_headers/init.o
00:02:48.252    LINK spdk_dd
00:02:48.252    CXX test/cpp_headers/ioat.o
00:02:48.252    CXX test/cpp_headers/ioat_spec.o
00:02:48.252    CXX test/cpp_headers/iscsi_spec.o
00:02:48.516    CXX test/cpp_headers/json.o
00:02:48.516    LINK spdk_trace
00:02:48.516    CXX test/cpp_headers/jsonrpc.o
00:02:48.516    CXX test/cpp_headers/keyring.o
00:02:48.516    CXX test/cpp_headers/keyring_module.o
00:02:48.516    CXX test/cpp_headers/likely.o
00:02:48.516    CXX test/cpp_headers/log.o
00:02:48.516    CXX test/cpp_headers/lvol.o
00:02:48.516    CXX test/cpp_headers/md5.o
00:02:48.516    CXX test/cpp_headers/memory.o
00:02:48.516    CXX test/cpp_headers/mmio.o
00:02:48.516    CXX test/cpp_headers/nbd.o
00:02:48.516    CXX test/cpp_headers/net.o
00:02:48.516    CXX test/cpp_headers/notify.o
00:02:48.516    CXX test/cpp_headers/nvme.o
00:02:48.516    CXX test/cpp_headers/nvme_intel.o
00:02:48.516    CXX test/cpp_headers/nvme_ocssd.o
00:02:48.516    CXX test/cpp_headers/nvme_ocssd_spec.o
00:02:48.516    LINK pci_ut
00:02:48.516    CXX test/cpp_headers/nvme_spec.o
00:02:48.516    CXX test/cpp_headers/nvme_zns.o
00:02:48.516    CXX test/cpp_headers/nvmf_cmd.o
00:02:48.779    CC test/event/event_perf/event_perf.o
00:02:48.779    CXX test/cpp_headers/nvmf_fc_spec.o
00:02:48.779    CC test/event/reactor_perf/reactor_perf.o
00:02:48.779    LINK nvme_fuzz
00:02:48.779    CC test/event/reactor/reactor.o
00:02:48.779    CXX test/cpp_headers/nvmf.o
00:02:48.779    CXX test/cpp_headers/nvmf_spec.o
00:02:48.779    CXX test/cpp_headers/nvmf_transport.o
00:02:48.779    CXX test/cpp_headers/opal.o
00:02:48.779    CXX test/cpp_headers/opal_spec.o
00:02:48.779    LINK test_dma
00:02:48.779    CC examples/sock/hello_world/hello_sock.o
00:02:48.779    CC test/event/app_repeat/app_repeat.o
00:02:48.779    CC examples/idxd/perf/perf.o
00:02:48.779    CC examples/vmd/lsvmd/lsvmd.o
00:02:48.779    CC examples/thread/thread/thread_ex.o
00:02:48.779    CXX test/cpp_headers/pci_ids.o
00:02:48.779    CC test/event/scheduler/scheduler.o
00:02:48.779    CXX test/cpp_headers/pipe.o
00:02:48.779    CC examples/vmd/led/led.o
00:02:48.779    CXX test/cpp_headers/queue.o
00:02:48.779    CXX test/cpp_headers/reduce.o
00:02:48.779    CXX test/cpp_headers/rpc.o
00:02:48.779    CXX test/cpp_headers/scheduler.o
00:02:48.779    CXX test/cpp_headers/scsi.o
00:02:48.779    CXX test/cpp_headers/scsi_spec.o
00:02:48.779    CXX test/cpp_headers/sock.o
00:02:48.779    CXX test/cpp_headers/stdinc.o
00:02:48.779    CXX test/cpp_headers/string.o
00:02:48.779    CXX test/cpp_headers/thread.o
00:02:48.779    CXX test/cpp_headers/trace.o
00:02:49.040    LINK spdk_bdev
00:02:49.040    CXX test/cpp_headers/trace_parser.o
00:02:49.040    CXX test/cpp_headers/tree.o
00:02:49.040    CXX test/cpp_headers/ublk.o
00:02:49.040    CXX test/cpp_headers/util.o
00:02:49.040    CXX test/cpp_headers/uuid.o
00:02:49.040    LINK spdk_nvme
00:02:49.040    LINK reactor_perf
00:02:49.040    LINK reactor
00:02:49.040    CXX test/cpp_headers/version.o
00:02:49.040    LINK event_perf
00:02:49.040    LINK vhost_fuzz
00:02:49.040    CXX test/cpp_headers/vfio_user_pci.o
00:02:49.040    LINK lsvmd
00:02:49.040    CXX test/cpp_headers/vfio_user_spec.o
00:02:49.040    CXX test/cpp_headers/vhost.o
00:02:49.040    CC app/vhost/vhost.o
00:02:49.040    LINK spdk_nvme_perf
00:02:49.040    CXX test/cpp_headers/vmd.o
00:02:49.040    LINK app_repeat
00:02:49.040    CXX test/cpp_headers/xor.o
00:02:49.040    LINK mem_callbacks
00:02:49.040    CXX test/cpp_headers/zipf.o
00:02:49.040    LINK spdk_nvme_identify
00:02:49.040    LINK led
00:02:49.302    LINK hello_sock
00:02:49.302    LINK scheduler
00:02:49.302    LINK spdk_top
00:02:49.302    LINK thread
00:02:49.302    CC test/nvme/e2edp/nvme_dp.o
00:02:49.302    LINK idxd_perf
00:02:49.302    CC test/nvme/reset/reset.o
00:02:49.302    CC test/nvme/aer/aer.o
00:02:49.302    CC test/nvme/overhead/overhead.o
00:02:49.302    CC test/nvme/connect_stress/connect_stress.o
00:02:49.302    CC test/nvme/sgl/sgl.o
00:02:49.302    CC test/nvme/reserve/reserve.o
00:02:49.302    CC test/nvme/startup/startup.o
00:02:49.302    CC test/nvme/simple_copy/simple_copy.o
00:02:49.302    CC test/nvme/err_injection/err_injection.o
00:02:49.302    CC test/nvme/compliance/nvme_compliance.o
00:02:49.302    CC test/nvme/fused_ordering/fused_ordering.o
00:02:49.302    CC test/nvme/boot_partition/boot_partition.o
00:02:49.302    CC test/nvme/cuse/cuse.o
00:02:49.302    CC test/nvme/doorbell_aers/doorbell_aers.o
00:02:49.302    CC test/nvme/fdp/fdp.o
00:02:49.563    CC test/blobfs/mkfs/mkfs.o
00:02:49.563    LINK vhost
00:02:49.563    CC test/accel/dif/dif.o
00:02:49.563    CC test/lvol/esnap/esnap.o
00:02:49.563    LINK startup
00:02:49.563    CC examples/nvme/hotplug/hotplug.o
00:02:49.563    CC examples/nvme/hello_world/hello_world.o
00:02:49.563    LINK err_injection
00:02:49.563    CC examples/nvme/cmb_copy/cmb_copy.o
00:02:49.563    CC examples/nvme/reconnect/reconnect.o
00:02:49.563    CC examples/nvme/pmr_persistence/pmr_persistence.o
00:02:49.563    CC examples/nvme/abort/abort.o
00:02:49.563    LINK connect_stress
00:02:49.563    CC examples/nvme/nvme_manage/nvme_manage.o
00:02:49.563    CC examples/nvme/arbitration/arbitration.o
00:02:49.563    LINK fused_ordering
00:02:49.563    LINK mkfs
00:02:49.824    LINK boot_partition
00:02:49.824    LINK simple_copy
00:02:49.824    LINK reserve
00:02:49.824    LINK reset
00:02:49.824    LINK aer
00:02:49.824    LINK doorbell_aers
00:02:49.824    LINK memory_ut
00:02:49.824    CC examples/accel/perf/accel_perf.o
00:02:49.824    LINK overhead
00:02:49.824    LINK fdp
00:02:49.824    LINK nvme_compliance
00:02:49.824    CC examples/fsdev/hello_world/hello_fsdev.o
00:02:49.824    CC examples/blob/cli/blobcli.o
00:02:49.824    LINK sgl
00:02:49.824    LINK nvme_dp
00:02:49.824    CC examples/blob/hello_world/hello_blob.o
00:02:50.084    LINK hello_world
00:02:50.084    LINK hotplug
00:02:50.084    LINK pmr_persistence
00:02:50.084    LINK cmb_copy
00:02:50.084    LINK reconnect
00:02:50.084    LINK arbitration
00:02:50.084    LINK hello_blob
00:02:50.365    LINK abort
00:02:50.366    LINK dif
00:02:50.366    LINK hello_fsdev
00:02:50.366    LINK accel_perf
00:02:50.366    LINK nvme_manage
00:02:50.366    LINK blobcli
00:02:50.624    CC test/bdev/bdevio/bdevio.o
00:02:50.624    LINK iscsi_fuzz
00:02:50.624    CC examples/bdev/hello_world/hello_bdev.o
00:02:50.883    CC examples/bdev/bdevperf/bdevperf.o
00:02:50.883    LINK cuse
00:02:50.883    LINK hello_bdev
00:02:51.141    LINK bdevio
00:02:51.708    LINK bdevperf
00:02:51.967    CC examples/nvmf/nvmf/nvmf.o
00:02:52.225    LINK nvmf
00:02:54.804    LINK esnap
00:02:55.062  
00:02:55.062  real	1m10.490s
00:02:55.062  user	11m50.646s
00:02:55.062  sys	2m39.594s
00:02:55.062   03:53:23 make -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:55.062   03:53:23 make -- common/autotest_common.sh@10 -- $ set +x
00:02:55.062  ************************************
00:02:55.062  END TEST make
00:02:55.062  ************************************
00:02:55.062   03:53:23  -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:02:55.062   03:53:23  -- pm/common@29 -- $ signal_monitor_resources TERM
00:02:55.062   03:53:23  -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:02:55.062   03:53:23  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:55.062   03:53:23  -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:02:55.062   03:53:23  -- pm/common@44 -- $ pid=30132
00:02:55.062   03:53:23  -- pm/common@50 -- $ kill -TERM 30132
00:02:55.062   03:53:23  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:55.062   03:53:23  -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:02:55.062   03:53:23  -- pm/common@44 -- $ pid=30134
00:02:55.062   03:53:23  -- pm/common@50 -- $ kill -TERM 30134
00:02:55.062   03:53:23  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:55.062   03:53:23  -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:02:55.062   03:53:23  -- pm/common@44 -- $ pid=30135
00:02:55.062   03:53:23  -- pm/common@50 -- $ kill -TERM 30135
00:02:55.062   03:53:23  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:55.062   03:53:23  -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:02:55.062   03:53:23  -- pm/common@44 -- $ pid=30166
00:02:55.062   03:53:23  -- pm/common@50 -- $ sudo -E kill -TERM 30166
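The `stop_monitor_resources` teardown traced above loops over pidfiles for the power collectors and TERMs each recorded PID. A minimal standalone sketch of that pattern, with a temp directory and a background `sleep` standing in for the real `collect-*` monitors (the directory layout here is illustrative):

```shell
#!/usr/bin/env bash
# Sketch of pidfile-driven monitor teardown, as in pm/common above:
# a temp dir and a background `sleep` stand in for the real collectors.
power_dir=$(mktemp -d)
sleep 60 &
mon_pid=$!
echo "$mon_pid" > "$power_dir/collect-cpu-load.pid"

for pidfile in "$power_dir"/*.pid; do
    [ -e "$pidfile" ] || continue          # mirrors the [[ -e ...pid ]] guard
    kill -TERM "$(<"$pidfile")" 2>/dev/null || true
done
wait "$mon_pid" 2>/dev/null || true        # reap so the PID is really gone
```

The `-e` guard matters: a monitor that never started leaves no pidfile, and skipping it avoids killing a stale or reused PID.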
00:02:55.062   03:53:23  -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 ))
00:02:55.062   03:53:23  -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:55.321    03:53:23  -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:02:55.321     03:53:23  -- common/autotest_common.sh@1711 -- # lcov --version
00:02:55.321     03:53:23  -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:02:55.321    03:53:23  -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:02:55.321    03:53:23  -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:02:55.321    03:53:23  -- scripts/common.sh@333 -- # local ver1 ver1_l
00:02:55.321    03:53:23  -- scripts/common.sh@334 -- # local ver2 ver2_l
00:02:55.321    03:53:23  -- scripts/common.sh@336 -- # IFS=.-:
00:02:55.321    03:53:23  -- scripts/common.sh@336 -- # read -ra ver1
00:02:55.321    03:53:23  -- scripts/common.sh@337 -- # IFS=.-:
00:02:55.321    03:53:23  -- scripts/common.sh@337 -- # read -ra ver2
00:02:55.321    03:53:23  -- scripts/common.sh@338 -- # local 'op=<'
00:02:55.321    03:53:23  -- scripts/common.sh@340 -- # ver1_l=2
00:02:55.321    03:53:23  -- scripts/common.sh@341 -- # ver2_l=1
00:02:55.321    03:53:23  -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:02:55.321    03:53:23  -- scripts/common.sh@344 -- # case "$op" in
00:02:55.321    03:53:23  -- scripts/common.sh@345 -- # : 1
00:02:55.321    03:53:23  -- scripts/common.sh@364 -- # (( v = 0 ))
00:02:55.321    03:53:23  -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:55.321     03:53:23  -- scripts/common.sh@365 -- # decimal 1
00:02:55.321     03:53:23  -- scripts/common.sh@353 -- # local d=1
00:02:55.321     03:53:23  -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:02:55.321     03:53:23  -- scripts/common.sh@355 -- # echo 1
00:02:55.321    03:53:23  -- scripts/common.sh@365 -- # ver1[v]=1
00:02:55.321     03:53:23  -- scripts/common.sh@366 -- # decimal 2
00:02:55.321     03:53:23  -- scripts/common.sh@353 -- # local d=2
00:02:55.321     03:53:23  -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:02:55.321     03:53:23  -- scripts/common.sh@355 -- # echo 2
00:02:55.322    03:53:23  -- scripts/common.sh@366 -- # ver2[v]=2
00:02:55.322    03:53:23  -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:02:55.322    03:53:23  -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:02:55.322    03:53:23  -- scripts/common.sh@368 -- # return 0
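The trace above is scripts/common.sh evaluating `lt 1.15 2`: both versions are split on `.-:`, padded to equal length, and compared field by field. A self-contained sketch of that comparison (the function name `cmp_lt` is illustrative, not the script's actual helper):

```shell
#!/usr/bin/env bash
# Dotted-version less-than, mirroring the cmp_versions walk above:
# split on . - :, pad the shorter list with zeros, compare numerically.
cmp_lt() {
    local IFS=.-:
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    local i a b
    for (( i = 0; i < len; i++ )); do
        a=${v1[i]:-0} b=${v2[i]:-0}        # pad missing fields with 0
        if   (( a < b )); then return 0
        elif (( a > b )); then return 1
        fi
    done
    return 1                               # equal is not less-than
}

cmp_lt 1.15 2 && echo "1.15 < 2"
```

Numeric field comparison is why `1.15 < 2` holds here, where a plain string compare would get `1.15` vs `2` wrong in the other direction for inputs like `1.9` vs `1.10`.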
00:02:55.322    03:53:23  -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:02:55.322    03:53:23  -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:02:55.322  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:02:55.322  		--rc genhtml_branch_coverage=1
00:02:55.322  		--rc genhtml_function_coverage=1
00:02:55.322  		--rc genhtml_legend=1
00:02:55.322  		--rc geninfo_all_blocks=1
00:02:55.322  		--rc geninfo_unexecuted_blocks=1
00:02:55.322  		
00:02:55.322  		'
00:02:55.322    03:53:23  -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:02:55.322  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:02:55.322  		--rc genhtml_branch_coverage=1
00:02:55.322  		--rc genhtml_function_coverage=1
00:02:55.322  		--rc genhtml_legend=1
00:02:55.322  		--rc geninfo_all_blocks=1
00:02:55.322  		--rc geninfo_unexecuted_blocks=1
00:02:55.322  		
00:02:55.322  		'
00:02:55.322    03:53:23  -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:02:55.322  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:02:55.322  		--rc genhtml_branch_coverage=1
00:02:55.322  		--rc genhtml_function_coverage=1
00:02:55.322  		--rc genhtml_legend=1
00:02:55.322  		--rc geninfo_all_blocks=1
00:02:55.322  		--rc geninfo_unexecuted_blocks=1
00:02:55.322  		
00:02:55.322  		'
00:02:55.322    03:53:23  -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:02:55.322  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:02:55.322  		--rc genhtml_branch_coverage=1
00:02:55.322  		--rc genhtml_function_coverage=1
00:02:55.322  		--rc genhtml_legend=1
00:02:55.322  		--rc geninfo_all_blocks=1
00:02:55.322  		--rc geninfo_unexecuted_blocks=1
00:02:55.322  		
00:02:55.322  		'
00:02:55.322   03:53:23  -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:02:55.322     03:53:23  -- nvmf/common.sh@7 -- # uname -s
00:02:55.322    03:53:23  -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:02:55.322    03:53:23  -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:02:55.322    03:53:23  -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:02:55.322    03:53:23  -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:02:55.322    03:53:23  -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:02:55.322    03:53:23  -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:02:55.322    03:53:23  -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:02:55.322    03:53:23  -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:02:55.322    03:53:23  -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:02:55.322     03:53:23  -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:02:55.322    03:53:23  -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:02:55.322    03:53:23  -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:02:55.322    03:53:23  -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:02:55.322    03:53:23  -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:02:55.322    03:53:23  -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:02:55.322    03:53:23  -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:02:55.322    03:53:23  -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:02:55.322     03:53:23  -- scripts/common.sh@15 -- # shopt -s extglob
00:02:55.322     03:53:23  -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:02:55.322     03:53:23  -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:55.322     03:53:23  -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:55.322      03:53:23  -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:55.322      03:53:23  -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:55.322      03:53:23  -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:55.322      03:53:23  -- paths/export.sh@5 -- # export PATH
00:02:55.322      03:53:23  -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:55.322    03:53:23  -- nvmf/common.sh@51 -- # : 0
00:02:55.322    03:53:23  -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:02:55.322    03:53:23  -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:02:55.322    03:53:23  -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:02:55.322    03:53:23  -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:02:55.322    03:53:23  -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:02:55.322    03:53:23  -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:02:55.322  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:02:55.322    03:53:23  -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:02:55.322    03:53:23  -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:02:55.322    03:53:23  -- nvmf/common.sh@55 -- # have_pci_nics=0
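The sourcing above logs a genuine shell bug: `'[' '' -eq 1 ']'` at nvmf/common.sh line 33 yields `[: : integer expression expected`, because an empty variable reaches a numeric test. A sketch of the usual guard, with a default expansion (the variable name here is illustrative, not the script's actual one):

```shell
#!/usr/bin/env bash
# An empty string in a numeric test ('[' '' -eq 1 ']') is a runtime
# error, as the log above shows. Defaulting the expansion to 0 keeps
# the test well-formed whether or not the flag was ever set.
flag=""                                    # empty, as in the failing run

if [ "${flag:-0}" -eq 1 ]; then            # ':-0' guards the numeric test
    result="feature enabled"
else
    result="feature disabled"
fi
echo "$result"
```

The same effect can be had with `[[ -n $flag && $flag -eq 1 ]]`; either way the point is that `-eq` must never see an empty operand.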
00:02:55.322   03:53:23  -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:02:55.322    03:53:23  -- spdk/autotest.sh@32 -- # uname -s
00:02:55.322   03:53:23  -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:02:55.322   03:53:23  -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:02:55.322   03:53:23  -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:02:55.322   03:53:23  -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:02:55.322   03:53:23  -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:02:55.322   03:53:23  -- spdk/autotest.sh@44 -- # modprobe nbd
00:02:55.322    03:53:23  -- spdk/autotest.sh@46 -- # type -P udevadm
00:02:55.322   03:53:23  -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:02:55.322   03:53:23  -- spdk/autotest.sh@48 -- # udevadm_pid=90236
00:02:55.322   03:53:23  -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:02:55.322   03:53:23  -- spdk/autotest.sh@53 -- # start_monitor_resources
00:02:55.322   03:53:23  -- pm/common@17 -- # local monitor
00:02:55.322   03:53:23  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:55.322   03:53:23  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:55.322   03:53:23  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:55.322    03:53:23  -- pm/common@21 -- # date +%s
00:02:55.322   03:53:23  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:55.322    03:53:23  -- pm/common@21 -- # date +%s
00:02:55.322   03:53:23  -- pm/common@25 -- # sleep 1
00:02:55.322    03:53:23  -- pm/common@21 -- # date +%s
00:02:55.322    03:53:23  -- pm/common@21 -- # date +%s
00:02:55.322   03:53:23  -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733712803
00:02:55.322   03:53:23  -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733712803
00:02:55.322   03:53:23  -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733712803
00:02:55.322   03:53:23  -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733712803
00:02:55.583  Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733712803_collect-vmstat.pm.log
00:02:55.583  Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733712803_collect-cpu-load.pm.log
00:02:55.583  Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733712803_collect-cpu-temp.pm.log
00:02:55.583  Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733712803_collect-bmc-pm.bmc.pm.log
00:02:56.524   03:53:24  -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:02:56.524   03:53:24  -- spdk/autotest.sh@57 -- # timing_enter autotest
00:02:56.524   03:53:24  -- common/autotest_common.sh@726 -- # xtrace_disable
00:02:56.524   03:53:24  -- common/autotest_common.sh@10 -- # set +x
00:02:56.524   03:53:24  -- spdk/autotest.sh@59 -- # create_test_list
00:02:56.524   03:53:24  -- common/autotest_common.sh@752 -- # xtrace_disable
00:02:56.524   03:53:24  -- common/autotest_common.sh@10 -- # set +x
00:02:56.524     03:53:24  -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh
00:02:56.524    03:53:24  -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:56.524   03:53:24  -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:56.524   03:53:24  -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:02:56.524   03:53:24  -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:56.524   03:53:24  -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:02:56.524    03:53:24  -- common/autotest_common.sh@1457 -- # uname
00:02:56.524   03:53:24  -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']'
00:02:56.524   03:53:24  -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:02:56.524    03:53:24  -- common/autotest_common.sh@1477 -- # uname
00:02:56.524   03:53:24  -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]]
00:02:56.524   03:53:24  -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:02:56.524   03:53:24  -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:02:56.524  lcov: LCOV version 1.15
00:02:56.524   03:53:25  -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info
00:03:14.613  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:03:14.613  geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:03:36.528   03:54:02  -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:03:36.528   03:54:02  -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:36.528   03:54:02  -- common/autotest_common.sh@10 -- # set +x
00:03:36.528   03:54:02  -- spdk/autotest.sh@78 -- # rm -f
00:03:36.528   03:54:02  -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:36.528  0000:88:00.0 (8086 0a54): Already using the nvme driver
00:03:36.528  0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:03:36.528  0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:03:36.528  0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:03:36.528  0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:03:36.528  0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:03:36.528  0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:03:36.528  0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:03:36.528  0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:03:36.528  0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:03:36.528  0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:03:36.528  0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:03:36.528  0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:03:36.528  0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:03:36.528  0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:03:36.528  0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:03:36.528  0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:03:36.528   03:54:03  -- spdk/autotest.sh@83 -- # get_zoned_devs
00:03:36.528   03:54:03  -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:03:36.528   03:54:03  -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:03:36.528   03:54:03  -- common/autotest_common.sh@1658 -- # zoned_ctrls=()
00:03:36.528   03:54:03  -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls
00:03:36.528   03:54:03  -- common/autotest_common.sh@1659 -- # local nvme bdf ns
00:03:36.528   03:54:03  -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:03:36.528   03:54:03  -- common/autotest_common.sh@1669 -- # bdf=0000:88:00.0
00:03:36.528   03:54:03  -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:03:36.528   03:54:03  -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1
00:03:36.528   03:54:03  -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:03:36.528   03:54:03  -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:36.528   03:54:03  -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:03:36.528   03:54:03  -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:03:36.528   03:54:03  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:03:36.528   03:54:03  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:03:36.528   03:54:03  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:03:36.528   03:54:03  -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:03:36.528   03:54:03  -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:36.528  No valid GPT data, bailing
00:03:36.528    03:54:03  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:36.528   03:54:03  -- scripts/common.sh@394 -- # pt=
00:03:36.528   03:54:03  -- scripts/common.sh@395 -- # return 1
00:03:36.528   03:54:03  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:36.528  1+0 records in
00:03:36.528  1+0 records out
00:03:36.528  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00207613 s, 505 MB/s
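The sequence above is autotest's `block_in_use` check: spdk-gpt.py and `blkid` both find no partition table ("No valid GPT data, bailing"), so the first MiB of the namespace is zeroed with `dd`. A safe-to-run sketch of that check-then-wipe step against a temp file rather than the real `/dev/nvme0n1`:

```shell
#!/usr/bin/env bash
# Check for a partition table, and only wipe when none is found.
# A 2 MiB temp file stands in for the block device.
dev=$(mktemp)                                   # stand-in for /dev/nvme0n1
dd if=/dev/urandom of="$dev" bs=1M count=2 status=none

pt=$(blkid -s PTTYPE -o value "$dev" 2>/dev/null || true)
if [ -z "$pt" ]; then                           # no partition table found
    dd if=/dev/zero of="$dev" bs=1M count=1 conv=notrunc status=none
fi
```

`conv=notrunc` matters on a file-backed stand-in (it preserves the remaining data past the wiped MiB); on a real block device `dd` cannot truncate anyway.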
00:03:36.528   03:54:03  -- spdk/autotest.sh@105 -- # sync
00:03:36.528   03:54:03  -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:36.528   03:54:03  -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:36.528    03:54:03  -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:37.463    03:54:05  -- spdk/autotest.sh@111 -- # uname -s
00:03:37.463   03:54:05  -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:03:37.463   03:54:05  -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:03:37.463   03:54:05  -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:38.837  Hugepages
00:03:38.837  node     hugesize     free /  total
00:03:38.837  node0   1048576kB        0 /      0
00:03:38.837  node0      2048kB        0 /      0
00:03:38.837  node1   1048576kB        0 /      0
00:03:38.837  node1      2048kB        0 /      0
00:03:38.837  
00:03:38.837  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:03:38.837  I/OAT                     0000:00:04.0    8086   0e20   0       ioatdma          -          -
00:03:38.837  I/OAT                     0000:00:04.1    8086   0e21   0       ioatdma          -          -
00:03:38.837  I/OAT                     0000:00:04.2    8086   0e22   0       ioatdma          -          -
00:03:38.837  I/OAT                     0000:00:04.3    8086   0e23   0       ioatdma          -          -
00:03:38.837  I/OAT                     0000:00:04.4    8086   0e24   0       ioatdma          -          -
00:03:38.837  I/OAT                     0000:00:04.5    8086   0e25   0       ioatdma          -          -
00:03:38.837  I/OAT                     0000:00:04.6    8086   0e26   0       ioatdma          -          -
00:03:38.837  I/OAT                     0000:00:04.7    8086   0e27   0       ioatdma          -          -
00:03:38.837  I/OAT                     0000:80:04.0    8086   0e20   1       ioatdma          -          -
00:03:38.837  I/OAT                     0000:80:04.1    8086   0e21   1       ioatdma          -          -
00:03:38.837  I/OAT                     0000:80:04.2    8086   0e22   1       ioatdma          -          -
00:03:38.837  I/OAT                     0000:80:04.3    8086   0e23   1       ioatdma          -          -
00:03:38.837  I/OAT                     0000:80:04.4    8086   0e24   1       ioatdma          -          -
00:03:38.837  I/OAT                     0000:80:04.5    8086   0e25   1       ioatdma          -          -
00:03:38.837  I/OAT                     0000:80:04.6    8086   0e26   1       ioatdma          -          -
00:03:38.837  I/OAT                     0000:80:04.7    8086   0e27   1       ioatdma          -          -
00:03:38.837  NVMe                      0000:88:00.0    8086   0a54   1       nvme             nvme0      nvme0n1
00:03:38.837    03:54:07  -- spdk/autotest.sh@117 -- # uname -s
00:03:38.837   03:54:07  -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:03:38.837   03:54:07  -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:03:38.837   03:54:07  -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:40.216  0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:40.216  0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:40.216  0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:40.216  0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:40.216  0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:40.216  0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:40.216  0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:40.216  0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:40.216  0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:40.216  0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:40.216  0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:40.216  0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:40.216  0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:40.216  0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:40.216  0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:40.216  0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:41.155  0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:03:41.415   03:54:09  -- common/autotest_common.sh@1517 -- # sleep 1
00:03:42.355   03:54:10  -- common/autotest_common.sh@1518 -- # bdfs=()
00:03:42.355   03:54:10  -- common/autotest_common.sh@1518 -- # local bdfs
00:03:42.355   03:54:10  -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:03:42.355    03:54:10  -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:03:42.355    03:54:10  -- common/autotest_common.sh@1498 -- # bdfs=()
00:03:42.355    03:54:10  -- common/autotest_common.sh@1498 -- # local bdfs
00:03:42.355    03:54:10  -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:42.355     03:54:10  -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:03:42.355     03:54:10  -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:03:42.355    03:54:10  -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:03:42.355    03:54:10  -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0
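`get_nvme_bdfs` above builds its BDF list by piping gen_nvme.sh's JSON config through `jq -r '.config[].params.traddr'`. A sketch of that extraction with a stub JSON document shaped like this run's single-controller output (requires `jq`; the stub is illustrative, not gen_nvme.sh's full schema):

```shell
#!/usr/bin/env bash
# Extract NVMe PCI addresses (BDFs) from a gen_nvme.sh-style JSON
# config, as the traced get_nvme_bdfs helper does.
config='{"config":[{"params":{"traddr":"0000:88:00.0"}}]}'

mapfile -t bdfs < <(jq -r '.config[].params.traddr' <<< "$config")

(( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
printf '%s\n' "${bdfs[@]}"
```

The `(( 1 == 0 ))` guard in the trace plays the same role as the count check here: an empty BDF list means no controllers were enumerated and the caller should bail out.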
00:03:42.355   03:54:10  -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:43.734  Waiting for block devices as requested
00:03:43.734  0000:88:00.0 (8086 0a54): vfio-pci -> nvme
00:03:43.734  0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:03:43.734  0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:03:43.992  0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:03:43.992  0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:03:43.992  0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:03:43.992  0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:03:44.251  0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:03:44.251  0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:03:44.251  0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:03:44.251  0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:03:44.508  0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:03:44.509  0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:03:44.509  0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:03:44.771  0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:03:44.771  0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:03:44.771  0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:03:44.771   03:54:13  -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:03:44.771    03:54:13  -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0
00:03:45.029     03:54:13  -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0
00:03:45.029     03:54:13  -- common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme
00:03:45.029    03:54:13  -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0
00:03:45.029    03:54:13  -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]]
00:03:45.029     03:54:13  -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0
00:03:45.029    03:54:13  -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0
00:03:45.029   03:54:13  -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0
00:03:45.029   03:54:13  -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]]
00:03:45.029    03:54:13  -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0
00:03:45.029    03:54:13  -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:03:45.029    03:54:13  -- common/autotest_common.sh@1531 -- # grep oacs
00:03:45.029   03:54:13  -- common/autotest_common.sh@1531 -- # oacs=' 0xf'
00:03:45.029   03:54:13  -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:03:45.029   03:54:13  -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:03:45.029    03:54:13  -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:03:45.029    03:54:13  -- common/autotest_common.sh@1540 -- # grep unvmcap
00:03:45.029    03:54:13  -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:03:45.029   03:54:13  -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:03:45.029   03:54:13  -- common/autotest_common.sh@1541 -- # [[  0 -eq 0 ]]
00:03:45.029   03:54:13  -- common/autotest_common.sh@1543 -- # continue
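The cleanup path above parses OACS (Optional Admin Command Support) out of `nvme id-ctrl` with `grep`/`cut`, then masks bit 3 (0x8, Namespace Management): the trace shows `oacs=' 0xf'` becoming `oacs_ns_manage=8`. A sketch of that parse with the id-ctrl line stubbed, since a real run needs nvme-cli and root:

```shell
#!/usr/bin/env bash
# Parse the OACS field and test the Namespace Management bit (0x8),
# as the traced cleanup does. The id-ctrl line is stubbed; a real run
# would pipe from `nvme id-ctrl /dev/nvme0`.
id_ctrl_line="oacs      : 0xf"              # stubbed controller field

oacs=$(cut -d: -f2 <<< "$id_ctrl_line")     # -> ' 0xf'
oacs_ns_manage=$(( oacs & 0x8 ))            # isolate bit 3

if [ "$oacs_ns_manage" -ne 0 ]; then
    echo "namespace management supported"
fi
```

Bash arithmetic accepts the `0x` prefix and leading whitespace directly, which is why the trace can mask the raw `cut` output without further cleanup.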
00:03:45.029   03:54:13  -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:03:45.029   03:54:13  -- common/autotest_common.sh@732 -- # xtrace_disable
00:03:45.029   03:54:13  -- common/autotest_common.sh@10 -- # set +x
00:03:45.029   03:54:13  -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:03:45.029   03:54:13  -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:45.029   03:54:13  -- common/autotest_common.sh@10 -- # set +x
00:03:45.029   03:54:13  -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:46.419  0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:46.419  0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:46.419  0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:46.419  0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:46.419  0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:46.419  0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:46.419  0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:46.419  0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:46.419  0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:46.419  0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:46.419  0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:46.419  0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:46.419  0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:46.419  0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:46.419  0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:46.419  0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:47.353  0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:03:47.353   03:54:15  -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:03:47.353   03:54:15  -- common/autotest_common.sh@732 -- # xtrace_disable
00:03:47.353   03:54:15  -- common/autotest_common.sh@10 -- # set +x
00:03:47.353   03:54:15  -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:03:47.353   03:54:15  -- common/autotest_common.sh@1578 -- # mapfile -t bdfs
00:03:47.353    03:54:15  -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54
00:03:47.353    03:54:15  -- common/autotest_common.sh@1563 -- # bdfs=()
00:03:47.353    03:54:15  -- common/autotest_common.sh@1563 -- # _bdfs=()
00:03:47.353    03:54:15  -- common/autotest_common.sh@1563 -- # local bdfs _bdfs
00:03:47.353    03:54:15  -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs))
00:03:47.353     03:54:15  -- common/autotest_common.sh@1564 -- # get_nvme_bdfs
00:03:47.353     03:54:15  -- common/autotest_common.sh@1498 -- # bdfs=()
00:03:47.353     03:54:15  -- common/autotest_common.sh@1498 -- # local bdfs
00:03:47.353     03:54:15  -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:47.353      03:54:15  -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:03:47.353      03:54:15  -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:03:47.353     03:54:15  -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:03:47.353     03:54:15  -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0
00:03:47.353    03:54:15  -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:03:47.353     03:54:15  -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:88:00.0/device
00:03:47.353    03:54:15  -- common/autotest_common.sh@1566 -- # device=0x0a54
00:03:47.353    03:54:15  -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]]
00:03:47.353    03:54:15  -- common/autotest_common.sh@1568 -- # bdfs+=($bdf)
00:03:47.353    03:54:15  -- common/autotest_common.sh@1572 -- # (( 1 > 0 ))
00:03:47.353    03:54:15  -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:88:00.0
00:03:47.353   03:54:15  -- common/autotest_common.sh@1579 -- # [[ -z 0000:88:00.0 ]]
00:03:47.353   03:54:15  -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=101286
00:03:47.353   03:54:15  -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:03:47.353   03:54:15  -- common/autotest_common.sh@1585 -- # waitforlisten 101286
00:03:47.353   03:54:15  -- common/autotest_common.sh@835 -- # '[' -z 101286 ']'
00:03:47.353   03:54:15  -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:03:47.353   03:54:15  -- common/autotest_common.sh@840 -- # local max_retries=100
00:03:47.353   03:54:15  -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:03:47.353  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:03:47.353   03:54:15  -- common/autotest_common.sh@844 -- # xtrace_disable
00:03:47.353   03:54:15  -- common/autotest_common.sh@10 -- # set +x
00:03:47.613  [2024-12-09 03:54:15.962795] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:03:47.613  [2024-12-09 03:54:15.962870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101286 ]
00:03:47.613  [2024-12-09 03:54:16.029412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:47.613  [2024-12-09 03:54:16.088802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:03:47.871   03:54:16  -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:03:47.871   03:54:16  -- common/autotest_common.sh@868 -- # return 0
00:03:47.871   03:54:16  -- common/autotest_common.sh@1587 -- # bdf_id=0
00:03:47.871   03:54:16  -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}"
00:03:47.871   03:54:16  -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0
00:03:51.160  nvme0n1
00:03:51.160   03:54:19  -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:03:51.160  [2024-12-09 03:54:19.698908] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18
00:03:51.160  [2024-12-09 03:54:19.698952] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18
00:03:51.160  request:
00:03:51.161  {
00:03:51.161    "nvme_ctrlr_name": "nvme0",
00:03:51.161    "password": "test",
00:03:51.161    "method": "bdev_nvme_opal_revert",
00:03:51.161    "req_id": 1
00:03:51.161  }
00:03:51.161  Got JSON-RPC error response
00:03:51.161  response:
00:03:51.161  {
00:03:51.161    "code": -32603,
00:03:51.161    "message": "Internal error"
00:03:51.161  }
00:03:51.161   03:54:19  -- common/autotest_common.sh@1591 -- # true
00:03:51.161   03:54:19  -- common/autotest_common.sh@1592 -- # (( ++bdf_id ))
00:03:51.161   03:54:19  -- common/autotest_common.sh@1595 -- # killprocess 101286
00:03:51.161   03:54:19  -- common/autotest_common.sh@954 -- # '[' -z 101286 ']'
00:03:51.161   03:54:19  -- common/autotest_common.sh@958 -- # kill -0 101286
00:03:51.161    03:54:19  -- common/autotest_common.sh@959 -- # uname
00:03:51.161   03:54:19  -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:03:51.161    03:54:19  -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101286
00:03:51.419   03:54:19  -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:03:51.419   03:54:19  -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:03:51.419   03:54:19  -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101286'
00:03:51.419  killing process with pid 101286
00:03:51.419   03:54:19  -- common/autotest_common.sh@973 -- # kill 101286
00:03:51.419   03:54:19  -- common/autotest_common.sh@978 -- # wait 101286
00:03:53.323   03:54:21  -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:03:53.323   03:54:21  -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:03:53.323   03:54:21  -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:03:53.323   03:54:21  -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:03:53.323   03:54:21  -- spdk/autotest.sh@149 -- # timing_enter lib
00:03:53.323   03:54:21  -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:53.323   03:54:21  -- common/autotest_common.sh@10 -- # set +x
00:03:53.323   03:54:21  -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:03:53.323   03:54:21  -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:03:53.323   03:54:21  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:53.323   03:54:21  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:53.323   03:54:21  -- common/autotest_common.sh@10 -- # set +x
00:03:53.323  ************************************
00:03:53.323  START TEST env
00:03:53.323  ************************************
00:03:53.323   03:54:21 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:03:53.323  * Looking for test storage...
00:03:53.323  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env
00:03:53.323    03:54:21 env -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:03:53.323     03:54:21 env -- common/autotest_common.sh@1711 -- # lcov --version
00:03:53.323     03:54:21 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:03:53.323    03:54:21 env -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:03:53.323    03:54:21 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:53.323    03:54:21 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:53.323    03:54:21 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:53.323    03:54:21 env -- scripts/common.sh@336 -- # IFS=.-:
00:03:53.323    03:54:21 env -- scripts/common.sh@336 -- # read -ra ver1
00:03:53.323    03:54:21 env -- scripts/common.sh@337 -- # IFS=.-:
00:03:53.323    03:54:21 env -- scripts/common.sh@337 -- # read -ra ver2
00:03:53.323    03:54:21 env -- scripts/common.sh@338 -- # local 'op=<'
00:03:53.323    03:54:21 env -- scripts/common.sh@340 -- # ver1_l=2
00:03:53.323    03:54:21 env -- scripts/common.sh@341 -- # ver2_l=1
00:03:53.323    03:54:21 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:53.323    03:54:21 env -- scripts/common.sh@344 -- # case "$op" in
00:03:53.323    03:54:21 env -- scripts/common.sh@345 -- # : 1
00:03:53.323    03:54:21 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:53.323    03:54:21 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:53.323     03:54:21 env -- scripts/common.sh@365 -- # decimal 1
00:03:53.323     03:54:21 env -- scripts/common.sh@353 -- # local d=1
00:03:53.323     03:54:21 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:53.323     03:54:21 env -- scripts/common.sh@355 -- # echo 1
00:03:53.323    03:54:21 env -- scripts/common.sh@365 -- # ver1[v]=1
00:03:53.323     03:54:21 env -- scripts/common.sh@366 -- # decimal 2
00:03:53.323     03:54:21 env -- scripts/common.sh@353 -- # local d=2
00:03:53.323     03:54:21 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:53.323     03:54:21 env -- scripts/common.sh@355 -- # echo 2
00:03:53.323    03:54:21 env -- scripts/common.sh@366 -- # ver2[v]=2
00:03:53.323    03:54:21 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:53.323    03:54:21 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:53.323    03:54:21 env -- scripts/common.sh@368 -- # return 0
00:03:53.323    03:54:21 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:53.323    03:54:21 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:03:53.323  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:53.323  		--rc genhtml_branch_coverage=1
00:03:53.323  		--rc genhtml_function_coverage=1
00:03:53.323  		--rc genhtml_legend=1
00:03:53.323  		--rc geninfo_all_blocks=1
00:03:53.323  		--rc geninfo_unexecuted_blocks=1
00:03:53.323  		
00:03:53.323  		'
00:03:53.323    03:54:21 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:03:53.323  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:53.323  		--rc genhtml_branch_coverage=1
00:03:53.323  		--rc genhtml_function_coverage=1
00:03:53.323  		--rc genhtml_legend=1
00:03:53.323  		--rc geninfo_all_blocks=1
00:03:53.323  		--rc geninfo_unexecuted_blocks=1
00:03:53.323  		
00:03:53.323  		'
00:03:53.323    03:54:21 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:03:53.323  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:53.323  		--rc genhtml_branch_coverage=1
00:03:53.323  		--rc genhtml_function_coverage=1
00:03:53.323  		--rc genhtml_legend=1
00:03:53.323  		--rc geninfo_all_blocks=1
00:03:53.323  		--rc geninfo_unexecuted_blocks=1
00:03:53.323  		
00:03:53.323  		'
00:03:53.323    03:54:21 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:03:53.323  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:53.323  		--rc genhtml_branch_coverage=1
00:03:53.323  		--rc genhtml_function_coverage=1
00:03:53.323  		--rc genhtml_legend=1
00:03:53.323  		--rc geninfo_all_blocks=1
00:03:53.323  		--rc geninfo_unexecuted_blocks=1
00:03:53.323  		
00:03:53.323  		'
00:03:53.323   03:54:21 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:03:53.323   03:54:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:53.323   03:54:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:53.323   03:54:21 env -- common/autotest_common.sh@10 -- # set +x
00:03:53.323  ************************************
00:03:53.323  START TEST env_memory
00:03:53.323  ************************************
00:03:53.323   03:54:21 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:03:53.323  
00:03:53.323  
00:03:53.323       CUnit - A unit testing framework for C - Version 2.1-3
00:03:53.323       http://cunit.sourceforge.net/
00:03:53.323  
00:03:53.323  
00:03:53.323  Suite: memory
00:03:53.324    Test: alloc and free memory map ...[2024-12-09 03:54:21.771214] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:03:53.324  passed
00:03:53.324    Test: mem map translation ...[2024-12-09 03:54:21.791343] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:03:53.324  [2024-12-09 03:54:21.791364] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:03:53.324  [2024-12-09 03:54:21.791420] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:03:53.324  [2024-12-09 03:54:21.791432] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:03:53.324  passed
00:03:53.324    Test: mem map registration ...[2024-12-09 03:54:21.836531] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:03:53.324  [2024-12-09 03:54:21.836562] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:03:53.324  passed
00:03:53.324    Test: mem map adjacent registrations ...passed
00:03:53.324  
00:03:53.324  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:53.324                suites      1      1    n/a      0        0
00:03:53.324                 tests      4      4      4      0        0
00:03:53.324               asserts    152    152    152      0      n/a
00:03:53.324  
00:03:53.324  Elapsed time =    0.146 seconds
00:03:53.324  
00:03:53.324  real	0m0.154s
00:03:53.324  user	0m0.145s
00:03:53.324  sys	0m0.008s
00:03:53.324   03:54:21 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:53.324   03:54:21 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:03:53.324  ************************************
00:03:53.324  END TEST env_memory
00:03:53.324  ************************************
00:03:53.582   03:54:21 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:03:53.582   03:54:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:53.582   03:54:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:53.582   03:54:21 env -- common/autotest_common.sh@10 -- # set +x
00:03:53.582  ************************************
00:03:53.582  START TEST env_vtophys
00:03:53.582  ************************************
00:03:53.582   03:54:21 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:03:53.582  EAL: lib.eal log level changed from notice to debug
00:03:53.582  EAL: Detected lcore 0 as core 0 on socket 0
00:03:53.582  EAL: Detected lcore 1 as core 1 on socket 0
00:03:53.582  EAL: Detected lcore 2 as core 2 on socket 0
00:03:53.583  EAL: Detected lcore 3 as core 3 on socket 0
00:03:53.583  EAL: Detected lcore 4 as core 4 on socket 0
00:03:53.583  EAL: Detected lcore 5 as core 5 on socket 0
00:03:53.583  EAL: Detected lcore 6 as core 8 on socket 0
00:03:53.583  EAL: Detected lcore 7 as core 9 on socket 0
00:03:53.583  EAL: Detected lcore 8 as core 10 on socket 0
00:03:53.583  EAL: Detected lcore 9 as core 11 on socket 0
00:03:53.583  EAL: Detected lcore 10 as core 12 on socket 0
00:03:53.583  EAL: Detected lcore 11 as core 13 on socket 0
00:03:53.583  EAL: Detected lcore 12 as core 0 on socket 1
00:03:53.583  EAL: Detected lcore 13 as core 1 on socket 1
00:03:53.583  EAL: Detected lcore 14 as core 2 on socket 1
00:03:53.583  EAL: Detected lcore 15 as core 3 on socket 1
00:03:53.583  EAL: Detected lcore 16 as core 4 on socket 1
00:03:53.583  EAL: Detected lcore 17 as core 5 on socket 1
00:03:53.583  EAL: Detected lcore 18 as core 8 on socket 1
00:03:53.583  EAL: Detected lcore 19 as core 9 on socket 1
00:03:53.583  EAL: Detected lcore 20 as core 10 on socket 1
00:03:53.583  EAL: Detected lcore 21 as core 11 on socket 1
00:03:53.583  EAL: Detected lcore 22 as core 12 on socket 1
00:03:53.583  EAL: Detected lcore 23 as core 13 on socket 1
00:03:53.583  EAL: Detected lcore 24 as core 0 on socket 0
00:03:53.583  EAL: Detected lcore 25 as core 1 on socket 0
00:03:53.583  EAL: Detected lcore 26 as core 2 on socket 0
00:03:53.583  EAL: Detected lcore 27 as core 3 on socket 0
00:03:53.583  EAL: Detected lcore 28 as core 4 on socket 0
00:03:53.583  EAL: Detected lcore 29 as core 5 on socket 0
00:03:53.583  EAL: Detected lcore 30 as core 8 on socket 0
00:03:53.583  EAL: Detected lcore 31 as core 9 on socket 0
00:03:53.583  EAL: Detected lcore 32 as core 10 on socket 0
00:03:53.583  EAL: Detected lcore 33 as core 11 on socket 0
00:03:53.583  EAL: Detected lcore 34 as core 12 on socket 0
00:03:53.583  EAL: Detected lcore 35 as core 13 on socket 0
00:03:53.583  EAL: Detected lcore 36 as core 0 on socket 1
00:03:53.583  EAL: Detected lcore 37 as core 1 on socket 1
00:03:53.583  EAL: Detected lcore 38 as core 2 on socket 1
00:03:53.583  EAL: Detected lcore 39 as core 3 on socket 1
00:03:53.583  EAL: Detected lcore 40 as core 4 on socket 1
00:03:53.583  EAL: Detected lcore 41 as core 5 on socket 1
00:03:53.583  EAL: Detected lcore 42 as core 8 on socket 1
00:03:53.583  EAL: Detected lcore 43 as core 9 on socket 1
00:03:53.583  EAL: Detected lcore 44 as core 10 on socket 1
00:03:53.583  EAL: Detected lcore 45 as core 11 on socket 1
00:03:53.583  EAL: Detected lcore 46 as core 12 on socket 1
00:03:53.583  EAL: Detected lcore 47 as core 13 on socket 1
00:03:53.583  EAL: Maximum logical cores by configuration: 128
00:03:53.583  EAL: Detected CPU lcores: 48
00:03:53.583  EAL: Detected NUMA nodes: 2
00:03:53.583  EAL: Checking presence of .so 'librte_eal.so.24.1'
00:03:53.583  EAL: Detected shared linkage of DPDK
00:03:53.583  EAL: No shared files mode enabled, IPC will be disabled
00:03:53.583  EAL: Bus pci wants IOVA as 'DC'
00:03:53.583  EAL: Buses did not request a specific IOVA mode.
00:03:53.583  EAL: IOMMU is available, selecting IOVA as VA mode.
00:03:53.583  EAL: Selected IOVA mode 'VA'
00:03:53.583  EAL: Probing VFIO support...
00:03:53.583  EAL: IOMMU type 1 (Type 1) is supported
00:03:53.583  EAL: IOMMU type 7 (sPAPR) is not supported
00:03:53.583  EAL: IOMMU type 8 (No-IOMMU) is not supported
00:03:53.583  EAL: VFIO support initialized
00:03:53.583  EAL: Ask a virtual area of 0x2e000 bytes
00:03:53.583  EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:03:53.583  EAL: Setting up physically contiguous memory...
00:03:53.583  EAL: Setting maximum number of open files to 524288
00:03:53.583  EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:03:53.583  EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:03:53.583  EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:03:53.583  EAL: Ask a virtual area of 0x61000 bytes
00:03:53.583  EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:03:53.583  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:53.583  EAL: Ask a virtual area of 0x400000000 bytes
00:03:53.583  EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:03:53.583  EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:03:53.583  EAL: Ask a virtual area of 0x61000 bytes
00:03:53.583  EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:03:53.583  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:53.583  EAL: Ask a virtual area of 0x400000000 bytes
00:03:53.583  EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:03:53.583  EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:03:53.583  EAL: Ask a virtual area of 0x61000 bytes
00:03:53.583  EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:03:53.583  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:53.583  EAL: Ask a virtual area of 0x400000000 bytes
00:03:53.583  EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:03:53.583  EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:03:53.583  EAL: Ask a virtual area of 0x61000 bytes
00:03:53.583  EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:03:53.583  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:53.583  EAL: Ask a virtual area of 0x400000000 bytes
00:03:53.583  EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:03:53.583  EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:03:53.583  EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:03:53.583  EAL: Ask a virtual area of 0x61000 bytes
00:03:53.583  EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:03:53.583  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:53.583  EAL: Ask a virtual area of 0x400000000 bytes
00:03:53.583  EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:03:53.583  EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:03:53.583  EAL: Ask a virtual area of 0x61000 bytes
00:03:53.583  EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:03:53.583  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:53.583  EAL: Ask a virtual area of 0x400000000 bytes
00:03:53.583  EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:03:53.583  EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:03:53.583  EAL: Ask a virtual area of 0x61000 bytes
00:03:53.583  EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:03:53.583  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:53.583  EAL: Ask a virtual area of 0x400000000 bytes
00:03:53.583  EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:03:53.583  EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:03:53.583  EAL: Ask a virtual area of 0x61000 bytes
00:03:53.583  EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:03:53.583  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:53.583  EAL: Ask a virtual area of 0x400000000 bytes
00:03:53.583  EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:03:53.583  EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:03:53.583  EAL: Hugepages will be freed exactly as allocated.
00:03:53.583  EAL: No shared files mode enabled, IPC is disabled
00:03:53.583  EAL: No shared files mode enabled, IPC is disabled
00:03:53.583  EAL: TSC frequency is ~2700000 KHz
00:03:53.583  EAL: Main lcore 0 is ready (tid=7f1fac364a00;cpuset=[0])
00:03:53.583  EAL: Trying to obtain current memory policy.
00:03:53.583  EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:53.583  EAL: Restoring previous memory policy: 0
00:03:53.583  EAL: request: mp_malloc_sync
00:03:53.583  EAL: No shared files mode enabled, IPC is disabled
00:03:53.583  EAL: Heap on socket 0 was expanded by 2MB
00:03:53.583  EAL: No shared files mode enabled, IPC is disabled
00:03:53.583  EAL: No PCI address specified using 'addr=<id>' in: bus=pci
00:03:53.583  EAL: Mem event callback 'spdk:(nil)' registered
00:03:53.583  
00:03:53.583  
00:03:53.583       CUnit - A unit testing framework for C - Version 2.1-3
00:03:53.583       http://cunit.sourceforge.net/
00:03:53.583  
00:03:53.583  
00:03:53.583  Suite: components_suite
00:03:53.583    Test: vtophys_malloc_test ...passed
00:03:53.583    Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:03:53.583  EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:53.583  EAL: Restoring previous memory policy: 4
00:03:53.583  EAL: Calling mem event callback 'spdk:(nil)'
00:03:53.583  EAL: request: mp_malloc_sync
00:03:53.583  EAL: No shared files mode enabled, IPC is disabled
00:03:53.583  EAL: Heap on socket 0 was expanded by 4MB
00:03:53.583  EAL: Calling mem event callback 'spdk:(nil)'
00:03:53.583  EAL: request: mp_malloc_sync
00:03:53.583  EAL: No shared files mode enabled, IPC is disabled
00:03:53.583  EAL: Heap on socket 0 was shrunk by 4MB
00:03:53.583  EAL: Trying to obtain current memory policy.
00:03:53.583  EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:53.583  EAL: Restoring previous memory policy: 4
00:03:53.583  EAL: Calling mem event callback 'spdk:(nil)'
00:03:53.583  EAL: request: mp_malloc_sync
00:03:53.583  EAL: No shared files mode enabled, IPC is disabled
00:03:53.583  EAL: Heap on socket 0 was expanded by 6MB
00:03:53.583  EAL: Calling mem event callback 'spdk:(nil)'
00:03:53.583  EAL: request: mp_malloc_sync
00:03:53.583  EAL: No shared files mode enabled, IPC is disabled
00:03:53.583  EAL: Heap on socket 0 was shrunk by 6MB
00:03:53.583  EAL: Trying to obtain current memory policy.
00:03:53.583  EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:53.583  EAL: Restoring previous memory policy: 4
00:03:53.583  EAL: Calling mem event callback 'spdk:(nil)'
00:03:53.583  EAL: request: mp_malloc_sync
00:03:53.583  EAL: No shared files mode enabled, IPC is disabled
00:03:53.583  EAL: Heap on socket 0 was expanded by 10MB
00:03:53.583  EAL: Calling mem event callback 'spdk:(nil)'
00:03:53.583  EAL: request: mp_malloc_sync
00:03:53.583  EAL: No shared files mode enabled, IPC is disabled
00:03:53.583  EAL: Heap on socket 0 was shrunk by 10MB
00:03:53.583  EAL: Trying to obtain current memory policy.
00:03:53.583  EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:53.583  EAL: Restoring previous memory policy: 4
00:03:53.583  EAL: Calling mem event callback 'spdk:(nil)'
00:03:53.583  EAL: request: mp_malloc_sync
00:03:53.583  EAL: No shared files mode enabled, IPC is disabled
00:03:53.583  EAL: Heap on socket 0 was expanded by 18MB
00:03:53.583  EAL: Calling mem event callback 'spdk:(nil)'
00:03:53.583  EAL: request: mp_malloc_sync
00:03:53.583  EAL: No shared files mode enabled, IPC is disabled
00:03:53.583  EAL: Heap on socket 0 was shrunk by 18MB
00:03:53.583  EAL: Trying to obtain current memory policy.
00:03:53.583  EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:53.583  EAL: Restoring previous memory policy: 4
00:03:53.583  EAL: Calling mem event callback 'spdk:(nil)'
00:03:53.584  EAL: request: mp_malloc_sync
00:03:53.584  EAL: No shared files mode enabled, IPC is disabled
00:03:53.584  EAL: Heap on socket 0 was expanded by 34MB
00:03:53.584  EAL: Calling mem event callback 'spdk:(nil)'
00:03:53.584  EAL: request: mp_malloc_sync
00:03:53.584  EAL: No shared files mode enabled, IPC is disabled
00:03:53.584  EAL: Heap on socket 0 was shrunk by 34MB
00:03:53.584  EAL: Trying to obtain current memory policy.
00:03:53.584  EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:53.584  EAL: Restoring previous memory policy: 4
00:03:53.584  EAL: Calling mem event callback 'spdk:(nil)'
00:03:53.584  EAL: request: mp_malloc_sync
00:03:53.584  EAL: No shared files mode enabled, IPC is disabled
00:03:53.584  EAL: Heap on socket 0 was expanded by 66MB
00:03:53.584  EAL: Calling mem event callback 'spdk:(nil)'
00:03:53.584  EAL: request: mp_malloc_sync
00:03:53.584  EAL: No shared files mode enabled, IPC is disabled
00:03:53.584  EAL: Heap on socket 0 was shrunk by 66MB
00:03:53.584  EAL: Trying to obtain current memory policy.
00:03:53.584  EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:53.584  EAL: Restoring previous memory policy: 4
00:03:53.584  EAL: Calling mem event callback 'spdk:(nil)'
00:03:53.584  EAL: request: mp_malloc_sync
00:03:53.584  EAL: No shared files mode enabled, IPC is disabled
00:03:53.584  EAL: Heap on socket 0 was expanded by 130MB
00:03:53.584  EAL: Calling mem event callback 'spdk:(nil)'
00:03:53.842  EAL: request: mp_malloc_sync
00:03:53.842  EAL: No shared files mode enabled, IPC is disabled
00:03:53.842  EAL: Heap on socket 0 was shrunk by 130MB
00:03:53.842  EAL: Trying to obtain current memory policy.
00:03:53.842  EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:53.842  EAL: Restoring previous memory policy: 4
00:03:53.842  EAL: Calling mem event callback 'spdk:(nil)'
00:03:53.842  EAL: request: mp_malloc_sync
00:03:53.842  EAL: No shared files mode enabled, IPC is disabled
00:03:53.842  EAL: Heap on socket 0 was expanded by 258MB
00:03:53.842  EAL: Calling mem event callback 'spdk:(nil)'
00:03:53.842  EAL: request: mp_malloc_sync
00:03:53.842  EAL: No shared files mode enabled, IPC is disabled
00:03:53.842  EAL: Heap on socket 0 was shrunk by 258MB
00:03:53.842  EAL: Trying to obtain current memory policy.
00:03:53.842  EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:54.100  EAL: Restoring previous memory policy: 4
00:03:54.100  EAL: Calling mem event callback 'spdk:(nil)'
00:03:54.100  EAL: request: mp_malloc_sync
00:03:54.100  EAL: No shared files mode enabled, IPC is disabled
00:03:54.100  EAL: Heap on socket 0 was expanded by 514MB
00:03:54.100  EAL: Calling mem event callback 'spdk:(nil)'
00:03:54.359  EAL: request: mp_malloc_sync
00:03:54.359  EAL: No shared files mode enabled, IPC is disabled
00:03:54.359  EAL: Heap on socket 0 was shrunk by 514MB
00:03:54.359  EAL: Trying to obtain current memory policy.
00:03:54.359  EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:54.617  EAL: Restoring previous memory policy: 4
00:03:54.617  EAL: Calling mem event callback 'spdk:(nil)'
00:03:54.617  EAL: request: mp_malloc_sync
00:03:54.617  EAL: No shared files mode enabled, IPC is disabled
00:03:54.617  EAL: Heap on socket 0 was expanded by 1026MB
00:03:54.617  EAL: Calling mem event callback 'spdk:(nil)'
00:03:54.876  EAL: request: mp_malloc_sync
00:03:54.876  EAL: No shared files mode enabled, IPC is disabled
00:03:54.876  EAL: Heap on socket 0 was shrunk by 1026MB
00:03:54.876  passed
00:03:54.876  
00:03:54.876  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:54.876                suites      1      1    n/a      0        0
00:03:54.876                 tests      2      2      2      0        0
00:03:54.876               asserts    497    497    497      0      n/a
00:03:54.876  
00:03:54.876  Elapsed time =    1.339 seconds
00:03:54.876  EAL: Calling mem event callback 'spdk:(nil)'
00:03:54.876  EAL: request: mp_malloc_sync
00:03:54.876  EAL: No shared files mode enabled, IPC is disabled
00:03:54.876  EAL: Heap on socket 0 was shrunk by 2MB
00:03:54.876  EAL: No shared files mode enabled, IPC is disabled
00:03:54.876  EAL: No shared files mode enabled, IPC is disabled
00:03:54.876  EAL: No shared files mode enabled, IPC is disabled
00:03:54.876  
00:03:54.876  real	0m1.460s
00:03:54.876  user	0m0.866s
00:03:54.876  sys	0m0.557s
00:03:54.876   03:54:23 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:54.876   03:54:23 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:03:54.876  ************************************
00:03:54.876  END TEST env_vtophys
00:03:54.876  ************************************
00:03:54.876   03:54:23 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:03:54.876   03:54:23 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:54.876   03:54:23 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:54.876   03:54:23 env -- common/autotest_common.sh@10 -- # set +x
00:03:54.876  ************************************
00:03:54.876  START TEST env_pci
00:03:54.876  ************************************
00:03:54.876   03:54:23 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:03:55.136  
00:03:55.136  
00:03:55.136       CUnit - A unit testing framework for C - Version 2.1-3
00:03:55.136       http://cunit.sourceforge.net/
00:03:55.136  
00:03:55.136  
00:03:55.136  Suite: pci
00:03:55.136    Test: pci_hook ...[2024-12-09 03:54:23.458121] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 102183 has claimed it
00:03:55.136  EAL: Cannot find device (10000:00:01.0)
00:03:55.136  EAL: Failed to attach device on primary process
00:03:55.136  passed
00:03:55.136  
00:03:55.136  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:55.136                suites      1      1    n/a      0        0
00:03:55.136                 tests      1      1      1      0        0
00:03:55.136               asserts     25     25     25      0      n/a
00:03:55.136  
00:03:55.136  Elapsed time =    0.022 seconds
00:03:55.136  
00:03:55.136  real	0m0.035s
00:03:55.136  user	0m0.014s
00:03:55.136  sys	0m0.021s
00:03:55.136   03:54:23 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:55.136   03:54:23 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:03:55.136  ************************************
00:03:55.136  END TEST env_pci
00:03:55.136  ************************************
00:03:55.136   03:54:23 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:03:55.136    03:54:23 env -- env/env.sh@15 -- # uname
00:03:55.136   03:54:23 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:03:55.136   03:54:23 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:03:55.136   03:54:23 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:55.136   03:54:23 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:03:55.136   03:54:23 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:55.136   03:54:23 env -- common/autotest_common.sh@10 -- # set +x
00:03:55.136  ************************************
00:03:55.136  START TEST env_dpdk_post_init
00:03:55.136  ************************************
00:03:55.136   03:54:23 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:55.136  EAL: Detected CPU lcores: 48
00:03:55.136  EAL: Detected NUMA nodes: 2
00:03:55.136  EAL: Detected shared linkage of DPDK
00:03:55.136  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:03:55.136  EAL: Selected IOVA mode 'VA'
00:03:55.136  EAL: VFIO support initialized
00:03:55.136  TELEMETRY: No legacy callbacks, legacy socket not created
00:03:55.136  EAL: Using IOMMU type 1 (Type 1)
00:03:55.136  EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0)
00:03:55.136  EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0)
00:03:55.136  EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0)
00:03:55.136  EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0)
00:03:55.136  EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0)
00:03:55.397  EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0)
00:03:55.397  EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0)
00:03:55.397  EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0)
00:03:55.397  EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1)
00:03:55.397  EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1)
00:03:55.397  EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1)
00:03:55.397  EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1)
00:03:55.397  EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1)
00:03:55.397  EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1)
00:03:55.397  EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1)
00:03:55.397  EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1)
00:03:56.336  EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1)
00:03:59.613  EAL: Releasing PCI mapped resource for 0000:88:00.0
00:03:59.613  EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000
00:03:59.613  Starting DPDK initialization...
00:03:59.613  Starting SPDK post initialization...
00:03:59.613  SPDK NVMe probe
00:03:59.613  Attaching to 0000:88:00.0
00:03:59.613  Attached to 0000:88:00.0
00:03:59.613  Cleaning up...
00:03:59.613  
00:03:59.613  real	0m4.387s
00:03:59.613  user	0m2.990s
00:03:59.613  sys	0m0.456s
00:03:59.613   03:54:27 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:59.613   03:54:27 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:03:59.613  ************************************
00:03:59.613  END TEST env_dpdk_post_init
00:03:59.613  ************************************
00:03:59.613    03:54:27 env -- env/env.sh@26 -- # uname
00:03:59.613   03:54:27 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:03:59.613   03:54:27 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:03:59.613   03:54:27 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:59.613   03:54:27 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:59.613   03:54:27 env -- common/autotest_common.sh@10 -- # set +x
00:03:59.613  ************************************
00:03:59.613  START TEST env_mem_callbacks
00:03:59.613  ************************************
00:03:59.613   03:54:27 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:03:59.613  EAL: Detected CPU lcores: 48
00:03:59.613  EAL: Detected NUMA nodes: 2
00:03:59.613  EAL: Detected shared linkage of DPDK
00:03:59.613  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:03:59.613  EAL: Selected IOVA mode 'VA'
00:03:59.613  EAL: VFIO support initialized
00:03:59.613  TELEMETRY: No legacy callbacks, legacy socket not created
00:03:59.613  
00:03:59.613  
00:03:59.613       CUnit - A unit testing framework for C - Version 2.1-3
00:03:59.613       http://cunit.sourceforge.net/
00:03:59.613  
00:03:59.613  
00:03:59.613  Suite: memory
00:03:59.613    Test: test ...
00:03:59.613  register 0x200000200000 2097152
00:03:59.613  malloc 3145728
00:03:59.613  register 0x200000400000 4194304
00:03:59.613  buf 0x200000500000 len 3145728 PASSED
00:03:59.613  malloc 64
00:03:59.613  buf 0x2000004fff40 len 64 PASSED
00:03:59.613  malloc 4194304
00:03:59.613  register 0x200000800000 6291456
00:03:59.613  buf 0x200000a00000 len 4194304 PASSED
00:03:59.613  free 0x200000500000 3145728
00:03:59.613  free 0x2000004fff40 64
00:03:59.613  unregister 0x200000400000 4194304 PASSED
00:03:59.613  free 0x200000a00000 4194304
00:03:59.613  unregister 0x200000800000 6291456 PASSED
00:03:59.613  malloc 8388608
00:03:59.613  register 0x200000400000 10485760
00:03:59.613  buf 0x200000600000 len 8388608 PASSED
00:03:59.613  free 0x200000600000 8388608
00:03:59.613  unregister 0x200000400000 10485760 PASSED
00:03:59.613  passed
00:03:59.613  
00:03:59.613  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:59.613                suites      1      1    n/a      0        0
00:03:59.613                 tests      1      1      1      0        0
00:03:59.613               asserts     15     15     15      0      n/a
00:03:59.613  
00:03:59.613  Elapsed time =    0.005 seconds
00:03:59.613  
00:03:59.613  real	0m0.050s
00:03:59.613  user	0m0.012s
00:03:59.613  sys	0m0.037s
00:03:59.613   03:54:28 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:59.613   03:54:28 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:03:59.613  ************************************
00:03:59.613  END TEST env_mem_callbacks
00:03:59.613  ************************************
00:03:59.613  
00:03:59.613  real	0m6.479s
00:03:59.613  user	0m4.230s
00:03:59.613  sys	0m1.292s
00:03:59.613   03:54:28 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:59.613   03:54:28 env -- common/autotest_common.sh@10 -- # set +x
00:03:59.613  ************************************
00:03:59.613  END TEST env
00:03:59.613  ************************************
00:03:59.613   03:54:28  -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:03:59.613   03:54:28  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:59.613   03:54:28  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:59.613   03:54:28  -- common/autotest_common.sh@10 -- # set +x
00:03:59.613  ************************************
00:03:59.613  START TEST rpc
00:03:59.613  ************************************
00:03:59.613   03:54:28 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:03:59.613  * Looking for test storage...
00:03:59.613  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:03:59.613    03:54:28 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:03:59.613     03:54:28 rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:03:59.613     03:54:28 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:03:59.872    03:54:28 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:03:59.872    03:54:28 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:59.872    03:54:28 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:59.872    03:54:28 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:59.872    03:54:28 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:03:59.872    03:54:28 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:03:59.872    03:54:28 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:03:59.872    03:54:28 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:03:59.872    03:54:28 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:03:59.872    03:54:28 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:03:59.872    03:54:28 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:03:59.872    03:54:28 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:59.872    03:54:28 rpc -- scripts/common.sh@344 -- # case "$op" in
00:03:59.872    03:54:28 rpc -- scripts/common.sh@345 -- # : 1
00:03:59.872    03:54:28 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:59.872    03:54:28 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:59.872     03:54:28 rpc -- scripts/common.sh@365 -- # decimal 1
00:03:59.872     03:54:28 rpc -- scripts/common.sh@353 -- # local d=1
00:03:59.872     03:54:28 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:59.872     03:54:28 rpc -- scripts/common.sh@355 -- # echo 1
00:03:59.872    03:54:28 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:03:59.872     03:54:28 rpc -- scripts/common.sh@366 -- # decimal 2
00:03:59.872     03:54:28 rpc -- scripts/common.sh@353 -- # local d=2
00:03:59.872     03:54:28 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:59.872     03:54:28 rpc -- scripts/common.sh@355 -- # echo 2
00:03:59.872    03:54:28 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:03:59.872    03:54:28 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:59.872    03:54:28 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:59.872    03:54:28 rpc -- scripts/common.sh@368 -- # return 0
00:03:59.872    03:54:28 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:59.872    03:54:28 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:03:59.872  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:59.872  		--rc genhtml_branch_coverage=1
00:03:59.872  		--rc genhtml_function_coverage=1
00:03:59.872  		--rc genhtml_legend=1
00:03:59.872  		--rc geninfo_all_blocks=1
00:03:59.872  		--rc geninfo_unexecuted_blocks=1
00:03:59.872  		
00:03:59.872  		'
00:03:59.872    03:54:28 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:03:59.872  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:59.872  		--rc genhtml_branch_coverage=1
00:03:59.872  		--rc genhtml_function_coverage=1
00:03:59.872  		--rc genhtml_legend=1
00:03:59.872  		--rc geninfo_all_blocks=1
00:03:59.872  		--rc geninfo_unexecuted_blocks=1
00:03:59.872  		
00:03:59.872  		'
00:03:59.872    03:54:28 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:03:59.872  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:59.872  		--rc genhtml_branch_coverage=1
00:03:59.872  		--rc genhtml_function_coverage=1
00:03:59.872  		--rc genhtml_legend=1
00:03:59.872  		--rc geninfo_all_blocks=1
00:03:59.872  		--rc geninfo_unexecuted_blocks=1
00:03:59.872  		
00:03:59.872  		'
00:03:59.872    03:54:28 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:03:59.872  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:59.872  		--rc genhtml_branch_coverage=1
00:03:59.872  		--rc genhtml_function_coverage=1
00:03:59.872  		--rc genhtml_legend=1
00:03:59.872  		--rc geninfo_all_blocks=1
00:03:59.872  		--rc geninfo_unexecuted_blocks=1
00:03:59.872  		
00:03:59.872  		'
00:03:59.872   03:54:28 rpc -- rpc/rpc.sh@65 -- # spdk_pid=102971
00:03:59.872   03:54:28 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:03:59.872   03:54:28 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:03:59.872   03:54:28 rpc -- rpc/rpc.sh@67 -- # waitforlisten 102971
00:03:59.872   03:54:28 rpc -- common/autotest_common.sh@835 -- # '[' -z 102971 ']'
00:03:59.872   03:54:28 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:03:59.872   03:54:28 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:03:59.872   03:54:28 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:03:59.872  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:03:59.872   03:54:28 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:03:59.872   03:54:28 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:59.872  [2024-12-09 03:54:28.288865] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:03:59.872  [2024-12-09 03:54:28.288936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102971 ]
00:03:59.872  [2024-12-09 03:54:28.358079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:59.872  [2024-12-09 03:54:28.417881] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:03:59.872  [2024-12-09 03:54:28.417952] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 102971' to capture a snapshot of events at runtime.
00:03:59.872  [2024-12-09 03:54:28.417980] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:03:59.872  [2024-12-09 03:54:28.417991] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:03:59.873  [2024-12-09 03:54:28.418001] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid102971 for offline analysis/debug.
00:03:59.873  [2024-12-09 03:54:28.418637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:00.131   03:54:28 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:00.131   03:54:28 rpc -- common/autotest_common.sh@868 -- # return 0
00:04:00.131   03:54:28 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:00.131   03:54:28 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:00.131   03:54:28 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:04:00.131   03:54:28 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:04:00.131   03:54:28 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:00.131   03:54:28 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:00.131   03:54:28 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:00.389  ************************************
00:04:00.389  START TEST rpc_integrity
00:04:00.389  ************************************
00:04:00.389   03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:04:00.389    03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:00.389    03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:00.389    03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:00.389    03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:00.389   03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:00.389    03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:04:00.389   03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:00.389    03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:00.389    03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:00.389    03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:00.389    03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:00.389   03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:04:00.389    03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:00.389    03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:00.389    03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:00.389    03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:00.389   03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:04:00.389  {
00:04:00.389  "name": "Malloc0",
00:04:00.389  "aliases": [
00:04:00.389  "5df7be83-e014-4f84-988d-ec3b20c432a4"
00:04:00.389  ],
00:04:00.389  "product_name": "Malloc disk",
00:04:00.389  "block_size": 512,
00:04:00.389  "num_blocks": 16384,
00:04:00.389  "uuid": "5df7be83-e014-4f84-988d-ec3b20c432a4",
00:04:00.390  "assigned_rate_limits": {
00:04:00.390  "rw_ios_per_sec": 0,
00:04:00.390  "rw_mbytes_per_sec": 0,
00:04:00.390  "r_mbytes_per_sec": 0,
00:04:00.390  "w_mbytes_per_sec": 0
00:04:00.390  },
00:04:00.390  "claimed": false,
00:04:00.390  "zoned": false,
00:04:00.390  "supported_io_types": {
00:04:00.390  "read": true,
00:04:00.390  "write": true,
00:04:00.390  "unmap": true,
00:04:00.390  "flush": true,
00:04:00.390  "reset": true,
00:04:00.390  "nvme_admin": false,
00:04:00.390  "nvme_io": false,
00:04:00.390  "nvme_io_md": false,
00:04:00.390  "write_zeroes": true,
00:04:00.390  "zcopy": true,
00:04:00.390  "get_zone_info": false,
00:04:00.390  "zone_management": false,
00:04:00.390  "zone_append": false,
00:04:00.390  "compare": false,
00:04:00.390  "compare_and_write": false,
00:04:00.390  "abort": true,
00:04:00.390  "seek_hole": false,
00:04:00.390  "seek_data": false,
00:04:00.390  "copy": true,
00:04:00.390  "nvme_iov_md": false
00:04:00.390  },
00:04:00.390  "memory_domains": [
00:04:00.390  {
00:04:00.390  "dma_device_id": "system",
00:04:00.390  "dma_device_type": 1
00:04:00.390  },
00:04:00.390  {
00:04:00.390  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:00.390  "dma_device_type": 2
00:04:00.390  }
00:04:00.390  ],
00:04:00.390  "driver_specific": {}
00:04:00.390  }
00:04:00.390  ]'
00:04:00.390    03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:04:00.390   03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:00.390   03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:04:00.390   03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:00.390   03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:00.390  [2024-12-09 03:54:28.818704] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:04:00.390  [2024-12-09 03:54:28.818760] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:00.390  [2024-12-09 03:54:28.818784] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc46020
00:04:00.390  [2024-12-09 03:54:28.818796] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:00.390  [2024-12-09 03:54:28.820177] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:00.390  [2024-12-09 03:54:28.820199] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:00.390  Passthru0
00:04:00.390   03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:00.390    03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:00.390    03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:00.390    03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:00.390    03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:00.390   03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:04:00.390  {
00:04:00.390  "name": "Malloc0",
00:04:00.390  "aliases": [
00:04:00.390  "5df7be83-e014-4f84-988d-ec3b20c432a4"
00:04:00.390  ],
00:04:00.390  "product_name": "Malloc disk",
00:04:00.390  "block_size": 512,
00:04:00.390  "num_blocks": 16384,
00:04:00.390  "uuid": "5df7be83-e014-4f84-988d-ec3b20c432a4",
00:04:00.390  "assigned_rate_limits": {
00:04:00.390  "rw_ios_per_sec": 0,
00:04:00.390  "rw_mbytes_per_sec": 0,
00:04:00.390  "r_mbytes_per_sec": 0,
00:04:00.390  "w_mbytes_per_sec": 0
00:04:00.390  },
00:04:00.390  "claimed": true,
00:04:00.390  "claim_type": "exclusive_write",
00:04:00.390  "zoned": false,
00:04:00.390  "supported_io_types": {
00:04:00.390  "read": true,
00:04:00.390  "write": true,
00:04:00.390  "unmap": true,
00:04:00.390  "flush": true,
00:04:00.390  "reset": true,
00:04:00.390  "nvme_admin": false,
00:04:00.390  "nvme_io": false,
00:04:00.390  "nvme_io_md": false,
00:04:00.390  "write_zeroes": true,
00:04:00.390  "zcopy": true,
00:04:00.390  "get_zone_info": false,
00:04:00.390  "zone_management": false,
00:04:00.390  "zone_append": false,
00:04:00.390  "compare": false,
00:04:00.390  "compare_and_write": false,
00:04:00.390  "abort": true,
00:04:00.390  "seek_hole": false,
00:04:00.390  "seek_data": false,
00:04:00.390  "copy": true,
00:04:00.390  "nvme_iov_md": false
00:04:00.390  },
00:04:00.390  "memory_domains": [
00:04:00.390  {
00:04:00.390  "dma_device_id": "system",
00:04:00.390  "dma_device_type": 1
00:04:00.390  },
00:04:00.390  {
00:04:00.390  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:00.390  "dma_device_type": 2
00:04:00.390  }
00:04:00.390  ],
00:04:00.390  "driver_specific": {}
00:04:00.390  },
00:04:00.390  {
00:04:00.390  "name": "Passthru0",
00:04:00.390  "aliases": [
00:04:00.390  "0c0a20eb-0403-5ba1-a68a-c1983a390590"
00:04:00.390  ],
00:04:00.390  "product_name": "passthru",
00:04:00.390  "block_size": 512,
00:04:00.390  "num_blocks": 16384,
00:04:00.390  "uuid": "0c0a20eb-0403-5ba1-a68a-c1983a390590",
00:04:00.390  "assigned_rate_limits": {
00:04:00.390  "rw_ios_per_sec": 0,
00:04:00.390  "rw_mbytes_per_sec": 0,
00:04:00.390  "r_mbytes_per_sec": 0,
00:04:00.390  "w_mbytes_per_sec": 0
00:04:00.390  },
00:04:00.390  "claimed": false,
00:04:00.390  "zoned": false,
00:04:00.390  "supported_io_types": {
00:04:00.390  "read": true,
00:04:00.390  "write": true,
00:04:00.390  "unmap": true,
00:04:00.390  "flush": true,
00:04:00.390  "reset": true,
00:04:00.390  "nvme_admin": false,
00:04:00.390  "nvme_io": false,
00:04:00.390  "nvme_io_md": false,
00:04:00.390  "write_zeroes": true,
00:04:00.390  "zcopy": true,
00:04:00.390  "get_zone_info": false,
00:04:00.390  "zone_management": false,
00:04:00.390  "zone_append": false,
00:04:00.390  "compare": false,
00:04:00.390  "compare_and_write": false,
00:04:00.390  "abort": true,
00:04:00.390  "seek_hole": false,
00:04:00.390  "seek_data": false,
00:04:00.390  "copy": true,
00:04:00.390  "nvme_iov_md": false
00:04:00.390  },
00:04:00.390  "memory_domains": [
00:04:00.390  {
00:04:00.390  "dma_device_id": "system",
00:04:00.390  "dma_device_type": 1
00:04:00.390  },
00:04:00.390  {
00:04:00.390  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:00.390  "dma_device_type": 2
00:04:00.390  }
00:04:00.390  ],
00:04:00.390  "driver_specific": {
00:04:00.390  "passthru": {
00:04:00.390  "name": "Passthru0",
00:04:00.390  "base_bdev_name": "Malloc0"
00:04:00.390  }
00:04:00.390  }
00:04:00.390  }
00:04:00.390  ]'
00:04:00.390    03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:04:00.390   03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:04:00.390   03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:04:00.390   03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:00.390   03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:00.390   03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:00.390   03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:04:00.390   03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:00.390   03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:00.390   03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:00.390    03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:04:00.390    03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:00.390    03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:00.390    03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:00.390   03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:04:00.390    03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:04:00.390   03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:04:00.390  
00:04:00.390  real	0m0.222s
00:04:00.390  user	0m0.141s
00:04:00.390  sys	0m0.023s
00:04:00.390   03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:00.390   03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:00.390  ************************************
00:04:00.390  END TEST rpc_integrity
00:04:00.390  ************************************
00:04:00.390   03:54:28 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:04:00.390   03:54:28 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:00.390   03:54:28 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:00.390   03:54:28 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:00.648  ************************************
00:04:00.648  START TEST rpc_plugins
00:04:00.648  ************************************
00:04:00.648   03:54:28 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins
00:04:00.648    03:54:28 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:04:00.648    03:54:28 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:00.648    03:54:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:00.648    03:54:28 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:00.648   03:54:28 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:04:00.648    03:54:28 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:04:00.648    03:54:28 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:00.648    03:54:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:00.648    03:54:29 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:00.648   03:54:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:04:00.648  {
00:04:00.648  "name": "Malloc1",
00:04:00.648  "aliases": [
00:04:00.648  "9f7c1785-8407-4a23-a1e5-7eef1b178446"
00:04:00.648  ],
00:04:00.648  "product_name": "Malloc disk",
00:04:00.648  "block_size": 4096,
00:04:00.648  "num_blocks": 256,
00:04:00.648  "uuid": "9f7c1785-8407-4a23-a1e5-7eef1b178446",
00:04:00.648  "assigned_rate_limits": {
00:04:00.648  "rw_ios_per_sec": 0,
00:04:00.648  "rw_mbytes_per_sec": 0,
00:04:00.648  "r_mbytes_per_sec": 0,
00:04:00.648  "w_mbytes_per_sec": 0
00:04:00.648  },
00:04:00.648  "claimed": false,
00:04:00.648  "zoned": false,
00:04:00.648  "supported_io_types": {
00:04:00.648  "read": true,
00:04:00.648  "write": true,
00:04:00.648  "unmap": true,
00:04:00.648  "flush": true,
00:04:00.648  "reset": true,
00:04:00.648  "nvme_admin": false,
00:04:00.648  "nvme_io": false,
00:04:00.648  "nvme_io_md": false,
00:04:00.648  "write_zeroes": true,
00:04:00.648  "zcopy": true,
00:04:00.648  "get_zone_info": false,
00:04:00.648  "zone_management": false,
00:04:00.648  "zone_append": false,
00:04:00.648  "compare": false,
00:04:00.648  "compare_and_write": false,
00:04:00.648  "abort": true,
00:04:00.648  "seek_hole": false,
00:04:00.648  "seek_data": false,
00:04:00.648  "copy": true,
00:04:00.648  "nvme_iov_md": false
00:04:00.648  },
00:04:00.648  "memory_domains": [
00:04:00.648  {
00:04:00.648  "dma_device_id": "system",
00:04:00.648  "dma_device_type": 1
00:04:00.648  },
00:04:00.648  {
00:04:00.648  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:00.648  "dma_device_type": 2
00:04:00.648  }
00:04:00.648  ],
00:04:00.648  "driver_specific": {}
00:04:00.648  }
00:04:00.648  ]'
00:04:00.648    03:54:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:04:00.648   03:54:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:04:00.648   03:54:29 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:04:00.648   03:54:29 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:00.648   03:54:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:00.648   03:54:29 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:00.648    03:54:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:04:00.648    03:54:29 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:00.648    03:54:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:00.648    03:54:29 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:00.648   03:54:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:04:00.648    03:54:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:04:00.648   03:54:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:04:00.648  
00:04:00.648  real	0m0.104s
00:04:00.648  user	0m0.068s
00:04:00.648  sys	0m0.009s
00:04:00.648   03:54:29 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:00.648   03:54:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:00.648  ************************************
00:04:00.648  END TEST rpc_plugins
00:04:00.648  ************************************
00:04:00.648   03:54:29 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:04:00.648   03:54:29 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:00.648   03:54:29 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:00.648   03:54:29 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:00.648  ************************************
00:04:00.648  START TEST rpc_trace_cmd_test
00:04:00.648  ************************************
00:04:00.648   03:54:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test
00:04:00.648   03:54:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:04:00.648    03:54:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:04:00.648    03:54:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:00.648    03:54:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:04:00.648    03:54:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:00.648   03:54:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:04:00.648  "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid102971",
00:04:00.648  "tpoint_group_mask": "0x8",
00:04:00.648  "iscsi_conn": {
00:04:00.649  "mask": "0x2",
00:04:00.649  "tpoint_mask": "0x0"
00:04:00.649  },
00:04:00.649  "scsi": {
00:04:00.649  "mask": "0x4",
00:04:00.649  "tpoint_mask": "0x0"
00:04:00.649  },
00:04:00.649  "bdev": {
00:04:00.649  "mask": "0x8",
00:04:00.649  "tpoint_mask": "0xffffffffffffffff"
00:04:00.649  },
00:04:00.649  "nvmf_rdma": {
00:04:00.649  "mask": "0x10",
00:04:00.649  "tpoint_mask": "0x0"
00:04:00.649  },
00:04:00.649  "nvmf_tcp": {
00:04:00.649  "mask": "0x20",
00:04:00.649  "tpoint_mask": "0x0"
00:04:00.649  },
00:04:00.649  "ftl": {
00:04:00.649  "mask": "0x40",
00:04:00.649  "tpoint_mask": "0x0"
00:04:00.649  },
00:04:00.649  "blobfs": {
00:04:00.649  "mask": "0x80",
00:04:00.649  "tpoint_mask": "0x0"
00:04:00.649  },
00:04:00.649  "dsa": {
00:04:00.649  "mask": "0x200",
00:04:00.649  "tpoint_mask": "0x0"
00:04:00.649  },
00:04:00.649  "thread": {
00:04:00.649  "mask": "0x400",
00:04:00.649  "tpoint_mask": "0x0"
00:04:00.649  },
00:04:00.649  "nvme_pcie": {
00:04:00.649  "mask": "0x800",
00:04:00.649  "tpoint_mask": "0x0"
00:04:00.649  },
00:04:00.649  "iaa": {
00:04:00.649  "mask": "0x1000",
00:04:00.649  "tpoint_mask": "0x0"
00:04:00.649  },
00:04:00.649  "nvme_tcp": {
00:04:00.649  "mask": "0x2000",
00:04:00.649  "tpoint_mask": "0x0"
00:04:00.649  },
00:04:00.649  "bdev_nvme": {
00:04:00.649  "mask": "0x4000",
00:04:00.649  "tpoint_mask": "0x0"
00:04:00.649  },
00:04:00.649  "sock": {
00:04:00.649  "mask": "0x8000",
00:04:00.649  "tpoint_mask": "0x0"
00:04:00.649  },
00:04:00.649  "blob": {
00:04:00.649  "mask": "0x10000",
00:04:00.649  "tpoint_mask": "0x0"
00:04:00.649  },
00:04:00.649  "bdev_raid": {
00:04:00.649  "mask": "0x20000",
00:04:00.649  "tpoint_mask": "0x0"
00:04:00.649  },
00:04:00.649  "scheduler": {
00:04:00.649  "mask": "0x40000",
00:04:00.649  "tpoint_mask": "0x0"
00:04:00.649  }
00:04:00.649  }'
00:04:00.649    03:54:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:04:00.649   03:54:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']'
00:04:00.649    03:54:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:04:00.649   03:54:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:04:00.649    03:54:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:04:00.907   03:54:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:04:00.907    03:54:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:04:00.907   03:54:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:04:00.907    03:54:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:04:00.907   03:54:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:04:00.907  
00:04:00.907  real	0m0.186s
00:04:00.907  user	0m0.162s
00:04:00.907  sys	0m0.015s
00:04:00.907   03:54:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:00.907   03:54:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:04:00.907  ************************************
00:04:00.907  END TEST rpc_trace_cmd_test
00:04:00.907  ************************************
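The rpc_trace_cmd_test run above asserts three things about the `trace_get_info` reply: it has more than two keys, it exposes `tpoint_group_mask` and `tpoint_shm_path`, and the `bdev` group (enabled via group mask `0x8`) reports a non-zero `tpoint_mask`. A hedged Python rendering of those checks against a trimmed-down copy of the reply captured in the log:

```python
import json

# Trimmed stand-in for the `trace_get_info` reply above; only the
# fields the test actually inspects are kept.
info = json.loads('''{
  "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid102971",
  "tpoint_group_mask": "0x8",
  "bdev": {"mask": "0x8", "tpoint_mask": "0xffffffffffffffff"}
}''')

# rpc.sh@43: reply has more than two top-level keys
assert len(info) > 2
# rpc.sh@44 / rpc.sh@45: group mask and shm path are present
assert "tpoint_group_mask" in info and "tpoint_shm_path" in info
# rpc.sh@47: the bdev group's tpoint mask is non-zero (all tpoints on)
assert int(info["bdev"]["tpoint_mask"], 16) != 0
```

In the trace, the same comparisons appear as `'[' 19 -gt 2 ']'`, the two `'[' true = true ']'` checks, and `'[' 0xffffffffffffffff '!=' 0x0 ']'`.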
00:04:00.907   03:54:29 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:04:00.907   03:54:29 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:04:00.907   03:54:29 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:04:00.907   03:54:29 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:00.907   03:54:29 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:00.907   03:54:29 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:00.907  ************************************
00:04:00.907  START TEST rpc_daemon_integrity
00:04:00.907  ************************************
00:04:00.907   03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:04:00.907    03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:00.907    03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:00.907    03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:00.907    03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:00.907   03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:00.907    03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:04:00.907   03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:00.907    03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:00.907    03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:00.907    03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:00.907    03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:00.907   03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:04:00.907    03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:00.907    03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:00.907    03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:00.907    03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:00.907   03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:04:00.907  {
00:04:00.907  "name": "Malloc2",
00:04:00.907  "aliases": [
00:04:00.907  "8fe2c65e-1e4c-4110-96ef-89d9a1523ed4"
00:04:00.907  ],
00:04:00.907  "product_name": "Malloc disk",
00:04:00.907  "block_size": 512,
00:04:00.907  "num_blocks": 16384,
00:04:00.907  "uuid": "8fe2c65e-1e4c-4110-96ef-89d9a1523ed4",
00:04:00.907  "assigned_rate_limits": {
00:04:00.907  "rw_ios_per_sec": 0,
00:04:00.907  "rw_mbytes_per_sec": 0,
00:04:00.907  "r_mbytes_per_sec": 0,
00:04:00.907  "w_mbytes_per_sec": 0
00:04:00.907  },
00:04:00.907  "claimed": false,
00:04:00.907  "zoned": false,
00:04:00.907  "supported_io_types": {
00:04:00.907  "read": true,
00:04:00.907  "write": true,
00:04:00.907  "unmap": true,
00:04:00.907  "flush": true,
00:04:00.907  "reset": true,
00:04:00.907  "nvme_admin": false,
00:04:00.907  "nvme_io": false,
00:04:00.907  "nvme_io_md": false,
00:04:00.907  "write_zeroes": true,
00:04:00.907  "zcopy": true,
00:04:00.907  "get_zone_info": false,
00:04:00.907  "zone_management": false,
00:04:00.907  "zone_append": false,
00:04:00.907  "compare": false,
00:04:00.907  "compare_and_write": false,
00:04:00.907  "abort": true,
00:04:00.907  "seek_hole": false,
00:04:00.907  "seek_data": false,
00:04:00.907  "copy": true,
00:04:00.907  "nvme_iov_md": false
00:04:00.907  },
00:04:00.907  "memory_domains": [
00:04:00.907  {
00:04:00.907  "dma_device_id": "system",
00:04:00.907  "dma_device_type": 1
00:04:00.907  },
00:04:00.907  {
00:04:00.907  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:00.907  "dma_device_type": 2
00:04:00.907  }
00:04:00.907  ],
00:04:00.907  "driver_specific": {}
00:04:00.907  }
00:04:00.907  ]'
00:04:00.907    03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length
00:04:00.907   03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:00.907   03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:04:00.907   03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:00.907   03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:00.907  [2024-12-09 03:54:29.473015] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:04:00.907  [2024-12-09 03:54:29.473071] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:00.907  [2024-12-09 03:54:29.473095] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb95320
00:04:00.907  [2024-12-09 03:54:29.473114] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:00.907  [2024-12-09 03:54:29.474364] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:00.907  [2024-12-09 03:54:29.474390] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:00.907  Passthru0
00:04:00.907   03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:00.907    03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:00.907    03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:00.907    03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:01.165    03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:01.165   03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:04:01.165  {
00:04:01.165  "name": "Malloc2",
00:04:01.165  "aliases": [
00:04:01.165  "8fe2c65e-1e4c-4110-96ef-89d9a1523ed4"
00:04:01.165  ],
00:04:01.165  "product_name": "Malloc disk",
00:04:01.165  "block_size": 512,
00:04:01.165  "num_blocks": 16384,
00:04:01.165  "uuid": "8fe2c65e-1e4c-4110-96ef-89d9a1523ed4",
00:04:01.165  "assigned_rate_limits": {
00:04:01.165  "rw_ios_per_sec": 0,
00:04:01.165  "rw_mbytes_per_sec": 0,
00:04:01.165  "r_mbytes_per_sec": 0,
00:04:01.165  "w_mbytes_per_sec": 0
00:04:01.165  },
00:04:01.165  "claimed": true,
00:04:01.165  "claim_type": "exclusive_write",
00:04:01.165  "zoned": false,
00:04:01.165  "supported_io_types": {
00:04:01.165  "read": true,
00:04:01.165  "write": true,
00:04:01.165  "unmap": true,
00:04:01.165  "flush": true,
00:04:01.165  "reset": true,
00:04:01.165  "nvme_admin": false,
00:04:01.165  "nvme_io": false,
00:04:01.165  "nvme_io_md": false,
00:04:01.165  "write_zeroes": true,
00:04:01.165  "zcopy": true,
00:04:01.165  "get_zone_info": false,
00:04:01.165  "zone_management": false,
00:04:01.165  "zone_append": false,
00:04:01.165  "compare": false,
00:04:01.165  "compare_and_write": false,
00:04:01.165  "abort": true,
00:04:01.165  "seek_hole": false,
00:04:01.165  "seek_data": false,
00:04:01.165  "copy": true,
00:04:01.165  "nvme_iov_md": false
00:04:01.165  },
00:04:01.165  "memory_domains": [
00:04:01.165  {
00:04:01.165  "dma_device_id": "system",
00:04:01.165  "dma_device_type": 1
00:04:01.165  },
00:04:01.165  {
00:04:01.165  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:01.165  "dma_device_type": 2
00:04:01.165  }
00:04:01.165  ],
00:04:01.165  "driver_specific": {}
00:04:01.165  },
00:04:01.165  {
00:04:01.165  "name": "Passthru0",
00:04:01.165  "aliases": [
00:04:01.165  "07bb2221-df62-5861-aa18-eb173f687cd2"
00:04:01.165  ],
00:04:01.165  "product_name": "passthru",
00:04:01.165  "block_size": 512,
00:04:01.165  "num_blocks": 16384,
00:04:01.165  "uuid": "07bb2221-df62-5861-aa18-eb173f687cd2",
00:04:01.165  "assigned_rate_limits": {
00:04:01.165  "rw_ios_per_sec": 0,
00:04:01.165  "rw_mbytes_per_sec": 0,
00:04:01.165  "r_mbytes_per_sec": 0,
00:04:01.165  "w_mbytes_per_sec": 0
00:04:01.165  },
00:04:01.165  "claimed": false,
00:04:01.165  "zoned": false,
00:04:01.165  "supported_io_types": {
00:04:01.165  "read": true,
00:04:01.165  "write": true,
00:04:01.165  "unmap": true,
00:04:01.165  "flush": true,
00:04:01.165  "reset": true,
00:04:01.165  "nvme_admin": false,
00:04:01.165  "nvme_io": false,
00:04:01.165  "nvme_io_md": false,
00:04:01.165  "write_zeroes": true,
00:04:01.165  "zcopy": true,
00:04:01.165  "get_zone_info": false,
00:04:01.165  "zone_management": false,
00:04:01.165  "zone_append": false,
00:04:01.165  "compare": false,
00:04:01.165  "compare_and_write": false,
00:04:01.165  "abort": true,
00:04:01.165  "seek_hole": false,
00:04:01.165  "seek_data": false,
00:04:01.165  "copy": true,
00:04:01.165  "nvme_iov_md": false
00:04:01.165  },
00:04:01.165  "memory_domains": [
00:04:01.165  {
00:04:01.165  "dma_device_id": "system",
00:04:01.165  "dma_device_type": 1
00:04:01.165  },
00:04:01.165  {
00:04:01.165  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:01.165  "dma_device_type": 2
00:04:01.165  }
00:04:01.165  ],
00:04:01.165  "driver_specific": {
00:04:01.165  "passthru": {
00:04:01.165  "name": "Passthru0",
00:04:01.165  "base_bdev_name": "Malloc2"
00:04:01.165  }
00:04:01.165  }
00:04:01.165  }
00:04:01.165  ]'
00:04:01.165    03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length
00:04:01.165   03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:04:01.165   03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:04:01.165   03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:01.165   03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:01.165   03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:01.165   03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2
00:04:01.165   03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:01.165   03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:01.165   03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:01.165    03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:04:01.165    03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:01.165    03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:01.165    03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:01.165   03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:04:01.165    03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length
00:04:01.165   03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:04:01.165  
00:04:01.165  real	0m0.213s
00:04:01.165  user	0m0.143s
00:04:01.165  sys	0m0.015s
00:04:01.165   03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:01.165   03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:01.165  ************************************
00:04:01.165  END TEST rpc_daemon_integrity
00:04:01.165  ************************************
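The rpc_daemon_integrity test above confirms that `bdev_passthru_create -b Malloc2 -p Passthru0` leaves two bdevs visible, marks the base bdev as claimed with `exclusive_write`, and links the passthru back to its base via `driver_specific`. A sketch of those assertions against a condensed copy of the two-entry `bdev_get_bdevs` reply from the log:

```python
import json

# Condensed stand-in for the two-entry reply captured above.
bdevs = json.loads('''[
  {"name": "Malloc2", "claimed": true, "claim_type": "exclusive_write",
   "driver_specific": {}},
  {"name": "Passthru0", "claimed": false,
   "driver_specific": {"passthru": {"name": "Passthru0",
                                    "base_bdev_name": "Malloc2"}}}
]''')

# rpc.sh@21: two bdevs after bdev_passthru_create ('[' 2 == 2 ']')
assert len(bdevs) == 2
base, pt = bdevs
# The base bdev is claimed exclusively by the passthru vbdev.
assert base["claimed"] and base["claim_type"] == "exclusive_write"
# driver_specific ties Passthru0 back to Malloc2.
assert pt["driver_specific"]["passthru"]["base_bdev_name"] == "Malloc2"
```

After `bdev_passthru_delete` and `bdev_malloc_delete`, the test repeats the count check expecting an empty list, as the closing `'[' 0 == 0 ']'` in the trace shows.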
00:04:01.165   03:54:29 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:04:01.165   03:54:29 rpc -- rpc/rpc.sh@84 -- # killprocess 102971
00:04:01.165   03:54:29 rpc -- common/autotest_common.sh@954 -- # '[' -z 102971 ']'
00:04:01.165   03:54:29 rpc -- common/autotest_common.sh@958 -- # kill -0 102971
00:04:01.165    03:54:29 rpc -- common/autotest_common.sh@959 -- # uname
00:04:01.165   03:54:29 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:01.165    03:54:29 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102971
00:04:01.165   03:54:29 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:01.165   03:54:29 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:01.165   03:54:29 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102971'
00:04:01.165  killing process with pid 102971
00:04:01.165   03:54:29 rpc -- common/autotest_common.sh@973 -- # kill 102971
00:04:01.165   03:54:29 rpc -- common/autotest_common.sh@978 -- # wait 102971
00:04:01.731  
00:04:01.731  real	0m1.981s
00:04:01.731  user	0m2.467s
00:04:01.731  sys	0m0.587s
00:04:01.731   03:54:30 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:01.731   03:54:30 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:01.731  ************************************
00:04:01.731  END TEST rpc
00:04:01.731  ************************************
00:04:01.731   03:54:30  -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:04:01.731   03:54:30  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:01.732   03:54:30  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:01.732   03:54:30  -- common/autotest_common.sh@10 -- # set +x
00:04:01.732  ************************************
00:04:01.732  START TEST skip_rpc
00:04:01.732  ************************************
00:04:01.732   03:54:30 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:04:01.732  * Looking for test storage...
00:04:01.732  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:01.732    03:54:30 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:01.732     03:54:30 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:04:01.732     03:54:30 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:01.732    03:54:30 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:01.732    03:54:30 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:01.732    03:54:30 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:01.732    03:54:30 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:01.732    03:54:30 skip_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:01.732    03:54:30 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:01.732    03:54:30 skip_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:01.732    03:54:30 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:01.732    03:54:30 skip_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:01.732    03:54:30 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:01.732    03:54:30 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:01.732    03:54:30 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:01.732    03:54:30 skip_rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:01.732    03:54:30 skip_rpc -- scripts/common.sh@345 -- # : 1
00:04:01.732    03:54:30 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:01.732    03:54:30 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:01.732     03:54:30 skip_rpc -- scripts/common.sh@365 -- # decimal 1
00:04:01.732     03:54:30 skip_rpc -- scripts/common.sh@353 -- # local d=1
00:04:01.732     03:54:30 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:01.732     03:54:30 skip_rpc -- scripts/common.sh@355 -- # echo 1
00:04:01.732    03:54:30 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:01.732     03:54:30 skip_rpc -- scripts/common.sh@366 -- # decimal 2
00:04:01.732     03:54:30 skip_rpc -- scripts/common.sh@353 -- # local d=2
00:04:01.732     03:54:30 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:01.732     03:54:30 skip_rpc -- scripts/common.sh@355 -- # echo 2
00:04:01.732    03:54:30 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:01.732    03:54:30 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:01.732    03:54:30 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:01.732    03:54:30 skip_rpc -- scripts/common.sh@368 -- # return 0
00:04:01.732    03:54:30 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:01.732    03:54:30 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:01.732  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:01.732  		--rc genhtml_branch_coverage=1
00:04:01.732  		--rc genhtml_function_coverage=1
00:04:01.732  		--rc genhtml_legend=1
00:04:01.732  		--rc geninfo_all_blocks=1
00:04:01.732  		--rc geninfo_unexecuted_blocks=1
00:04:01.732  		
00:04:01.732  		'
00:04:01.732    03:54:30 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:01.732  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:01.732  		--rc genhtml_branch_coverage=1
00:04:01.732  		--rc genhtml_function_coverage=1
00:04:01.732  		--rc genhtml_legend=1
00:04:01.732  		--rc geninfo_all_blocks=1
00:04:01.732  		--rc geninfo_unexecuted_blocks=1
00:04:01.732  		
00:04:01.732  		'
00:04:01.732    03:54:30 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:01.732  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:01.732  		--rc genhtml_branch_coverage=1
00:04:01.732  		--rc genhtml_function_coverage=1
00:04:01.732  		--rc genhtml_legend=1
00:04:01.732  		--rc geninfo_all_blocks=1
00:04:01.732  		--rc geninfo_unexecuted_blocks=1
00:04:01.732  		
00:04:01.732  		'
00:04:01.732    03:54:30 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:04:01.732  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:01.732  		--rc genhtml_branch_coverage=1
00:04:01.732  		--rc genhtml_function_coverage=1
00:04:01.732  		--rc genhtml_legend=1
00:04:01.732  		--rc geninfo_all_blocks=1
00:04:01.732  		--rc geninfo_unexecuted_blocks=1
00:04:01.732  		
00:04:01.732  		'
00:04:01.732   03:54:30 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:04:01.732   03:54:30 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt
00:04:01.732   03:54:30 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:04:01.732   03:54:30 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:01.732   03:54:30 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:01.732   03:54:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:01.732  ************************************
00:04:01.732  START TEST skip_rpc
00:04:01.732  ************************************
00:04:01.732   03:54:30 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc
00:04:01.732   03:54:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=103309
00:04:01.732   03:54:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:04:01.732   03:54:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:01.732   03:54:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5
00:04:01.990  [2024-12-09 03:54:30.360881] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:04:01.990  [2024-12-09 03:54:30.360973] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103309 ]
00:04:01.990  [2024-12-09 03:54:30.430697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:01.990  [2024-12-09 03:54:30.489519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:07.245   03:54:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:04:07.246   03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0
00:04:07.246   03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version
00:04:07.246   03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:04:07.246   03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:07.246    03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:04:07.246   03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:07.246   03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version
00:04:07.246   03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:07.246   03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:07.246   03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:04:07.246   03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1
00:04:07.246   03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:04:07.246   03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:04:07.246   03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:04:07.246   03:54:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:04:07.246   03:54:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 103309
00:04:07.246   03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 103309 ']'
00:04:07.246   03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 103309
00:04:07.246    03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname
00:04:07.246   03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:07.246    03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103309
00:04:07.246   03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:07.246   03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:07.246   03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103309'
00:04:07.246  killing process with pid 103309
00:04:07.246   03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 103309
00:04:07.246   03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 103309
00:04:07.246  
00:04:07.246  real	0m5.467s
00:04:07.246  user	0m5.148s
00:04:07.246  sys	0m0.340s
00:04:07.246   03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:07.246   03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:07.246  ************************************
00:04:07.246  END TEST skip_rpc
00:04:07.246  ************************************
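The skip_rpc test above starts `spdk_tgt --no-rpc-server` and then uses autotest's `NOT` helper to assert that `rpc_cmd spdk_get_version` fails: the helper runs the command, treats a non-zero exit as the expected outcome, and sets `es=1` (visible as `# es=1` in the trace). A hedged sketch of that expect-failure idiom, with a portable stand-in command rather than a real `rpc_cmd` invocation:

```python
import subprocess
import sys

def expect_failure(cmd) -> int:
    """Mimic autotest's NOT helper: succeed only if `cmd` exits non-zero."""
    rc = subprocess.run(cmd).returncode
    if rc == 0:
        raise AssertionError(f"{cmd} unexpectedly succeeded")
    return 1  # autotest records a plain failure as es=1

# A command that exits 1 stands in for rpc_cmd against a target that
# was started with --no-rpc-server (no /var/tmp/spdk.sock to talk to).
failing_cmd = [sys.executable, "-c", "raise SystemExit(1)"]
assert expect_failure(failing_cmd) == 1
```

The design point is that the test passes precisely because the RPC call cannot succeed; only after `killprocess` tears the target down does the suite move on to skip_rpc_with_json, which restarts the target with the RPC server enabled.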
00:04:07.246   03:54:35 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:04:07.246   03:54:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:07.246   03:54:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:07.246   03:54:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:07.246  ************************************
00:04:07.246  START TEST skip_rpc_with_json
00:04:07.246  ************************************
00:04:07.246   03:54:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json
00:04:07.246   03:54:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:04:07.246   03:54:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=103984
00:04:07.246   03:54:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:07.246   03:54:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:07.246   03:54:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 103984
00:04:07.246   03:54:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 103984 ']'
00:04:07.246   03:54:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:07.246   03:54:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:07.246   03:54:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:07.246  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:07.246   03:54:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:07.246   03:54:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:07.503  [2024-12-09 03:54:35.874370] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:04:07.503  [2024-12-09 03:54:35.874478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103984 ]
00:04:07.503  [2024-12-09 03:54:35.941980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:07.503  [2024-12-09 03:54:36.000481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:07.761   03:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:07.761   03:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0
00:04:07.761   03:54:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:04:07.761   03:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:07.761   03:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:07.761  [2024-12-09 03:54:36.271509] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:04:07.761  request:
00:04:07.761  {
00:04:07.761  "trtype": "tcp",
00:04:07.761  "method": "nvmf_get_transports",
00:04:07.761  "req_id": 1
00:04:07.761  }
00:04:07.761  Got JSON-RPC error response
00:04:07.761  response:
00:04:07.761  {
00:04:07.761  "code": -19,
00:04:07.761  "message": "No such device"
00:04:07.761  }
00:04:07.761   03:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:04:07.761   03:54:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:04:07.761   03:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:07.761   03:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:07.761  [2024-12-09 03:54:36.279655] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:04:07.761   03:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:07.761   03:54:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:04:07.761   03:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:07.761   03:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:08.019   03:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:08.019   03:54:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:04:08.019  {
00:04:08.019  "subsystems": [
00:04:08.019  {
00:04:08.019  "subsystem": "fsdev",
00:04:08.019  "config": [
00:04:08.019  {
00:04:08.019  "method": "fsdev_set_opts",
00:04:08.019  "params": {
00:04:08.019  "fsdev_io_pool_size": 65535,
00:04:08.019  "fsdev_io_cache_size": 256
00:04:08.019  }
00:04:08.019  }
00:04:08.019  ]
00:04:08.019  },
00:04:08.019  {
00:04:08.019  "subsystem": "vfio_user_target",
00:04:08.019  "config": null
00:04:08.019  },
00:04:08.019  {
00:04:08.019  "subsystem": "keyring",
00:04:08.019  "config": []
00:04:08.019  },
00:04:08.019  {
00:04:08.019  "subsystem": "iobuf",
00:04:08.019  "config": [
00:04:08.019  {
00:04:08.019  "method": "iobuf_set_options",
00:04:08.019  "params": {
00:04:08.019  "small_pool_count": 8192,
00:04:08.019  "large_pool_count": 1024,
00:04:08.019  "small_bufsize": 8192,
00:04:08.019  "large_bufsize": 135168,
00:04:08.019  "enable_numa": false
00:04:08.019  }
00:04:08.019  }
00:04:08.019  ]
00:04:08.019  },
00:04:08.019  {
00:04:08.019  "subsystem": "sock",
00:04:08.019  "config": [
00:04:08.019  {
00:04:08.019  "method": "sock_set_default_impl",
00:04:08.019  "params": {
00:04:08.019  "impl_name": "posix"
00:04:08.019  }
00:04:08.019  },
00:04:08.019  {
00:04:08.019  "method": "sock_impl_set_options",
00:04:08.019  "params": {
00:04:08.019  "impl_name": "ssl",
00:04:08.019  "recv_buf_size": 4096,
00:04:08.019  "send_buf_size": 4096,
00:04:08.019  "enable_recv_pipe": true,
00:04:08.019  "enable_quickack": false,
00:04:08.019  "enable_placement_id": 0,
00:04:08.019  "enable_zerocopy_send_server": true,
00:04:08.019  "enable_zerocopy_send_client": false,
00:04:08.019  "zerocopy_threshold": 0,
00:04:08.019  "tls_version": 0,
00:04:08.019  "enable_ktls": false
00:04:08.019  }
00:04:08.019  },
00:04:08.019  {
00:04:08.019  "method": "sock_impl_set_options",
00:04:08.019  "params": {
00:04:08.019  "impl_name": "posix",
00:04:08.019  "recv_buf_size": 2097152,
00:04:08.019  "send_buf_size": 2097152,
00:04:08.019  "enable_recv_pipe": true,
00:04:08.019  "enable_quickack": false,
00:04:08.019  "enable_placement_id": 0,
00:04:08.019  "enable_zerocopy_send_server": true,
00:04:08.019  "enable_zerocopy_send_client": false,
00:04:08.019  "zerocopy_threshold": 0,
00:04:08.019  "tls_version": 0,
00:04:08.019  "enable_ktls": false
00:04:08.019  }
00:04:08.019  }
00:04:08.019  ]
00:04:08.019  },
00:04:08.019  {
00:04:08.019  "subsystem": "vmd",
00:04:08.019  "config": []
00:04:08.019  },
00:04:08.019  {
00:04:08.019  "subsystem": "accel",
00:04:08.019  "config": [
00:04:08.019  {
00:04:08.019  "method": "accel_set_options",
00:04:08.019  "params": {
00:04:08.019  "small_cache_size": 128,
00:04:08.019  "large_cache_size": 16,
00:04:08.019  "task_count": 2048,
00:04:08.019  "sequence_count": 2048,
00:04:08.019  "buf_count": 2048
00:04:08.019  }
00:04:08.019  }
00:04:08.019  ]
00:04:08.019  },
00:04:08.019  {
00:04:08.019  "subsystem": "bdev",
00:04:08.019  "config": [
00:04:08.019  {
00:04:08.019  "method": "bdev_set_options",
00:04:08.019  "params": {
00:04:08.019  "bdev_io_pool_size": 65535,
00:04:08.019  "bdev_io_cache_size": 256,
00:04:08.019  "bdev_auto_examine": true,
00:04:08.019  "iobuf_small_cache_size": 128,
00:04:08.019  "iobuf_large_cache_size": 16
00:04:08.019  }
00:04:08.019  },
00:04:08.019  {
00:04:08.019  "method": "bdev_raid_set_options",
00:04:08.019  "params": {
00:04:08.019  "process_window_size_kb": 1024,
00:04:08.019  "process_max_bandwidth_mb_sec": 0
00:04:08.019  }
00:04:08.019  },
00:04:08.019  {
00:04:08.019  "method": "bdev_iscsi_set_options",
00:04:08.019  "params": {
00:04:08.019  "timeout_sec": 30
00:04:08.019  }
00:04:08.019  },
00:04:08.019  {
00:04:08.019  "method": "bdev_nvme_set_options",
00:04:08.019  "params": {
00:04:08.019  "action_on_timeout": "none",
00:04:08.019  "timeout_us": 0,
00:04:08.019  "timeout_admin_us": 0,
00:04:08.019  "keep_alive_timeout_ms": 10000,
00:04:08.019  "arbitration_burst": 0,
00:04:08.019  "low_priority_weight": 0,
00:04:08.019  "medium_priority_weight": 0,
00:04:08.019  "high_priority_weight": 0,
00:04:08.019  "nvme_adminq_poll_period_us": 10000,
00:04:08.019  "nvme_ioq_poll_period_us": 0,
00:04:08.019  "io_queue_requests": 0,
00:04:08.019  "delay_cmd_submit": true,
00:04:08.019  "transport_retry_count": 4,
00:04:08.019  "bdev_retry_count": 3,
00:04:08.019  "transport_ack_timeout": 0,
00:04:08.019  "ctrlr_loss_timeout_sec": 0,
00:04:08.019  "reconnect_delay_sec": 0,
00:04:08.019  "fast_io_fail_timeout_sec": 0,
00:04:08.019  "disable_auto_failback": false,
00:04:08.019  "generate_uuids": false,
00:04:08.019  "transport_tos": 0,
00:04:08.019  "nvme_error_stat": false,
00:04:08.019  "rdma_srq_size": 0,
00:04:08.019  "io_path_stat": false,
00:04:08.019  "allow_accel_sequence": false,
00:04:08.019  "rdma_max_cq_size": 0,
00:04:08.019  "rdma_cm_event_timeout_ms": 0,
00:04:08.019  "dhchap_digests": [
00:04:08.019  "sha256",
00:04:08.019  "sha384",
00:04:08.019  "sha512"
00:04:08.019  ],
00:04:08.019  "dhchap_dhgroups": [
00:04:08.019  "null",
00:04:08.019  "ffdhe2048",
00:04:08.019  "ffdhe3072",
00:04:08.019  "ffdhe4096",
00:04:08.019  "ffdhe6144",
00:04:08.019  "ffdhe8192"
00:04:08.019  ]
00:04:08.019  }
00:04:08.019  },
00:04:08.019  {
00:04:08.019  "method": "bdev_nvme_set_hotplug",
00:04:08.019  "params": {
00:04:08.019  "period_us": 100000,
00:04:08.019  "enable": false
00:04:08.019  }
00:04:08.019  },
00:04:08.019  {
00:04:08.019  "method": "bdev_wait_for_examine"
00:04:08.019  }
00:04:08.019  ]
00:04:08.019  },
00:04:08.019  {
00:04:08.019  "subsystem": "scsi",
00:04:08.019  "config": null
00:04:08.019  },
00:04:08.019  {
00:04:08.019  "subsystem": "scheduler",
00:04:08.019  "config": [
00:04:08.019  {
00:04:08.019  "method": "framework_set_scheduler",
00:04:08.019  "params": {
00:04:08.019  "name": "static"
00:04:08.019  }
00:04:08.019  }
00:04:08.019  ]
00:04:08.019  },
00:04:08.020  {
00:04:08.020  "subsystem": "vhost_scsi",
00:04:08.020  "config": []
00:04:08.020  },
00:04:08.020  {
00:04:08.020  "subsystem": "vhost_blk",
00:04:08.020  "config": []
00:04:08.020  },
00:04:08.020  {
00:04:08.020  "subsystem": "ublk",
00:04:08.020  "config": []
00:04:08.020  },
00:04:08.020  {
00:04:08.020  "subsystem": "nbd",
00:04:08.020  "config": []
00:04:08.020  },
00:04:08.020  {
00:04:08.020  "subsystem": "nvmf",
00:04:08.020  "config": [
00:04:08.020  {
00:04:08.020  "method": "nvmf_set_config",
00:04:08.020  "params": {
00:04:08.020  "discovery_filter": "match_any",
00:04:08.020  "admin_cmd_passthru": {
00:04:08.020  "identify_ctrlr": false
00:04:08.020  },
00:04:08.020  "dhchap_digests": [
00:04:08.020  "sha256",
00:04:08.020  "sha384",
00:04:08.020  "sha512"
00:04:08.020  ],
00:04:08.020  "dhchap_dhgroups": [
00:04:08.020  "null",
00:04:08.020  "ffdhe2048",
00:04:08.020  "ffdhe3072",
00:04:08.020  "ffdhe4096",
00:04:08.020  "ffdhe6144",
00:04:08.020  "ffdhe8192"
00:04:08.020  ]
00:04:08.020  }
00:04:08.020  },
00:04:08.020  {
00:04:08.020  "method": "nvmf_set_max_subsystems",
00:04:08.020  "params": {
00:04:08.020  "max_subsystems": 1024
00:04:08.020  }
00:04:08.020  },
00:04:08.020  {
00:04:08.020  "method": "nvmf_set_crdt",
00:04:08.020  "params": {
00:04:08.020  "crdt1": 0,
00:04:08.020  "crdt2": 0,
00:04:08.020  "crdt3": 0
00:04:08.020  }
00:04:08.020  },
00:04:08.020  {
00:04:08.020  "method": "nvmf_create_transport",
00:04:08.020  "params": {
00:04:08.020  "trtype": "TCP",
00:04:08.020  "max_queue_depth": 128,
00:04:08.020  "max_io_qpairs_per_ctrlr": 127,
00:04:08.020  "in_capsule_data_size": 4096,
00:04:08.020  "max_io_size": 131072,
00:04:08.020  "io_unit_size": 131072,
00:04:08.020  "max_aq_depth": 128,
00:04:08.020  "num_shared_buffers": 511,
00:04:08.020  "buf_cache_size": 4294967295,
00:04:08.020  "dif_insert_or_strip": false,
00:04:08.020  "zcopy": false,
00:04:08.020  "c2h_success": true,
00:04:08.020  "sock_priority": 0,
00:04:08.020  "abort_timeout_sec": 1,
00:04:08.020  "ack_timeout": 0,
00:04:08.020  "data_wr_pool_size": 0
00:04:08.020  }
00:04:08.020  }
00:04:08.020  ]
00:04:08.020  },
00:04:08.020  {
00:04:08.020  "subsystem": "iscsi",
00:04:08.020  "config": [
00:04:08.020  {
00:04:08.020  "method": "iscsi_set_options",
00:04:08.020  "params": {
00:04:08.020  "node_base": "iqn.2016-06.io.spdk",
00:04:08.020  "max_sessions": 128,
00:04:08.020  "max_connections_per_session": 2,
00:04:08.020  "max_queue_depth": 64,
00:04:08.020  "default_time2wait": 2,
00:04:08.020  "default_time2retain": 20,
00:04:08.020  "first_burst_length": 8192,
00:04:08.020  "immediate_data": true,
00:04:08.020  "allow_duplicated_isid": false,
00:04:08.020  "error_recovery_level": 0,
00:04:08.020  "nop_timeout": 60,
00:04:08.020  "nop_in_interval": 30,
00:04:08.020  "disable_chap": false,
00:04:08.020  "require_chap": false,
00:04:08.020  "mutual_chap": false,
00:04:08.020  "chap_group": 0,
00:04:08.020  "max_large_datain_per_connection": 64,
00:04:08.020  "max_r2t_per_connection": 4,
00:04:08.020  "pdu_pool_size": 36864,
00:04:08.020  "immediate_data_pool_size": 16384,
00:04:08.020  "data_out_pool_size": 2048
00:04:08.020  }
00:04:08.020  }
00:04:08.020  ]
00:04:08.020  }
00:04:08.020  ]
00:04:08.020  }
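The `save_config` output above has a regular shape: a top-level `subsystems` array whose entries each carry a `subsystem` name and a `config` list of `{method, params}` pairs (or `null`/empty when the subsystem has nothing to save). A minimal sketch of walking that structure, using a hand-trimmed excerpt of the JSON above (not the full file):

```python
import json

# Trimmed excerpt of the saved config printed above; the full file has
# many more subsystems, but the shape is the same throughout.
config_json = """
{
  "subsystems": [
    {"subsystem": "scsi", "config": null},
    {"subsystem": "nvmf",
     "config": [
       {"method": "nvmf_create_transport",
        "params": {"trtype": "TCP", "max_queue_depth": 128}}
     ]}
  ]
}
"""

cfg = json.loads(config_json)
for sub in cfg["subsystems"]:
    # "config" may be null (e.g. scsi above), so guard before iterating.
    for entry in sub.get("config") or []:
        if entry["method"] == "nvmf_create_transport":
            print(sub["subsystem"], entry["params"]["trtype"])
```

This is also why the test can round-trip: the next step boots a fresh `spdk_tgt` with `--json config.json`, and the saved `nvmf_create_transport` call replays, producing the "TCP Transport Init" notice the test greps for.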
00:04:08.020   03:54:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:04:08.020   03:54:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 103984
00:04:08.020   03:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 103984 ']'
00:04:08.020   03:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 103984
00:04:08.020    03:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:04:08.020   03:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:08.020    03:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103984
00:04:08.020   03:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:08.020   03:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:08.020   03:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103984'
00:04:08.020  killing process with pid 103984
00:04:08.020   03:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 103984
00:04:08.020   03:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 103984
00:04:08.587   03:54:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=104124
00:04:08.587   03:54:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:04:08.587   03:54:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:04:13.844   03:54:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 104124
00:04:13.844   03:54:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 104124 ']'
00:04:13.844   03:54:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 104124
00:04:13.844    03:54:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:04:13.844   03:54:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:13.844    03:54:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104124
00:04:13.844   03:54:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:13.844   03:54:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:13.844   03:54:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104124'
00:04:13.844  killing process with pid 104124
00:04:13.844   03:54:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 104124
00:04:13.844   03:54:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 104124
00:04:13.844   03:54:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt
00:04:13.844   03:54:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt
00:04:13.844  
00:04:13.844  real	0m6.538s
00:04:13.844  user	0m6.187s
00:04:13.844  sys	0m0.670s
00:04:13.844   03:54:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:13.844   03:54:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:13.844  ************************************
00:04:13.844  END TEST skip_rpc_with_json
00:04:13.844  ************************************
00:04:13.844   03:54:42 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:04:13.844   03:54:42 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:13.844   03:54:42 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:13.844   03:54:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:13.844  ************************************
00:04:13.844  START TEST skip_rpc_with_delay
00:04:13.844  ************************************
00:04:13.844   03:54:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay
00:04:13.844   03:54:42 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:04:13.844   03:54:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0
00:04:13.845   03:54:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:04:13.845   03:54:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:13.845   03:54:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:13.845    03:54:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:13.845   03:54:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:13.845    03:54:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:13.845   03:54:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:13.845   03:54:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:13.845   03:54:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:04:13.845   03:54:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:04:14.102  [2024-12-09 03:54:42.460588] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:04:14.102   03:54:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1
00:04:14.102   03:54:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:04:14.102   03:54:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:04:14.102   03:54:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:04:14.102  
00:04:14.102  real	0m0.073s
00:04:14.102  user	0m0.049s
00:04:14.102  sys	0m0.023s
00:04:14.102   03:54:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:14.102   03:54:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:04:14.102  ************************************
00:04:14.102  END TEST skip_rpc_with_delay
00:04:14.102  ************************************
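The trace above shows the harness's `NOT` helper: it runs `spdk_tgt --no-rpc-server --wait-for-rpc`, expects the documented startup error, and treats the nonzero exit as a pass. A hedged sketch of that expect-failure idiom (the helper name here is illustrative, not the actual `autotest_common.sh` implementation):

```shell
# Sketch of the NOT-style wrapper seen in the trace: succeed only when
# the wrapped command fails.
expect_failure() {
  if "$@"; then
    echo "unexpectedly succeeded: $*" >&2
    return 1
  fi
  return 0
}

# Usage: a command that must fail makes the wrapper succeed.
expect_failure false && echo "ok: command failed as expected"
```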
00:04:14.102    03:54:42 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:04:14.102   03:54:42 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:04:14.102   03:54:42 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:04:14.102   03:54:42 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:14.102   03:54:42 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:14.102   03:54:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:14.102  ************************************
00:04:14.102  START TEST exit_on_failed_rpc_init
00:04:14.102  ************************************
00:04:14.102   03:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init
00:04:14.102   03:54:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=104842
00:04:14.102   03:54:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:14.102   03:54:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 104842
00:04:14.102   03:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 104842 ']'
00:04:14.102   03:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:14.102   03:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:14.102   03:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:14.102  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:14.102   03:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:14.102   03:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:04:14.102  [2024-12-09 03:54:42.585802] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:04:14.102  [2024-12-09 03:54:42.585883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104842 ]
00:04:14.102  [2024-12-09 03:54:42.650039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:14.360  [2024-12-09 03:54:42.709447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:14.617   03:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:14.617   03:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0
00:04:14.617   03:54:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:14.617   03:54:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:04:14.617   03:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0
00:04:14.617   03:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:04:14.617   03:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:14.618   03:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:14.618    03:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:14.618   03:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:14.618    03:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:14.618   03:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:14.618   03:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:14.618   03:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:04:14.618   03:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:04:14.618  [2024-12-09 03:54:43.025442] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:04:14.618  [2024-12-09 03:54:43.025517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104967 ]
00:04:14.618  [2024-12-09 03:54:43.091308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:14.618  [2024-12-09 03:54:43.149877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:14.618  [2024-12-09 03:54:43.150013] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:04:14.618  [2024-12-09 03:54:43.150034] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:04:14.618  [2024-12-09 03:54:43.150045] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:04:14.875   03:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234
00:04:14.875   03:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:04:14.875   03:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106
00:04:14.875   03:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in
00:04:14.875   03:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1
00:04:14.875   03:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 ))
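The `es=234` → `es=106` → `es=1` sequence above is the harness normalizing the exit status: values above 128 use signal-style encoding, so the helper subtracts 128 before matching known codes and collapsing them to a plain failure of 1. A sketch of that arithmetic as read directly from the trace (the >128 rule is inferred from the `(( es > 128 ))` line, not from the helper's source):

```python
# Normalize an exit status the way the trace above does:
# 234 > 128, so strip the signal-style offset -> 106, then the case
# statement maps recognized statuses down to a simple es=1.
es = 234
if es > 128:
    es -= 128
assert es == 106
print(es)
```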
00:04:14.875   03:54:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:04:14.875   03:54:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 104842
00:04:14.875   03:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 104842 ']'
00:04:14.875   03:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 104842
00:04:14.875    03:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname
00:04:14.875   03:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:14.875    03:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104842
00:04:14.875   03:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:14.875   03:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:14.875   03:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104842'
00:04:14.875  killing process with pid 104842
00:04:14.875   03:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 104842
00:04:14.875   03:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 104842
00:04:15.135  
00:04:15.135  real	0m1.152s
00:04:15.135  user	0m1.265s
00:04:15.135  sys	0m0.444s
00:04:15.135   03:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:15.135   03:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:04:15.135  ************************************
00:04:15.135  END TEST exit_on_failed_rpc_init
00:04:15.135  ************************************
00:04:15.135   03:54:43 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:04:15.135  
00:04:15.135  real	0m13.585s
00:04:15.135  user	0m12.827s
00:04:15.135  sys	0m1.673s
00:04:15.135   03:54:43 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:15.135   03:54:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:15.135  ************************************
00:04:15.135  END TEST skip_rpc
00:04:15.135  ************************************
00:04:15.394   03:54:43  -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:04:15.394   03:54:43  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:15.394   03:54:43  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:15.394   03:54:43  -- common/autotest_common.sh@10 -- # set +x
00:04:15.394  ************************************
00:04:15.394  START TEST rpc_client
00:04:15.394  ************************************
00:04:15.394   03:54:43 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:04:15.394  * Looking for test storage...
00:04:15.394  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client
00:04:15.394    03:54:43 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:15.394     03:54:43 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version
00:04:15.394     03:54:43 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:15.394    03:54:43 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:15.394    03:54:43 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:15.394    03:54:43 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:15.394    03:54:43 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:15.394    03:54:43 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:04:15.394    03:54:43 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:04:15.394    03:54:43 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:04:15.394    03:54:43 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:04:15.394    03:54:43 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:04:15.394    03:54:43 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:04:15.394    03:54:43 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:04:15.394    03:54:43 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:15.394    03:54:43 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:04:15.394    03:54:43 rpc_client -- scripts/common.sh@345 -- # : 1
00:04:15.394    03:54:43 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:15.394    03:54:43 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:15.394     03:54:43 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:04:15.394     03:54:43 rpc_client -- scripts/common.sh@353 -- # local d=1
00:04:15.394     03:54:43 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:15.394     03:54:43 rpc_client -- scripts/common.sh@355 -- # echo 1
00:04:15.394    03:54:43 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:04:15.394     03:54:43 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:04:15.394     03:54:43 rpc_client -- scripts/common.sh@353 -- # local d=2
00:04:15.394     03:54:43 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:15.394     03:54:43 rpc_client -- scripts/common.sh@355 -- # echo 2
00:04:15.394    03:54:43 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:04:15.394    03:54:43 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:15.394    03:54:43 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:15.394    03:54:43 rpc_client -- scripts/common.sh@368 -- # return 0
00:04:15.394    03:54:43 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:15.394    03:54:43 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:15.394  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:15.394  		--rc genhtml_branch_coverage=1
00:04:15.395  		--rc genhtml_function_coverage=1
00:04:15.395  		--rc genhtml_legend=1
00:04:15.395  		--rc geninfo_all_blocks=1
00:04:15.395  		--rc geninfo_unexecuted_blocks=1
00:04:15.395  		
00:04:15.395  		'
00:04:15.395    03:54:43 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:15.395  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:15.395  		--rc genhtml_branch_coverage=1
00:04:15.395  		--rc genhtml_function_coverage=1
00:04:15.395  		--rc genhtml_legend=1
00:04:15.395  		--rc geninfo_all_blocks=1
00:04:15.395  		--rc geninfo_unexecuted_blocks=1
00:04:15.395  		
00:04:15.395  		'
00:04:15.395    03:54:43 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:15.395  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:15.395  		--rc genhtml_branch_coverage=1
00:04:15.395  		--rc genhtml_function_coverage=1
00:04:15.395  		--rc genhtml_legend=1
00:04:15.395  		--rc geninfo_all_blocks=1
00:04:15.395  		--rc geninfo_unexecuted_blocks=1
00:04:15.395  		
00:04:15.395  		'
00:04:15.395    03:54:43 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:04:15.395  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:15.395  		--rc genhtml_branch_coverage=1
00:04:15.395  		--rc genhtml_function_coverage=1
00:04:15.395  		--rc genhtml_legend=1
00:04:15.395  		--rc geninfo_all_blocks=1
00:04:15.395  		--rc geninfo_unexecuted_blocks=1
00:04:15.395  		
00:04:15.395  		'
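The long trace above is `cmp_versions` deciding whether the installed lcov is older than 2.x ("1.15 < 2") before enabling the branch/function coverage flags. A rough sketch of that dotted-version comparison, assuming the same field-by-field numeric rule the script traces (split on separators, compare numerically, treat missing fields as 0); this is an illustration, not the `scripts/common.sh` implementation:

```python
# Compare dotted version strings numerically, field by field, padding
# the shorter one with zeros -- the rule cmp_versions appears to apply.
def version_lt(a: str, b: str) -> bool:
    pa = [int(x) for x in a.replace("-", ".").split(".") if x.isdigit()]
    pb = [int(x) for x in b.replace("-", ".").split(".") if x.isdigit()]
    n = max(len(pa), len(pb))
    pa += [0] * (n - len(pa))
    pb += [0] * (n - len(pb))
    return pa < pb

print(version_lt("1.15", "2"))  # lcov 1.15 predates 2.x
```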
00:04:15.395   03:54:43 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test
00:04:15.395  OK
00:04:15.395   03:54:43 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:04:15.395  
00:04:15.395  real	0m0.165s
00:04:15.395  user	0m0.104s
00:04:15.395  sys	0m0.069s
00:04:15.395   03:54:43 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:15.395   03:54:43 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:04:15.395  ************************************
00:04:15.395  END TEST rpc_client
00:04:15.395  ************************************
00:04:15.395   03:54:43  -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:04:15.395   03:54:43  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:15.395   03:54:43  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:15.395   03:54:43  -- common/autotest_common.sh@10 -- # set +x
00:04:15.395  ************************************
00:04:15.395  START TEST json_config
00:04:15.395  ************************************
00:04:15.395   03:54:43 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:04:15.654    03:54:44 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:15.654     03:54:44 json_config -- common/autotest_common.sh@1711 -- # lcov --version
00:04:15.654     03:54:44 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:15.654    03:54:44 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:15.654    03:54:44 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:15.654    03:54:44 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:15.654    03:54:44 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:15.654    03:54:44 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:04:15.654    03:54:44 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:04:15.654    03:54:44 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:04:15.654    03:54:44 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:04:15.654    03:54:44 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:04:15.654    03:54:44 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:04:15.654    03:54:44 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:04:15.654    03:54:44 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:15.654    03:54:44 json_config -- scripts/common.sh@344 -- # case "$op" in
00:04:15.654    03:54:44 json_config -- scripts/common.sh@345 -- # : 1
00:04:15.654    03:54:44 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:15.654    03:54:44 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:15.654     03:54:44 json_config -- scripts/common.sh@365 -- # decimal 1
00:04:15.654     03:54:44 json_config -- scripts/common.sh@353 -- # local d=1
00:04:15.654     03:54:44 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:15.654     03:54:44 json_config -- scripts/common.sh@355 -- # echo 1
00:04:15.654    03:54:44 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:04:15.654     03:54:44 json_config -- scripts/common.sh@366 -- # decimal 2
00:04:15.654     03:54:44 json_config -- scripts/common.sh@353 -- # local d=2
00:04:15.654     03:54:44 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:15.654     03:54:44 json_config -- scripts/common.sh@355 -- # echo 2
00:04:15.654    03:54:44 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:04:15.654    03:54:44 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:15.654    03:54:44 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:15.654    03:54:44 json_config -- scripts/common.sh@368 -- # return 0
00:04:15.654    03:54:44 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:15.654    03:54:44 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:15.654  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:15.654  		--rc genhtml_branch_coverage=1
00:04:15.654  		--rc genhtml_function_coverage=1
00:04:15.654  		--rc genhtml_legend=1
00:04:15.654  		--rc geninfo_all_blocks=1
00:04:15.654  		--rc geninfo_unexecuted_blocks=1
00:04:15.654  		
00:04:15.654  		'
00:04:15.654    03:54:44 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:15.654  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:15.654  		--rc genhtml_branch_coverage=1
00:04:15.654  		--rc genhtml_function_coverage=1
00:04:15.654  		--rc genhtml_legend=1
00:04:15.654  		--rc geninfo_all_blocks=1
00:04:15.654  		--rc geninfo_unexecuted_blocks=1
00:04:15.654  		
00:04:15.654  		'
00:04:15.654    03:54:44 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:15.654  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:15.654  		--rc genhtml_branch_coverage=1
00:04:15.654  		--rc genhtml_function_coverage=1
00:04:15.654  		--rc genhtml_legend=1
00:04:15.654  		--rc geninfo_all_blocks=1
00:04:15.654  		--rc geninfo_unexecuted_blocks=1
00:04:15.654  		
00:04:15.654  		'
00:04:15.654    03:54:44 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:04:15.654  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:15.654  		--rc genhtml_branch_coverage=1
00:04:15.654  		--rc genhtml_function_coverage=1
00:04:15.654  		--rc genhtml_legend=1
00:04:15.654  		--rc geninfo_all_blocks=1
00:04:15.654  		--rc geninfo_unexecuted_blocks=1
00:04:15.654  		
00:04:15.654  		'
00:04:15.654   03:54:44 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:04:15.654     03:54:44 json_config -- nvmf/common.sh@7 -- # uname -s
00:04:15.654    03:54:44 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:04:15.654    03:54:44 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:04:15.654    03:54:44 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:04:15.654    03:54:44 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:04:15.654    03:54:44 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:04:15.654    03:54:44 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:04:15.654    03:54:44 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:04:15.654    03:54:44 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:04:15.654    03:54:44 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:04:15.654     03:54:44 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:04:15.654    03:54:44 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:04:15.654    03:54:44 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:04:15.654    03:54:44 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:04:15.654    03:54:44 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:04:15.654    03:54:44 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:04:15.654    03:54:44 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:04:15.654    03:54:44 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:04:15.654     03:54:44 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:04:15.654     03:54:44 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:04:15.654     03:54:44 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:15.654     03:54:44 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:15.654      03:54:44 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:15.654      03:54:44 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:15.654      03:54:44 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:15.654      03:54:44 json_config -- paths/export.sh@5 -- # export PATH
00:04:15.654      03:54:44 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:15.654    03:54:44 json_config -- nvmf/common.sh@51 -- # : 0
00:04:15.654    03:54:44 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:04:15.654    03:54:44 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:04:15.654    03:54:44 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:04:15.654    03:54:44 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:04:15.655    03:54:44 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:04:15.655    03:54:44 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:04:15.655  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:04:15.655    03:54:44 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:04:15.655    03:54:44 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:04:15.655    03:54:44 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
00:04:15.655   03:54:44 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:04:15.655   03:54:44 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:04:15.655   03:54:44 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:04:15.655   03:54:44 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:04:15.655   03:54:44 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + 	SPDK_TEST_ISCSI + 	SPDK_TEST_NVMF + 	SPDK_TEST_VHOST + 	SPDK_TEST_VHOST_INIT + 	SPDK_TEST_RBD == 0 ))
00:04:15.655   03:54:44 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='')
00:04:15.655   03:54:44 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid
00:04:15.655   03:54:44 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock')
00:04:15.655   03:54:44 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket
00:04:15.655   03:54:44 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024')
00:04:15.655   03:54:44 json_config -- json_config/json_config.sh@33 -- # declare -A app_params
00:04:15.655   03:54:44 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json')
00:04:15.655   03:54:44 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path
00:04:15.655   03:54:44 json_config -- json_config/json_config.sh@40 -- # last_event_id=0
00:04:15.655   03:54:44 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:04:15.655   03:54:44 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init'
00:04:15.655  INFO: JSON configuration test init
00:04:15.655   03:54:44 json_config -- json_config/json_config.sh@364 -- # json_config_test_init
00:04:15.655   03:54:44 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init
00:04:15.655   03:54:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:15.655   03:54:44 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:15.655   03:54:44 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target
00:04:15.655   03:54:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:15.655   03:54:44 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:15.655   03:54:44 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc
00:04:15.655   03:54:44 json_config -- json_config/common.sh@9 -- # local app=target
00:04:15.655   03:54:44 json_config -- json_config/common.sh@10 -- # shift
00:04:15.655   03:54:44 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:04:15.655   03:54:44 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:04:15.655   03:54:44 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:04:15.655   03:54:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:15.655   03:54:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:15.655   03:54:44 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=105227
00:04:15.655   03:54:44 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
00:04:15.655   03:54:44 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:04:15.655  Waiting for target to run...
00:04:15.655   03:54:44 json_config -- json_config/common.sh@25 -- # waitforlisten 105227 /var/tmp/spdk_tgt.sock
00:04:15.655   03:54:44 json_config -- common/autotest_common.sh@835 -- # '[' -z 105227 ']'
00:04:15.655   03:54:44 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:04:15.655   03:54:44 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:15.655   03:54:44 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:04:15.655  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:04:15.655   03:54:44 json_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:15.655   03:54:44 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:15.655  [2024-12-09 03:54:44.174166] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:04:15.655  [2024-12-09 03:54:44.174267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105227 ]
00:04:16.221  [2024-12-09 03:54:44.499828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:16.221  [2024-12-09 03:54:44.542630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:16.787   03:54:45 json_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:16.787   03:54:45 json_config -- common/autotest_common.sh@868 -- # return 0
00:04:16.787   03:54:45 json_config -- json_config/common.sh@26 -- # echo ''
00:04:16.787  
00:04:16.787   03:54:45 json_config -- json_config/json_config.sh@276 -- # create_accel_config
00:04:16.787   03:54:45 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config
00:04:16.787   03:54:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:16.787   03:54:45 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:16.787   03:54:45 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]]
00:04:16.787   03:54:45 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config
00:04:16.787   03:54:45 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:16.787   03:54:45 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:16.787   03:54:45 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems
00:04:16.787   03:54:45 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config
00:04:16.787   03:54:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
00:04:20.068   03:54:48 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types
00:04:20.068   03:54:48 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types
00:04:20.068   03:54:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:20.068   03:54:48 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:20.068   03:54:48 json_config -- json_config/json_config.sh@45 -- # local ret=0
00:04:20.068   03:54:48 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister')
00:04:20.068   03:54:48 json_config -- json_config/json_config.sh@46 -- # local enabled_types
00:04:20.068   03:54:48 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]]
00:04:20.068   03:54:48 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister")
00:04:20.068    03:54:48 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types
00:04:20.068    03:54:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
00:04:20.068    03:54:48 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]'
00:04:20.068   03:54:48 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister')
00:04:20.068   03:54:48 json_config -- json_config/json_config.sh@51 -- # local get_types
00:04:20.068   03:54:48 json_config -- json_config/json_config.sh@53 -- # local type_diff
00:04:20.068    03:54:48 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister
00:04:20.068    03:54:48 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n'
00:04:20.068    03:54:48 json_config -- json_config/json_config.sh@54 -- # sort
00:04:20.068    03:54:48 json_config -- json_config/json_config.sh@54 -- # uniq -u
00:04:20.068   03:54:48 json_config -- json_config/json_config.sh@54 -- # type_diff=
00:04:20.068   03:54:48 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]]
00:04:20.068   03:54:48 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types
00:04:20.068   03:54:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:20.069   03:54:48 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:20.326   03:54:48 json_config -- json_config/json_config.sh@62 -- # return 0
00:04:20.326   03:54:48 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]]
00:04:20.326   03:54:48 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]]
00:04:20.326   03:54:48 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]]
00:04:20.326   03:54:48 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]]
00:04:20.326   03:54:48 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config
00:04:20.326   03:54:48 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config
00:04:20.326   03:54:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:20.326   03:54:48 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:20.326   03:54:48 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1
00:04:20.326   03:54:48 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]]
00:04:20.326   03:54:48 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]]
00:04:20.326   03:54:48 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0
00:04:20.326   03:54:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
00:04:20.584  MallocForNvmf0
00:04:20.584   03:54:48 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
00:04:20.584   03:54:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
00:04:20.842  MallocForNvmf1
00:04:20.842   03:54:49 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0
00:04:20.842   03:54:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
00:04:21.100  [2024-12-09 03:54:49.451077] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:04:21.100   03:54:49 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:04:21.100   03:54:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:04:21.358   03:54:49 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:04:21.358   03:54:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:04:21.615   03:54:50 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:04:21.615   03:54:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:04:21.874   03:54:50 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:04:21.874   03:54:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:04:22.132  [2024-12-09 03:54:50.538710] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:04:22.132   03:54:50 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config
00:04:22.132   03:54:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:22.132   03:54:50 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:22.132   03:54:50 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target
00:04:22.132   03:54:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:22.132   03:54:50 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:22.132   03:54:50 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]]
00:04:22.132   03:54:50 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:04:22.132   03:54:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:04:22.390  MallocBdevForConfigChangeCheck
00:04:22.390   03:54:50 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init
00:04:22.390   03:54:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:22.390   03:54:50 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:22.390   03:54:50 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config
00:04:22.390   03:54:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:04:22.955   03:54:51 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...'
00:04:22.955  INFO: shutting down applications...
00:04:22.955   03:54:51 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]]
00:04:22.955   03:54:51 json_config -- json_config/json_config.sh@375 -- # json_config_clear target
00:04:22.955   03:54:51 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]]
00:04:22.955   03:54:51 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:04:24.855  Calling clear_iscsi_subsystem
00:04:24.856  Calling clear_nvmf_subsystem
00:04:24.856  Calling clear_nbd_subsystem
00:04:24.856  Calling clear_ublk_subsystem
00:04:24.856  Calling clear_vhost_blk_subsystem
00:04:24.856  Calling clear_vhost_scsi_subsystem
00:04:24.856  Calling clear_bdev_subsystem
00:04:24.856   03:54:52 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
00:04:24.856   03:54:52 json_config -- json_config/json_config.sh@350 -- # count=100
00:04:24.856   03:54:52 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']'
00:04:24.856   03:54:52 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:04:24.856   03:54:52 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:04:24.856   03:54:52 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty
00:04:24.856   03:54:53 json_config -- json_config/json_config.sh@352 -- # break
00:04:24.856   03:54:53 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']'
00:04:24.856   03:54:53 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target
00:04:24.856   03:54:53 json_config -- json_config/common.sh@31 -- # local app=target
00:04:24.856   03:54:53 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:04:24.856   03:54:53 json_config -- json_config/common.sh@35 -- # [[ -n 105227 ]]
00:04:24.856   03:54:53 json_config -- json_config/common.sh@38 -- # kill -SIGINT 105227
00:04:24.856   03:54:53 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:04:24.856   03:54:53 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:04:24.856   03:54:53 json_config -- json_config/common.sh@41 -- # kill -0 105227
00:04:24.856   03:54:53 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:04:25.423   03:54:53 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:04:25.423   03:54:53 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:04:25.423   03:54:53 json_config -- json_config/common.sh@41 -- # kill -0 105227
00:04:25.423   03:54:53 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:04:25.423   03:54:53 json_config -- json_config/common.sh@43 -- # break
00:04:25.423   03:54:53 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:04:25.423   03:54:53 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:04:25.423  SPDK target shutdown done
00:04:25.423   03:54:53 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...'
00:04:25.423  INFO: relaunching applications...
00:04:25.423   03:54:53 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:25.423   03:54:53 json_config -- json_config/common.sh@9 -- # local app=target
00:04:25.423   03:54:53 json_config -- json_config/common.sh@10 -- # shift
00:04:25.423   03:54:53 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:04:25.423   03:54:53 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:04:25.423   03:54:53 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:04:25.423   03:54:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:25.423   03:54:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:25.423   03:54:53 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=106431
00:04:25.423   03:54:53 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:25.423   03:54:53 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:04:25.423  Waiting for target to run...
00:04:25.423   03:54:53 json_config -- json_config/common.sh@25 -- # waitforlisten 106431 /var/tmp/spdk_tgt.sock
00:04:25.423   03:54:53 json_config -- common/autotest_common.sh@835 -- # '[' -z 106431 ']'
00:04:25.423   03:54:53 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:04:25.423   03:54:53 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:25.423   03:54:53 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:04:25.423  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:04:25.423   03:54:53 json_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:25.423   03:54:53 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:25.423  [2024-12-09 03:54:53.932218] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:04:25.423  [2024-12-09 03:54:53.932317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106431 ]
00:04:25.990  [2024-12-09 03:54:54.446917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:25.990  [2024-12-09 03:54:54.498660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:29.272  [2024-12-09 03:54:57.554292] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:04:29.272  [2024-12-09 03:54:57.586751] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:04:29.272   03:54:57 json_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:29.272   03:54:57 json_config -- common/autotest_common.sh@868 -- # return 0
00:04:29.272   03:54:57 json_config -- json_config/common.sh@26 -- # echo ''
00:04:29.272  
00:04:29.272   03:54:57 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]]
00:04:29.272   03:54:57 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...'
00:04:29.272  INFO: Checking if target configuration is the same...
00:04:29.272   03:54:57 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:29.272    03:54:57 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config
00:04:29.272    03:54:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:04:29.272  + '[' 2 -ne 2 ']'
00:04:29.272  +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:04:29.272  ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:04:29.272  + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:04:29.272  +++ basename /dev/fd/62
00:04:29.272  ++ mktemp /tmp/62.XXX
00:04:29.272  + tmp_file_1=/tmp/62.CXd
00:04:29.272  +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:29.272  ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:04:29.272  + tmp_file_2=/tmp/spdk_tgt_config.json.uBN
00:04:29.272  + ret=0
00:04:29.272  + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:04:29.530  + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:04:29.530  + diff -u /tmp/62.CXd /tmp/spdk_tgt_config.json.uBN
00:04:29.530  + echo 'INFO: JSON config files are the same'
00:04:29.530  INFO: JSON config files are the same
00:04:29.530  + rm /tmp/62.CXd /tmp/spdk_tgt_config.json.uBN
00:04:29.530  + exit 0
00:04:29.530   03:54:58 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]]
00:04:29.530   03:54:58 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...'
00:04:29.530  INFO: changing configuration and checking if this can be detected...
00:04:29.530   03:54:58 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:04:29.530   03:54:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:04:29.788   03:54:58 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:29.788    03:54:58 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config
00:04:29.788    03:54:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:04:29.788  + '[' 2 -ne 2 ']'
00:04:29.788  +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:04:29.788  ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:04:29.788  + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:04:29.788  +++ basename /dev/fd/62
00:04:29.788  ++ mktemp /tmp/62.XXX
00:04:29.788  + tmp_file_1=/tmp/62.S42
00:04:30.046  +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:30.046  ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:04:30.046  + tmp_file_2=/tmp/spdk_tgt_config.json.udr
00:04:30.046  + ret=0
00:04:30.046  + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:04:30.304  + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:04:30.304  + diff -u /tmp/62.S42 /tmp/spdk_tgt_config.json.udr
00:04:30.304  + ret=1
00:04:30.304  + echo '=== Start of file: /tmp/62.S42 ==='
00:04:30.304  + cat /tmp/62.S42
00:04:30.304  + echo '=== End of file: /tmp/62.S42 ==='
00:04:30.304  + echo ''
00:04:30.304  + echo '=== Start of file: /tmp/spdk_tgt_config.json.udr ==='
00:04:30.304  + cat /tmp/spdk_tgt_config.json.udr
00:04:30.304  + echo '=== End of file: /tmp/spdk_tgt_config.json.udr ==='
00:04:30.304  + echo ''
00:04:30.304  + rm /tmp/62.S42 /tmp/spdk_tgt_config.json.udr
00:04:30.304  + exit 1
00:04:30.304   03:54:58 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.'
00:04:30.304  INFO: configuration change detected.
00:04:30.304   03:54:58 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini
00:04:30.304   03:54:58 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini
00:04:30.304   03:54:58 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:30.304   03:54:58 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:30.304   03:54:58 json_config -- json_config/json_config.sh@314 -- # local ret=0
00:04:30.304   03:54:58 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]]
00:04:30.304   03:54:58 json_config -- json_config/json_config.sh@324 -- # [[ -n 106431 ]]
00:04:30.304   03:54:58 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config
00:04:30.304   03:54:58 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config
00:04:30.304   03:54:58 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:30.304   03:54:58 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:30.304   03:54:58 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]]
00:04:30.304    03:54:58 json_config -- json_config/json_config.sh@200 -- # uname -s
00:04:30.304   03:54:58 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]]
00:04:30.304   03:54:58 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio
00:04:30.304   03:54:58 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]]
00:04:30.304   03:54:58 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config
00:04:30.304   03:54:58 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:30.304   03:54:58 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:30.304   03:54:58 json_config -- json_config/json_config.sh@330 -- # killprocess 106431
00:04:30.304   03:54:58 json_config -- common/autotest_common.sh@954 -- # '[' -z 106431 ']'
00:04:30.304   03:54:58 json_config -- common/autotest_common.sh@958 -- # kill -0 106431
00:04:30.304    03:54:58 json_config -- common/autotest_common.sh@959 -- # uname
00:04:30.304   03:54:58 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:30.304    03:54:58 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106431
00:04:30.562   03:54:58 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:30.562   03:54:58 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:30.562   03:54:58 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106431'
00:04:30.562  killing process with pid 106431
00:04:30.562   03:54:58 json_config -- common/autotest_common.sh@973 -- # kill 106431
00:04:30.562   03:54:58 json_config -- common/autotest_common.sh@978 -- # wait 106431
00:04:31.939   03:55:00 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:31.939   03:55:00 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini
00:04:31.939   03:55:00 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:31.939   03:55:00 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:32.199   03:55:00 json_config -- json_config/json_config.sh@335 -- # return 0
00:04:32.199   03:55:00 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success'
00:04:32.199  INFO: Success
00:04:32.199  
00:04:32.199  real	0m16.567s
00:04:32.199  user	0m18.219s
00:04:32.199  sys	0m2.606s
00:04:32.199   03:55:00 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:32.199   03:55:00 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:32.199  ************************************
00:04:32.199  END TEST json_config
00:04:32.199  ************************************
00:04:32.199   03:55:00  -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:04:32.199   03:55:00  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:32.199   03:55:00  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:32.199   03:55:00  -- common/autotest_common.sh@10 -- # set +x
00:04:32.199  ************************************
00:04:32.199  START TEST json_config_extra_key
00:04:32.199  ************************************
00:04:32.199   03:55:00 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:04:32.199    03:55:00 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:32.199     03:55:00 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version
00:04:32.199     03:55:00 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:32.199    03:55:00 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:32.199    03:55:00 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:32.199    03:55:00 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:32.199    03:55:00 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:32.199    03:55:00 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:04:32.199    03:55:00 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:04:32.199    03:55:00 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:04:32.199    03:55:00 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:04:32.199    03:55:00 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:04:32.199    03:55:00 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:04:32.199    03:55:00 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:04:32.199    03:55:00 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:32.199    03:55:00 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:04:32.199    03:55:00 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:04:32.199    03:55:00 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:32.199    03:55:00 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:32.199     03:55:00 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:04:32.199     03:55:00 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:04:32.199     03:55:00 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:32.199     03:55:00 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:04:32.199    03:55:00 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:04:32.199     03:55:00 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:04:32.199     03:55:00 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:04:32.199     03:55:00 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:32.199     03:55:00 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:04:32.199    03:55:00 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:04:32.199    03:55:00 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:32.199    03:55:00 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:32.199    03:55:00 json_config_extra_key -- scripts/common.sh@368 -- # return 0
00:04:32.199    03:55:00 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:32.199    03:55:00 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:32.199  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:32.199  		--rc genhtml_branch_coverage=1
00:04:32.199  		--rc genhtml_function_coverage=1
00:04:32.199  		--rc genhtml_legend=1
00:04:32.199  		--rc geninfo_all_blocks=1
00:04:32.199  		--rc geninfo_unexecuted_blocks=1
00:04:32.199  		
00:04:32.199  		'
00:04:32.199    03:55:00 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:32.199  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:32.199  		--rc genhtml_branch_coverage=1
00:04:32.199  		--rc genhtml_function_coverage=1
00:04:32.199  		--rc genhtml_legend=1
00:04:32.199  		--rc geninfo_all_blocks=1
00:04:32.199  		--rc geninfo_unexecuted_blocks=1
00:04:32.199  		
00:04:32.199  		'
00:04:32.199    03:55:00 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:32.199  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:32.199  		--rc genhtml_branch_coverage=1
00:04:32.199  		--rc genhtml_function_coverage=1
00:04:32.199  		--rc genhtml_legend=1
00:04:32.199  		--rc geninfo_all_blocks=1
00:04:32.199  		--rc geninfo_unexecuted_blocks=1
00:04:32.199  		
00:04:32.199  		'
00:04:32.199    03:55:00 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:04:32.199  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:32.199  		--rc genhtml_branch_coverage=1
00:04:32.199  		--rc genhtml_function_coverage=1
00:04:32.199  		--rc genhtml_legend=1
00:04:32.199  		--rc geninfo_all_blocks=1
00:04:32.199  		--rc geninfo_unexecuted_blocks=1
00:04:32.199  		
00:04:32.199  		'
00:04:32.199   03:55:00 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:04:32.199     03:55:00 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:04:32.199    03:55:00 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:04:32.199    03:55:00 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:04:32.199    03:55:00 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:04:32.199    03:55:00 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:04:32.199    03:55:00 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:04:32.199    03:55:00 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:04:32.199    03:55:00 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:04:32.199    03:55:00 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:04:32.199    03:55:00 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:04:32.199     03:55:00 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:04:32.199    03:55:00 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:04:32.199    03:55:00 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:04:32.199    03:55:00 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:04:32.199    03:55:00 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:04:32.199    03:55:00 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:04:32.199    03:55:00 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:04:32.199    03:55:00 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:04:32.199     03:55:00 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:04:32.199     03:55:00 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:04:32.199     03:55:00 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:32.199     03:55:00 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:32.199      03:55:00 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:32.200      03:55:00 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:32.200      03:55:00 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:32.200      03:55:00 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:04:32.200      03:55:00 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:32.200    03:55:00 json_config_extra_key -- nvmf/common.sh@51 -- # : 0
00:04:32.200    03:55:00 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:04:32.200    03:55:00 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:04:32.200    03:55:00 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:04:32.200    03:55:00 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:04:32.200    03:55:00 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:04:32.200    03:55:00 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:04:32.200  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:04:32.200    03:55:00 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:04:32.200    03:55:00 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:04:32.200    03:55:00 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0
00:04:32.200   03:55:00 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:04:32.200   03:55:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:04:32.200   03:55:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:04:32.200   03:55:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:04:32.200   03:55:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:04:32.200   03:55:00 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:04:32.200   03:55:00 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:04:32.200   03:55:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')
00:04:32.200   03:55:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:04:32.200   03:55:00 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:04:32.200   03:55:00 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:04:32.200  INFO: launching applications...
00:04:32.200   03:55:00 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:04:32.200   03:55:00 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:04:32.200   03:55:00 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:04:32.200   03:55:00 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:04:32.200   03:55:00 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:04:32.200   03:55:00 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:04:32.200   03:55:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:32.200   03:55:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:32.200   03:55:00 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=107360
00:04:32.200   03:55:00 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:04:32.200   03:55:00 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:04:32.200  Waiting for target to run...
00:04:32.200   03:55:00 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 107360 /var/tmp/spdk_tgt.sock
00:04:32.200   03:55:00 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 107360 ']'
00:04:32.200   03:55:00 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:04:32.200   03:55:00 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:32.200   03:55:00 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:04:32.200  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:04:32.200   03:55:00 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:32.200   03:55:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:04:32.461  [2024-12-09 03:55:00.786875] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:04:32.461  [2024-12-09 03:55:00.786953] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107360 ]
00:04:33.029  [2024-12-09 03:55:01.298922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:33.029  [2024-12-09 03:55:01.350091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:33.287   03:55:01 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:33.287   03:55:01 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0
00:04:33.287   03:55:01 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:04:33.287  
00:04:33.287   03:55:01 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:04:33.287  INFO: shutting down applications...
00:04:33.287   03:55:01 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:04:33.287   03:55:01 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:04:33.287   03:55:01 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:04:33.287   03:55:01 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 107360 ]]
00:04:33.287   03:55:01 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 107360
00:04:33.287   03:55:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:04:33.287   03:55:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:04:33.287   03:55:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 107360
00:04:33.287   03:55:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:04:33.854   03:55:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:04:33.854   03:55:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:04:33.854   03:55:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 107360
00:04:33.854   03:55:02 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:04:33.854   03:55:02 json_config_extra_key -- json_config/common.sh@43 -- # break
00:04:33.854   03:55:02 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:04:33.854   03:55:02 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:04:33.854  SPDK target shutdown done
00:04:33.854   03:55:02 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:04:33.854  Success
00:04:33.854  
00:04:33.854  real	0m1.689s
00:04:33.854  user	0m1.521s
00:04:33.854  sys	0m0.637s
00:04:33.854   03:55:02 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:33.854   03:55:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:04:33.854  ************************************
00:04:33.854  END TEST json_config_extra_key
00:04:33.854  ************************************
00:04:33.854   03:55:02  -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:04:33.854   03:55:02  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:33.854   03:55:02  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:33.854   03:55:02  -- common/autotest_common.sh@10 -- # set +x
00:04:33.854  ************************************
00:04:33.854  START TEST alias_rpc
00:04:33.854  ************************************
00:04:33.854   03:55:02 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:04:33.854  * Looking for test storage...
00:04:33.854  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc
00:04:33.854    03:55:02 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:33.854     03:55:02 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:04:33.854     03:55:02 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:34.112    03:55:02 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:34.112    03:55:02 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:34.112    03:55:02 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:34.112    03:55:02 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:34.112    03:55:02 alias_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:34.112    03:55:02 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:34.112    03:55:02 alias_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:34.112    03:55:02 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:34.112    03:55:02 alias_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:34.112    03:55:02 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:34.112    03:55:02 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:34.112    03:55:02 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:34.112    03:55:02 alias_rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:34.112    03:55:02 alias_rpc -- scripts/common.sh@345 -- # : 1
00:04:34.112    03:55:02 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:34.112    03:55:02 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:34.112     03:55:02 alias_rpc -- scripts/common.sh@365 -- # decimal 1
00:04:34.112     03:55:02 alias_rpc -- scripts/common.sh@353 -- # local d=1
00:04:34.112     03:55:02 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:34.112     03:55:02 alias_rpc -- scripts/common.sh@355 -- # echo 1
00:04:34.112    03:55:02 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:34.112     03:55:02 alias_rpc -- scripts/common.sh@366 -- # decimal 2
00:04:34.112     03:55:02 alias_rpc -- scripts/common.sh@353 -- # local d=2
00:04:34.112     03:55:02 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:34.112     03:55:02 alias_rpc -- scripts/common.sh@355 -- # echo 2
00:04:34.112    03:55:02 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:34.112    03:55:02 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:34.113    03:55:02 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:34.113    03:55:02 alias_rpc -- scripts/common.sh@368 -- # return 0
00:04:34.113    03:55:02 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:34.113    03:55:02 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:34.113  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:34.113  		--rc genhtml_branch_coverage=1
00:04:34.113  		--rc genhtml_function_coverage=1
00:04:34.113  		--rc genhtml_legend=1
00:04:34.113  		--rc geninfo_all_blocks=1
00:04:34.113  		--rc geninfo_unexecuted_blocks=1
00:04:34.113  		
00:04:34.113  		'
00:04:34.113    03:55:02 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:34.113  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:34.113  		--rc genhtml_branch_coverage=1
00:04:34.113  		--rc genhtml_function_coverage=1
00:04:34.113  		--rc genhtml_legend=1
00:04:34.113  		--rc geninfo_all_blocks=1
00:04:34.113  		--rc geninfo_unexecuted_blocks=1
00:04:34.113  		
00:04:34.113  		'
00:04:34.113    03:55:02 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:34.113  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:34.113  		--rc genhtml_branch_coverage=1
00:04:34.113  		--rc genhtml_function_coverage=1
00:04:34.113  		--rc genhtml_legend=1
00:04:34.113  		--rc geninfo_all_blocks=1
00:04:34.113  		--rc geninfo_unexecuted_blocks=1
00:04:34.113  		
00:04:34.113  		'
00:04:34.113    03:55:02 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:04:34.113  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:34.113  		--rc genhtml_branch_coverage=1
00:04:34.113  		--rc genhtml_function_coverage=1
00:04:34.113  		--rc genhtml_legend=1
00:04:34.113  		--rc geninfo_all_blocks=1
00:04:34.113  		--rc geninfo_unexecuted_blocks=1
00:04:34.113  		
00:04:34.113  		'
00:04:34.113   03:55:02 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:04:34.113   03:55:02 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=107675
00:04:34.113   03:55:02 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:34.113   03:55:02 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 107675
00:04:34.113   03:55:02 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 107675 ']'
00:04:34.113   03:55:02 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:34.113   03:55:02 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:34.113   03:55:02 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:34.113  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:34.113   03:55:02 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:34.113   03:55:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:34.113  [2024-12-09 03:55:02.521865] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:04:34.113  [2024-12-09 03:55:02.521964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107675 ]
00:04:34.113  [2024-12-09 03:55:02.587094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:34.113  [2024-12-09 03:55:02.643527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:34.371   03:55:02 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:34.371   03:55:02 alias_rpc -- common/autotest_common.sh@868 -- # return 0
00:04:34.371   03:55:02 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i
00:04:34.629   03:55:03 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 107675
00:04:34.629   03:55:03 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 107675 ']'
00:04:34.629   03:55:03 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 107675
00:04:34.629    03:55:03 alias_rpc -- common/autotest_common.sh@959 -- # uname
00:04:34.629   03:55:03 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:34.629    03:55:03 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107675
00:04:34.886   03:55:03 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:34.886   03:55:03 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:34.886   03:55:03 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107675'
00:04:34.886  killing process with pid 107675
00:04:34.886   03:55:03 alias_rpc -- common/autotest_common.sh@973 -- # kill 107675
00:04:34.886   03:55:03 alias_rpc -- common/autotest_common.sh@978 -- # wait 107675
00:04:35.145  
00:04:35.145  real	0m1.321s
00:04:35.145  user	0m1.427s
00:04:35.145  sys	0m0.442s
00:04:35.145   03:55:03 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:35.145   03:55:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:35.145  ************************************
00:04:35.146  END TEST alias_rpc
00:04:35.146  ************************************
00:04:35.146   03:55:03  -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]]
00:04:35.146   03:55:03  -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:04:35.146   03:55:03  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:35.146   03:55:03  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:35.146   03:55:03  -- common/autotest_common.sh@10 -- # set +x
00:04:35.146  ************************************
00:04:35.146  START TEST spdkcli_tcp
00:04:35.146  ************************************
00:04:35.146   03:55:03 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:04:35.405  * Looking for test storage...
00:04:35.405  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:04:35.406   03:55:03 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:04:35.406    03:55:03 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:04:35.406    03:55:03 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:04:35.406   03:55:03 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:04:35.406   03:55:03 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998
00:04:35.406   03:55:03 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:04:35.406   03:55:03 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:04:35.406   03:55:03 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:35.406   03:55:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:04:35.406   03:55:03 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=107868
00:04:35.406   03:55:03 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:04:35.406   03:55:03 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 107868
00:04:35.406   03:55:03 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 107868 ']'
00:04:35.406   03:55:03 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:35.406   03:55:03 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:35.406   03:55:03 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:35.406  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:35.406   03:55:03 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:35.406   03:55:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:04:35.406  [2024-12-09 03:55:03.909110] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:04:35.406  [2024-12-09 03:55:03.909198] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107868 ]
00:04:35.406  [2024-12-09 03:55:03.975642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:35.664  [2024-12-09 03:55:04.036613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:35.664  [2024-12-09 03:55:04.036618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:35.921   03:55:04 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:35.922   03:55:04 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0
00:04:35.922   03:55:04 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=107994
00:04:35.922   03:55:04 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:04:35.922   03:55:04 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:04:36.180  [
00:04:36.180    "bdev_malloc_delete",
00:04:36.180    "bdev_malloc_create",
00:04:36.180    "bdev_null_resize",
00:04:36.180    "bdev_null_delete",
00:04:36.180    "bdev_null_create",
00:04:36.180    "bdev_nvme_cuse_unregister",
00:04:36.180    "bdev_nvme_cuse_register",
00:04:36.180    "bdev_opal_new_user",
00:04:36.180    "bdev_opal_set_lock_state",
00:04:36.180    "bdev_opal_delete",
00:04:36.180    "bdev_opal_get_info",
00:04:36.180    "bdev_opal_create",
00:04:36.180    "bdev_nvme_opal_revert",
00:04:36.180    "bdev_nvme_opal_init",
00:04:36.180    "bdev_nvme_send_cmd",
00:04:36.180    "bdev_nvme_set_keys",
00:04:36.180    "bdev_nvme_get_path_iostat",
00:04:36.180    "bdev_nvme_get_mdns_discovery_info",
00:04:36.180    "bdev_nvme_stop_mdns_discovery",
00:04:36.180    "bdev_nvme_start_mdns_discovery",
00:04:36.180    "bdev_nvme_set_multipath_policy",
00:04:36.180    "bdev_nvme_set_preferred_path",
00:04:36.180    "bdev_nvme_get_io_paths",
00:04:36.180    "bdev_nvme_remove_error_injection",
00:04:36.180    "bdev_nvme_add_error_injection",
00:04:36.180    "bdev_nvme_get_discovery_info",
00:04:36.180    "bdev_nvme_stop_discovery",
00:04:36.180    "bdev_nvme_start_discovery",
00:04:36.180    "bdev_nvme_get_controller_health_info",
00:04:36.180    "bdev_nvme_disable_controller",
00:04:36.180    "bdev_nvme_enable_controller",
00:04:36.180    "bdev_nvme_reset_controller",
00:04:36.180    "bdev_nvme_get_transport_statistics",
00:04:36.180    "bdev_nvme_apply_firmware",
00:04:36.180    "bdev_nvme_detach_controller",
00:04:36.180    "bdev_nvme_get_controllers",
00:04:36.180    "bdev_nvme_attach_controller",
00:04:36.180    "bdev_nvme_set_hotplug",
00:04:36.180    "bdev_nvme_set_options",
00:04:36.180    "bdev_passthru_delete",
00:04:36.180    "bdev_passthru_create",
00:04:36.180    "bdev_lvol_set_parent_bdev",
00:04:36.180    "bdev_lvol_set_parent",
00:04:36.180    "bdev_lvol_check_shallow_copy",
00:04:36.180    "bdev_lvol_start_shallow_copy",
00:04:36.180    "bdev_lvol_grow_lvstore",
00:04:36.180    "bdev_lvol_get_lvols",
00:04:36.180    "bdev_lvol_get_lvstores",
00:04:36.180    "bdev_lvol_delete",
00:04:36.180    "bdev_lvol_set_read_only",
00:04:36.180    "bdev_lvol_resize",
00:04:36.180    "bdev_lvol_decouple_parent",
00:04:36.180    "bdev_lvol_inflate",
00:04:36.180    "bdev_lvol_rename",
00:04:36.180    "bdev_lvol_clone_bdev",
00:04:36.180    "bdev_lvol_clone",
00:04:36.180    "bdev_lvol_snapshot",
00:04:36.180    "bdev_lvol_create",
00:04:36.180    "bdev_lvol_delete_lvstore",
00:04:36.180    "bdev_lvol_rename_lvstore",
00:04:36.180    "bdev_lvol_create_lvstore",
00:04:36.180    "bdev_raid_set_options",
00:04:36.180    "bdev_raid_remove_base_bdev",
00:04:36.180    "bdev_raid_add_base_bdev",
00:04:36.180    "bdev_raid_delete",
00:04:36.180    "bdev_raid_create",
00:04:36.180    "bdev_raid_get_bdevs",
00:04:36.180    "bdev_error_inject_error",
00:04:36.180    "bdev_error_delete",
00:04:36.180    "bdev_error_create",
00:04:36.180    "bdev_split_delete",
00:04:36.180    "bdev_split_create",
00:04:36.180    "bdev_delay_delete",
00:04:36.180    "bdev_delay_create",
00:04:36.180    "bdev_delay_update_latency",
00:04:36.180    "bdev_zone_block_delete",
00:04:36.180    "bdev_zone_block_create",
00:04:36.180    "blobfs_create",
00:04:36.180    "blobfs_detect",
00:04:36.180    "blobfs_set_cache_size",
00:04:36.180    "bdev_aio_delete",
00:04:36.180    "bdev_aio_rescan",
00:04:36.180    "bdev_aio_create",
00:04:36.180    "bdev_ftl_set_property",
00:04:36.180    "bdev_ftl_get_properties",
00:04:36.180    "bdev_ftl_get_stats",
00:04:36.180    "bdev_ftl_unmap",
00:04:36.180    "bdev_ftl_unload",
00:04:36.180    "bdev_ftl_delete",
00:04:36.180    "bdev_ftl_load",
00:04:36.180    "bdev_ftl_create",
00:04:36.180    "bdev_virtio_attach_controller",
00:04:36.180    "bdev_virtio_scsi_get_devices",
00:04:36.180    "bdev_virtio_detach_controller",
00:04:36.180    "bdev_virtio_blk_set_hotplug",
00:04:36.180    "bdev_iscsi_delete",
00:04:36.180    "bdev_iscsi_create",
00:04:36.180    "bdev_iscsi_set_options",
00:04:36.180    "accel_error_inject_error",
00:04:36.180    "ioat_scan_accel_module",
00:04:36.180    "dsa_scan_accel_module",
00:04:36.180    "iaa_scan_accel_module",
00:04:36.180    "vfu_virtio_create_fs_endpoint",
00:04:36.180    "vfu_virtio_create_scsi_endpoint",
00:04:36.180    "vfu_virtio_scsi_remove_target",
00:04:36.180    "vfu_virtio_scsi_add_target",
00:04:36.180    "vfu_virtio_create_blk_endpoint",
00:04:36.180    "vfu_virtio_delete_endpoint",
00:04:36.180    "keyring_file_remove_key",
00:04:36.180    "keyring_file_add_key",
00:04:36.180    "keyring_linux_set_options",
00:04:36.180    "fsdev_aio_delete",
00:04:36.180    "fsdev_aio_create",
00:04:36.180    "iscsi_get_histogram",
00:04:36.180    "iscsi_enable_histogram",
00:04:36.180    "iscsi_set_options",
00:04:36.180    "iscsi_get_auth_groups",
00:04:36.180    "iscsi_auth_group_remove_secret",
00:04:36.180    "iscsi_auth_group_add_secret",
00:04:36.180    "iscsi_delete_auth_group",
00:04:36.180    "iscsi_create_auth_group",
00:04:36.180    "iscsi_set_discovery_auth",
00:04:36.180    "iscsi_get_options",
00:04:36.180    "iscsi_target_node_request_logout",
00:04:36.180    "iscsi_target_node_set_redirect",
00:04:36.180    "iscsi_target_node_set_auth",
00:04:36.180    "iscsi_target_node_add_lun",
00:04:36.180    "iscsi_get_stats",
00:04:36.180    "iscsi_get_connections",
00:04:36.180    "iscsi_portal_group_set_auth",
00:04:36.180    "iscsi_start_portal_group",
00:04:36.180    "iscsi_delete_portal_group",
00:04:36.180    "iscsi_create_portal_group",
00:04:36.180    "iscsi_get_portal_groups",
00:04:36.180    "iscsi_delete_target_node",
00:04:36.180    "iscsi_target_node_remove_pg_ig_maps",
00:04:36.180    "iscsi_target_node_add_pg_ig_maps",
00:04:36.180    "iscsi_create_target_node",
00:04:36.180    "iscsi_get_target_nodes",
00:04:36.180    "iscsi_delete_initiator_group",
00:04:36.180    "iscsi_initiator_group_remove_initiators",
00:04:36.180    "iscsi_initiator_group_add_initiators",
00:04:36.180    "iscsi_create_initiator_group",
00:04:36.180    "iscsi_get_initiator_groups",
00:04:36.180    "nvmf_set_crdt",
00:04:36.180    "nvmf_set_config",
00:04:36.180    "nvmf_set_max_subsystems",
00:04:36.180    "nvmf_stop_mdns_prr",
00:04:36.180    "nvmf_publish_mdns_prr",
00:04:36.180    "nvmf_subsystem_get_listeners",
00:04:36.180    "nvmf_subsystem_get_qpairs",
00:04:36.180    "nvmf_subsystem_get_controllers",
00:04:36.180    "nvmf_get_stats",
00:04:36.180    "nvmf_get_transports",
00:04:36.180    "nvmf_create_transport",
00:04:36.180    "nvmf_get_targets",
00:04:36.180    "nvmf_delete_target",
00:04:36.180    "nvmf_create_target",
00:04:36.180    "nvmf_subsystem_allow_any_host",
00:04:36.180    "nvmf_subsystem_set_keys",
00:04:36.180    "nvmf_subsystem_remove_host",
00:04:36.180    "nvmf_subsystem_add_host",
00:04:36.180    "nvmf_ns_remove_host",
00:04:36.180    "nvmf_ns_add_host",
00:04:36.180    "nvmf_subsystem_remove_ns",
00:04:36.180    "nvmf_subsystem_set_ns_ana_group",
00:04:36.180    "nvmf_subsystem_add_ns",
00:04:36.180    "nvmf_subsystem_listener_set_ana_state",
00:04:36.180    "nvmf_discovery_get_referrals",
00:04:36.180    "nvmf_discovery_remove_referral",
00:04:36.180    "nvmf_discovery_add_referral",
00:04:36.180    "nvmf_subsystem_remove_listener",
00:04:36.180    "nvmf_subsystem_add_listener",
00:04:36.180    "nvmf_delete_subsystem",
00:04:36.180    "nvmf_create_subsystem",
00:04:36.180    "nvmf_get_subsystems",
00:04:36.180    "env_dpdk_get_mem_stats",
00:04:36.180    "nbd_get_disks",
00:04:36.180    "nbd_stop_disk",
00:04:36.180    "nbd_start_disk",
00:04:36.180    "ublk_recover_disk",
00:04:36.180    "ublk_get_disks",
00:04:36.180    "ublk_stop_disk",
00:04:36.180    "ublk_start_disk",
00:04:36.180    "ublk_destroy_target",
00:04:36.180    "ublk_create_target",
00:04:36.180    "virtio_blk_create_transport",
00:04:36.180    "virtio_blk_get_transports",
00:04:36.180    "vhost_controller_set_coalescing",
00:04:36.180    "vhost_get_controllers",
00:04:36.180    "vhost_delete_controller",
00:04:36.180    "vhost_create_blk_controller",
00:04:36.180    "vhost_scsi_controller_remove_target",
00:04:36.180    "vhost_scsi_controller_add_target",
00:04:36.180    "vhost_start_scsi_controller",
00:04:36.180    "vhost_create_scsi_controller",
00:04:36.180    "thread_set_cpumask",
00:04:36.180    "scheduler_set_options",
00:04:36.180    "framework_get_governor",
00:04:36.180    "framework_get_scheduler",
00:04:36.180    "framework_set_scheduler",
00:04:36.180    "framework_get_reactors",
00:04:36.180    "thread_get_io_channels",
00:04:36.180    "thread_get_pollers",
00:04:36.180    "thread_get_stats",
00:04:36.180    "framework_monitor_context_switch",
00:04:36.180    "spdk_kill_instance",
00:04:36.180    "log_enable_timestamps",
00:04:36.180    "log_get_flags",
00:04:36.180    "log_clear_flag",
00:04:36.180    "log_set_flag",
00:04:36.180    "log_get_level",
00:04:36.180    "log_set_level",
00:04:36.180    "log_get_print_level",
00:04:36.180    "log_set_print_level",
00:04:36.180    "framework_enable_cpumask_locks",
00:04:36.180    "framework_disable_cpumask_locks",
00:04:36.180    "framework_wait_init",
00:04:36.180    "framework_start_init",
00:04:36.180    "scsi_get_devices",
00:04:36.180    "bdev_get_histogram",
00:04:36.180    "bdev_enable_histogram",
00:04:36.180    "bdev_set_qos_limit",
00:04:36.180    "bdev_set_qd_sampling_period",
00:04:36.180    "bdev_get_bdevs",
00:04:36.180    "bdev_reset_iostat",
00:04:36.180    "bdev_get_iostat",
00:04:36.180    "bdev_examine",
00:04:36.180    "bdev_wait_for_examine",
00:04:36.180    "bdev_set_options",
00:04:36.180    "accel_get_stats",
00:04:36.180    "accel_set_options",
00:04:36.180    "accel_set_driver",
00:04:36.180    "accel_crypto_key_destroy",
00:04:36.180    "accel_crypto_keys_get",
00:04:36.180    "accel_crypto_key_create",
00:04:36.180    "accel_assign_opc",
00:04:36.180    "accel_get_module_info",
00:04:36.180    "accel_get_opc_assignments",
00:04:36.180    "vmd_rescan",
00:04:36.180    "vmd_remove_device",
00:04:36.180    "vmd_enable",
00:04:36.180    "sock_get_default_impl",
00:04:36.180    "sock_set_default_impl",
00:04:36.180    "sock_impl_set_options",
00:04:36.180    "sock_impl_get_options",
00:04:36.180    "iobuf_get_stats",
00:04:36.180    "iobuf_set_options",
00:04:36.180    "keyring_get_keys",
00:04:36.180    "vfu_tgt_set_base_path",
00:04:36.180    "framework_get_pci_devices",
00:04:36.180    "framework_get_config",
00:04:36.180    "framework_get_subsystems",
00:04:36.180    "fsdev_set_opts",
00:04:36.180    "fsdev_get_opts",
00:04:36.180    "trace_get_info",
00:04:36.180    "trace_get_tpoint_group_mask",
00:04:36.180    "trace_disable_tpoint_group",
00:04:36.180    "trace_enable_tpoint_group",
00:04:36.180    "trace_clear_tpoint_mask",
00:04:36.180    "trace_set_tpoint_mask",
00:04:36.180    "notify_get_notifications",
00:04:36.180    "notify_get_types",
00:04:36.180    "spdk_get_version",
00:04:36.180    "rpc_get_methods"
00:04:36.180  ]
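The rpc_get_methods listing above was fetched over TCP rather than the usual UNIX socket: spdkcli/tcp.sh@30 bridges 127.0.0.1:9998 to /var/tmp/spdk.sock with socat, and rpc.py retries against that endpoint (`-r 100 -t 2 -s 127.0.0.1 -p 9998`). The reply is a flat JSON array of method-name strings, so plain shell can slice it; a minimal sketch, where the array literal is a shortened stand-in for the full listing dumped above:

```shell
# Hedged sketch: filter one subsystem's RPCs out of a rpc_get_methods reply.
# The array below is a truncated stand-in for the real rpc.py output.
methods='[
  "nvmf_create_transport",
  "nvmf_get_subsystems",
  "bdev_get_bdevs",
  "rpc_get_methods"
]'
# grep -c counts the lines whose method name starts with "nvmf_".
nvmf_count=$(printf '%s\n' "$methods" | grep -c '"nvmf_')
echo "$nvmf_count"
```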
00:04:36.180   03:55:04 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:04:36.180   03:55:04 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:36.180   03:55:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:04:36.180   03:55:04 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:04:36.180   03:55:04 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 107868
00:04:36.180   03:55:04 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 107868 ']'
00:04:36.180   03:55:04 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 107868
00:04:36.180    03:55:04 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname
00:04:36.180   03:55:04 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:36.180    03:55:04 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107868
00:04:36.180   03:55:04 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:36.180   03:55:04 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:36.180   03:55:04 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107868'
00:04:36.180  killing process with pid 107868
00:04:36.180   03:55:04 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 107868
00:04:36.180   03:55:04 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 107868
00:04:36.745  
00:04:36.745  real	0m1.349s
00:04:36.745  user	0m2.399s
00:04:36.745  sys	0m0.476s
00:04:36.745   03:55:05 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:36.745   03:55:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:04:36.745  ************************************
00:04:36.745  END TEST spdkcli_tcp
00:04:36.745  ************************************
00:04:36.745   03:55:05  -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:04:36.745   03:55:05  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:36.745   03:55:05  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:36.745   03:55:05  -- common/autotest_common.sh@10 -- # set +x
00:04:36.745  ************************************
00:04:36.745  START TEST dpdk_mem_utility
00:04:36.745  ************************************
00:04:36.745   03:55:05 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:04:36.745  * Looking for test storage...
00:04:36.745  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility
00:04:36.746   03:55:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:04:36.746   03:55:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=108105
00:04:36.746   03:55:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:36.746   03:55:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 108105
00:04:36.746   03:55:05 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 108105 ']'
00:04:36.746   03:55:05 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:36.746   03:55:05 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:36.746   03:55:05 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:36.746  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:36.746   03:55:05 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:36.746   03:55:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:36.746  [2024-12-09 03:55:05.293951] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:04:36.746  [2024-12-09 03:55:05.294062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108105 ]
00:04:37.004  [2024-12-09 03:55:05.361357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:37.004  [2024-12-09 03:55:05.418607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:37.262   03:55:05 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:37.262   03:55:05 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0
00:04:37.262   03:55:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:04:37.262   03:55:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:04:37.262   03:55:05 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:37.262   03:55:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:37.262  {
00:04:37.262  "filename": "/tmp/spdk_mem_dump.txt"
00:04:37.262  }
00:04:37.262   03:55:05 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:37.262   03:55:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:04:37.262  DPDK memory size 818.000000 MiB in 1 heap(s)
00:04:37.262  1 heaps totaling size 818.000000 MiB
00:04:37.262    size:  818.000000 MiB heap id: 0
00:04:37.262  end heaps----------
00:04:37.262  9 mempools totaling size 603.782043 MiB
00:04:37.262    size:  212.674988 MiB name: PDU_immediate_data_Pool
00:04:37.262    size:  158.602051 MiB name: PDU_data_out_Pool
00:04:37.262    size:  100.555481 MiB name: bdev_io_108105
00:04:37.262    size:   50.003479 MiB name: msgpool_108105
00:04:37.262    size:   36.509338 MiB name: fsdev_io_108105
00:04:37.262    size:   21.763794 MiB name: PDU_Pool
00:04:37.262    size:   19.513306 MiB name: SCSI_TASK_Pool
00:04:37.262    size:    4.133484 MiB name: evtpool_108105
00:04:37.262    size:    0.026123 MiB name: Session_Pool
00:04:37.262  end mempools-------
00:04:37.262  6 memzones totaling size 4.142822 MiB
00:04:37.262    size:    1.000366 MiB name: RG_ring_0_108105
00:04:37.262    size:    1.000366 MiB name: RG_ring_1_108105
00:04:37.262    size:    1.000366 MiB name: RG_ring_4_108105
00:04:37.262    size:    1.000366 MiB name: RG_ring_5_108105
00:04:37.262    size:    0.125366 MiB name: RG_ring_2_108105
00:04:37.262    size:    0.015991 MiB name: RG_ring_3_108105
00:04:37.262  end memzones-------
00:04:37.262   03:55:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:04:37.262  heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15
00:04:37.262    list of free elements. size: 10.852478 MiB
00:04:37.262      element at address: 0x200019200000 with size:    0.999878 MiB
00:04:37.262      element at address: 0x200019400000 with size:    0.999878 MiB
00:04:37.262      element at address: 0x200000400000 with size:    0.998535 MiB
00:04:37.262      element at address: 0x200032000000 with size:    0.994446 MiB
00:04:37.262      element at address: 0x200006400000 with size:    0.959839 MiB
00:04:37.262      element at address: 0x200012c00000 with size:    0.944275 MiB
00:04:37.262      element at address: 0x200019600000 with size:    0.936584 MiB
00:04:37.262      element at address: 0x200000200000 with size:    0.717346 MiB
00:04:37.262      element at address: 0x20001ae00000 with size:    0.582886 MiB
00:04:37.262      element at address: 0x200000c00000 with size:    0.495422 MiB
00:04:37.262      element at address: 0x20000a600000 with size:    0.490723 MiB
00:04:37.262      element at address: 0x200019800000 with size:    0.485657 MiB
00:04:37.262      element at address: 0x200003e00000 with size:    0.481934 MiB
00:04:37.262      element at address: 0x200028200000 with size:    0.410034 MiB
00:04:37.262      element at address: 0x200000800000 with size:    0.355042 MiB
00:04:37.262    list of standard malloc elements. size: 199.218628 MiB
00:04:37.262      element at address: 0x20000a7fff80 with size:  132.000122 MiB
00:04:37.262      element at address: 0x2000065fff80 with size:   64.000122 MiB
00:04:37.262      element at address: 0x2000192fff80 with size:    1.000122 MiB
00:04:37.262      element at address: 0x2000194fff80 with size:    1.000122 MiB
00:04:37.262      element at address: 0x2000196fff80 with size:    1.000122 MiB
00:04:37.262      element at address: 0x2000003d9f00 with size:    0.140747 MiB
00:04:37.262      element at address: 0x2000196eff00 with size:    0.062622 MiB
00:04:37.262      element at address: 0x2000003fdf80 with size:    0.007935 MiB
00:04:37.262      element at address: 0x2000196efdc0 with size:    0.000305 MiB
00:04:37.262      element at address: 0x2000002d7c40 with size:    0.000183 MiB
00:04:37.262      element at address: 0x2000003d9e40 with size:    0.000183 MiB
00:04:37.262      element at address: 0x2000004ffa00 with size:    0.000183 MiB
00:04:37.262      element at address: 0x2000004ffac0 with size:    0.000183 MiB
00:04:37.262      element at address: 0x2000004ffb80 with size:    0.000183 MiB
00:04:37.262      element at address: 0x2000004ffd80 with size:    0.000183 MiB
00:04:37.262      element at address: 0x2000004ffe40 with size:    0.000183 MiB
00:04:37.262      element at address: 0x20000085ae40 with size:    0.000183 MiB
00:04:37.262      element at address: 0x20000085b040 with size:    0.000183 MiB
00:04:37.262      element at address: 0x20000085f300 with size:    0.000183 MiB
00:04:37.262      element at address: 0x20000087f5c0 with size:    0.000183 MiB
00:04:37.262      element at address: 0x20000087f680 with size:    0.000183 MiB
00:04:37.262      element at address: 0x2000008ff940 with size:    0.000183 MiB
00:04:37.262      element at address: 0x2000008ffb40 with size:    0.000183 MiB
00:04:37.262      element at address: 0x200000c7ed40 with size:    0.000183 MiB
00:04:37.262      element at address: 0x200000cff000 with size:    0.000183 MiB
00:04:37.262      element at address: 0x200000cff0c0 with size:    0.000183 MiB
00:04:37.262      element at address: 0x200003e7b600 with size:    0.000183 MiB
00:04:37.262      element at address: 0x200003e7b6c0 with size:    0.000183 MiB
00:04:37.262      element at address: 0x200003efb980 with size:    0.000183 MiB
00:04:37.262      element at address: 0x2000064fdd80 with size:    0.000183 MiB
00:04:37.262      element at address: 0x20000a67da00 with size:    0.000183 MiB
00:04:37.262      element at address: 0x20000a67dac0 with size:    0.000183 MiB
00:04:37.262      element at address: 0x20000a6fdd80 with size:    0.000183 MiB
00:04:37.262      element at address: 0x200012cf1bc0 with size:    0.000183 MiB
00:04:37.262      element at address: 0x2000196efc40 with size:    0.000183 MiB
00:04:37.262      element at address: 0x2000196efd00 with size:    0.000183 MiB
00:04:37.262      element at address: 0x2000198bc740 with size:    0.000183 MiB
00:04:37.262      element at address: 0x20001ae95380 with size:    0.000183 MiB
00:04:37.262      element at address: 0x20001ae95440 with size:    0.000183 MiB
00:04:37.262      element at address: 0x200028268f80 with size:    0.000183 MiB
00:04:37.262      element at address: 0x200028269040 with size:    0.000183 MiB
00:04:37.262      element at address: 0x20002826fc40 with size:    0.000183 MiB
00:04:37.262      element at address: 0x20002826fe40 with size:    0.000183 MiB
00:04:37.262      element at address: 0x20002826ff00 with size:    0.000183 MiB
00:04:37.262    list of memzone associated elements. size: 607.928894 MiB
00:04:37.262      element at address: 0x20001ae95500 with size:  211.416748 MiB
00:04:37.262        associated memzone info: size:  211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:04:37.262      element at address: 0x20002826ffc0 with size:  157.562561 MiB
00:04:37.262        associated memzone info: size:  157.562439 MiB name: MP_PDU_data_out_Pool_0
00:04:37.262      element at address: 0x200012df1e80 with size:  100.055054 MiB
00:04:37.262        associated memzone info: size:  100.054932 MiB name: MP_bdev_io_108105_0
00:04:37.262      element at address: 0x200000dff380 with size:   48.003052 MiB
00:04:37.262        associated memzone info: size:   48.002930 MiB name: MP_msgpool_108105_0
00:04:37.262      element at address: 0x200003ffdb80 with size:   36.008911 MiB
00:04:37.262        associated memzone info: size:   36.008789 MiB name: MP_fsdev_io_108105_0
00:04:37.262      element at address: 0x2000199be940 with size:   20.255554 MiB
00:04:37.262        associated memzone info: size:   20.255432 MiB name: MP_PDU_Pool_0
00:04:37.262      element at address: 0x2000321feb40 with size:   18.005066 MiB
00:04:37.262        associated memzone info: size:   18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:04:37.262      element at address: 0x2000004fff00 with size:    3.000244 MiB
00:04:37.262        associated memzone info: size:    3.000122 MiB name: MP_evtpool_108105_0
00:04:37.262      element at address: 0x2000009ffe00 with size:    2.000488 MiB
00:04:37.262        associated memzone info: size:    2.000366 MiB name: RG_MP_msgpool_108105
00:04:37.262      element at address: 0x2000002d7d00 with size:    1.008118 MiB
00:04:37.262        associated memzone info: size:    1.007996 MiB name: MP_evtpool_108105
00:04:37.262      element at address: 0x20000a6fde40 with size:    1.008118 MiB
00:04:37.262        associated memzone info: size:    1.007996 MiB name: MP_PDU_Pool
00:04:37.262      element at address: 0x2000198bc800 with size:    1.008118 MiB
00:04:37.262        associated memzone info: size:    1.007996 MiB name: MP_PDU_immediate_data_Pool
00:04:37.262      element at address: 0x2000064fde40 with size:    1.008118 MiB
00:04:37.262        associated memzone info: size:    1.007996 MiB name: MP_PDU_data_out_Pool
00:04:37.263      element at address: 0x200003efba40 with size:    1.008118 MiB
00:04:37.263        associated memzone info: size:    1.007996 MiB name: MP_SCSI_TASK_Pool
00:04:37.263      element at address: 0x200000cff180 with size:    1.000488 MiB
00:04:37.263        associated memzone info: size:    1.000366 MiB name: RG_ring_0_108105
00:04:37.263      element at address: 0x2000008ffc00 with size:    1.000488 MiB
00:04:37.263        associated memzone info: size:    1.000366 MiB name: RG_ring_1_108105
00:04:37.263      element at address: 0x200012cf1c80 with size:    1.000488 MiB
00:04:37.263        associated memzone info: size:    1.000366 MiB name: RG_ring_4_108105
00:04:37.263      element at address: 0x2000320fe940 with size:    1.000488 MiB
00:04:37.263        associated memzone info: size:    1.000366 MiB name: RG_ring_5_108105
00:04:37.263      element at address: 0x20000087f740 with size:    0.500488 MiB
00:04:37.263        associated memzone info: size:    0.500366 MiB name: RG_MP_fsdev_io_108105
00:04:37.263      element at address: 0x200000c7ee00 with size:    0.500488 MiB
00:04:37.263        associated memzone info: size:    0.500366 MiB name: RG_MP_bdev_io_108105
00:04:37.263      element at address: 0x20000a67db80 with size:    0.500488 MiB
00:04:37.263        associated memzone info: size:    0.500366 MiB name: RG_MP_PDU_Pool
00:04:37.263      element at address: 0x200003e7b780 with size:    0.500488 MiB
00:04:37.263        associated memzone info: size:    0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:04:37.263      element at address: 0x20001987c540 with size:    0.250488 MiB
00:04:37.263        associated memzone info: size:    0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:04:37.263      element at address: 0x2000002b7a40 with size:    0.125488 MiB
00:04:37.263        associated memzone info: size:    0.125366 MiB name: RG_MP_evtpool_108105
00:04:37.263      element at address: 0x20000085f3c0 with size:    0.125488 MiB
00:04:37.263        associated memzone info: size:    0.125366 MiB name: RG_ring_2_108105
00:04:37.263      element at address: 0x2000064f5b80 with size:    0.031738 MiB
00:04:37.263        associated memzone info: size:    0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:04:37.263      element at address: 0x200028269100 with size:    0.023743 MiB
00:04:37.263        associated memzone info: size:    0.023621 MiB name: MP_Session_Pool_0
00:04:37.263      element at address: 0x20000085b100 with size:    0.016113 MiB
00:04:37.263        associated memzone info: size:    0.015991 MiB name: RG_ring_3_108105
00:04:37.263      element at address: 0x20002826f240 with size:    0.002441 MiB
00:04:37.263        associated memzone info: size:    0.002319 MiB name: RG_MP_Session_Pool
00:04:37.263      element at address: 0x2000004ffc40 with size:    0.000305 MiB
00:04:37.263        associated memzone info: size:    0.000183 MiB name: MP_msgpool_108105
00:04:37.263      element at address: 0x2000008ffa00 with size:    0.000305 MiB
00:04:37.263        associated memzone info: size:    0.000183 MiB name: MP_fsdev_io_108105
00:04:37.263      element at address: 0x20000085af00 with size:    0.000305 MiB
00:04:37.263        associated memzone info: size:    0.000183 MiB name: MP_bdev_io_108105
00:04:37.263      element at address: 0x20002826fd00 with size:    0.000305 MiB
00:04:37.263        associated memzone info: size:    0.000183 MiB name: MP_Session_Pool
00:04:37.263   03:55:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:04:37.263   03:55:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 108105
00:04:37.263   03:55:05 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 108105 ']'
00:04:37.263   03:55:05 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 108105
00:04:37.263    03:55:05 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:04:37.263   03:55:05 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:37.263    03:55:05 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108105
00:04:37.263   03:55:05 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:37.263   03:55:05 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:37.263   03:55:05 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108105'
00:04:37.263  killing process with pid 108105
00:04:37.263   03:55:05 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 108105
00:04:37.263   03:55:05 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 108105
00:04:37.827  
00:04:37.827  real	0m1.152s
00:04:37.827  user	0m1.125s
00:04:37.827  sys	0m0.425s
00:04:37.827   03:55:06 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:37.827   03:55:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:37.827  ************************************
00:04:37.827  END TEST dpdk_mem_utility
00:04:37.827  ************************************
00:04:37.828   03:55:06  -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:04:37.828   03:55:06  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:37.828   03:55:06  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:37.828   03:55:06  -- common/autotest_common.sh@10 -- # set +x
00:04:37.828  ************************************
00:04:37.828  START TEST event
00:04:37.828  ************************************
00:04:37.828   03:55:06 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:04:37.828  * Looking for test storage...
00:04:37.828  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:04:37.828    03:55:06 event -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:37.828     03:55:06 event -- common/autotest_common.sh@1711 -- # lcov --version
00:04:37.828     03:55:06 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:38.086    03:55:06 event -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:38.086    03:55:06 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:38.086    03:55:06 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:38.086    03:55:06 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:38.086    03:55:06 event -- scripts/common.sh@336 -- # IFS=.-:
00:04:38.086    03:55:06 event -- scripts/common.sh@336 -- # read -ra ver1
00:04:38.086    03:55:06 event -- scripts/common.sh@337 -- # IFS=.-:
00:04:38.086    03:55:06 event -- scripts/common.sh@337 -- # read -ra ver2
00:04:38.086    03:55:06 event -- scripts/common.sh@338 -- # local 'op=<'
00:04:38.086    03:55:06 event -- scripts/common.sh@340 -- # ver1_l=2
00:04:38.086    03:55:06 event -- scripts/common.sh@341 -- # ver2_l=1
00:04:38.086    03:55:06 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:38.086    03:55:06 event -- scripts/common.sh@344 -- # case "$op" in
00:04:38.086    03:55:06 event -- scripts/common.sh@345 -- # : 1
00:04:38.086    03:55:06 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:38.086    03:55:06 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:38.086     03:55:06 event -- scripts/common.sh@365 -- # decimal 1
00:04:38.086     03:55:06 event -- scripts/common.sh@353 -- # local d=1
00:04:38.086     03:55:06 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:38.086     03:55:06 event -- scripts/common.sh@355 -- # echo 1
00:04:38.086    03:55:06 event -- scripts/common.sh@365 -- # ver1[v]=1
00:04:38.086     03:55:06 event -- scripts/common.sh@366 -- # decimal 2
00:04:38.086     03:55:06 event -- scripts/common.sh@353 -- # local d=2
00:04:38.086     03:55:06 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:38.086     03:55:06 event -- scripts/common.sh@355 -- # echo 2
00:04:38.086    03:55:06 event -- scripts/common.sh@366 -- # ver2[v]=2
00:04:38.086    03:55:06 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:38.086    03:55:06 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:38.086    03:55:06 event -- scripts/common.sh@368 -- # return 0
00:04:38.086    03:55:06 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:38.086    03:55:06 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:38.086  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:38.086  		--rc genhtml_branch_coverage=1
00:04:38.086  		--rc genhtml_function_coverage=1
00:04:38.086  		--rc genhtml_legend=1
00:04:38.086  		--rc geninfo_all_blocks=1
00:04:38.086  		--rc geninfo_unexecuted_blocks=1
00:04:38.086  		
00:04:38.086  		'
00:04:38.086    03:55:06 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:38.086  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:38.086  		--rc genhtml_branch_coverage=1
00:04:38.086  		--rc genhtml_function_coverage=1
00:04:38.086  		--rc genhtml_legend=1
00:04:38.086  		--rc geninfo_all_blocks=1
00:04:38.086  		--rc geninfo_unexecuted_blocks=1
00:04:38.086  		
00:04:38.086  		'
00:04:38.086    03:55:06 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:38.086  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:38.086  		--rc genhtml_branch_coverage=1
00:04:38.086  		--rc genhtml_function_coverage=1
00:04:38.086  		--rc genhtml_legend=1
00:04:38.086  		--rc geninfo_all_blocks=1
00:04:38.086  		--rc geninfo_unexecuted_blocks=1
00:04:38.086  		
00:04:38.086  		'
00:04:38.086    03:55:06 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:04:38.086  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:38.086  		--rc genhtml_branch_coverage=1
00:04:38.086  		--rc genhtml_function_coverage=1
00:04:38.086  		--rc genhtml_legend=1
00:04:38.086  		--rc geninfo_all_blocks=1
00:04:38.086  		--rc geninfo_unexecuted_blocks=1
00:04:38.086  		
00:04:38.086  		'
00:04:38.086   03:55:06 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:04:38.086    03:55:06 event -- bdev/nbd_common.sh@6 -- # set -e
00:04:38.087   03:55:06 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:38.087   03:55:06 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:04:38.087   03:55:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:38.087   03:55:06 event -- common/autotest_common.sh@10 -- # set +x
00:04:38.087  ************************************
00:04:38.087  START TEST event_perf
00:04:38.087  ************************************
00:04:38.087   03:55:06 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:38.087  Running I/O for 1 seconds...[2024-12-09 03:55:06.471239] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:04:38.087  [2024-12-09 03:55:06.471321] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108400 ]
00:04:38.087  [2024-12-09 03:55:06.537156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:38.087  [2024-12-09 03:55:06.601035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:38.087  [2024-12-09 03:55:06.601098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:04:38.087  [2024-12-09 03:55:06.601168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:04:38.087  [2024-12-09 03:55:06.601171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:39.459  Running I/O for 1 seconds...
00:04:39.459  lcore  0:   228940
00:04:39.459  lcore  1:   228940
00:04:39.459  lcore  2:   228939
00:04:39.459  lcore  3:   228940
00:04:39.459  done.
00:04:39.459  
00:04:39.459  real	0m1.208s
00:04:39.459  user	0m4.133s
00:04:39.459  sys	0m0.070s
00:04:39.459   03:55:07 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:39.459   03:55:07 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:04:39.459  ************************************
00:04:39.459  END TEST event_perf
00:04:39.459  ************************************
00:04:39.459   03:55:07 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:04:39.459   03:55:07 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:04:39.459   03:55:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:39.459   03:55:07 event -- common/autotest_common.sh@10 -- # set +x
00:04:39.459  ************************************
00:04:39.459  START TEST event_reactor
00:04:39.459  ************************************
00:04:39.459   03:55:07 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:04:39.459  [2024-12-09 03:55:07.732549] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:04:39.459  [2024-12-09 03:55:07.732623] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108557 ]
00:04:39.459  [2024-12-09 03:55:07.799247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:39.459  [2024-12-09 03:55:07.853922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:40.393  test_start
00:04:40.393  oneshot
00:04:40.393  tick 100
00:04:40.393  tick 100
00:04:40.393  tick 250
00:04:40.393  tick 100
00:04:40.393  tick 100
00:04:40.393  tick 100
00:04:40.393  tick 250
00:04:40.393  tick 500
00:04:40.393  tick 100
00:04:40.393  tick 100
00:04:40.393  tick 250
00:04:40.393  tick 100
00:04:40.393  tick 100
00:04:40.393  test_end
00:04:40.393  
00:04:40.393  real	0m1.199s
00:04:40.393  user	0m1.137s
00:04:40.393  sys	0m0.058s
00:04:40.393   03:55:08 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:40.393   03:55:08 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:04:40.393  ************************************
00:04:40.393  END TEST event_reactor
00:04:40.393  ************************************
00:04:40.393   03:55:08 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:40.393   03:55:08 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:04:40.393   03:55:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:40.393   03:55:08 event -- common/autotest_common.sh@10 -- # set +x
00:04:40.652  ************************************
00:04:40.652  START TEST event_reactor_perf
00:04:40.652  ************************************
00:04:40.652   03:55:08 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:40.652  [2024-12-09 03:55:08.984746] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:04:40.652  [2024-12-09 03:55:08.984813] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108709 ]
00:04:40.652  [2024-12-09 03:55:09.050391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:40.652  [2024-12-09 03:55:09.103752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:42.026  test_start
00:04:42.026  test_end
00:04:42.026  Performance:   444393 events per second
00:04:42.026  
00:04:42.026  real	0m1.199s
00:04:42.026  user	0m1.121s
00:04:42.026  sys	0m0.072s
00:04:42.026   03:55:10 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:42.026   03:55:10 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:04:42.026  ************************************
00:04:42.026  END TEST event_reactor_perf
00:04:42.026  ************************************
00:04:42.026    03:55:10 event -- event/event.sh@49 -- # uname -s
00:04:42.026   03:55:10 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:04:42.026   03:55:10 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:04:42.026   03:55:10 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:42.026   03:55:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:42.026   03:55:10 event -- common/autotest_common.sh@10 -- # set +x
00:04:42.026  ************************************
00:04:42.026  START TEST event_scheduler
00:04:42.026  ************************************
00:04:42.026   03:55:10 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:04:42.026  * Looking for test storage...
00:04:42.026  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:04:42.026    03:55:10 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:42.026     03:55:10 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version
00:04:42.026     03:55:10 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:42.026    03:55:10 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:42.027    03:55:10 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:42.027    03:55:10 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:42.027    03:55:10 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:42.027    03:55:10 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:04:42.027    03:55:10 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:04:42.027    03:55:10 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:04:42.027    03:55:10 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:04:42.027    03:55:10 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:04:42.027    03:55:10 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:04:42.027    03:55:10 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:04:42.027    03:55:10 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:42.027    03:55:10 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:04:42.027    03:55:10 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:04:42.027    03:55:10 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:42.027    03:55:10 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:42.027     03:55:10 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:04:42.027     03:55:10 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:04:42.027     03:55:10 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:42.027     03:55:10 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:04:42.027    03:55:10 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:04:42.027     03:55:10 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:04:42.027     03:55:10 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:04:42.027     03:55:10 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:42.027     03:55:10 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:04:42.027    03:55:10 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:04:42.027    03:55:10 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:42.027    03:55:10 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:42.027    03:55:10 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:04:42.027    03:55:10 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:42.027    03:55:10 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:42.027  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:42.027  		--rc genhtml_branch_coverage=1
00:04:42.027  		--rc genhtml_function_coverage=1
00:04:42.027  		--rc genhtml_legend=1
00:04:42.027  		--rc geninfo_all_blocks=1
00:04:42.027  		--rc geninfo_unexecuted_blocks=1
00:04:42.027  		
00:04:42.027  		'
00:04:42.027    03:55:10 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:42.027  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:42.027  		--rc genhtml_branch_coverage=1
00:04:42.027  		--rc genhtml_function_coverage=1
00:04:42.027  		--rc genhtml_legend=1
00:04:42.027  		--rc geninfo_all_blocks=1
00:04:42.027  		--rc geninfo_unexecuted_blocks=1
00:04:42.027  		
00:04:42.027  		'
00:04:42.027    03:55:10 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:42.027  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:42.027  		--rc genhtml_branch_coverage=1
00:04:42.027  		--rc genhtml_function_coverage=1
00:04:42.027  		--rc genhtml_legend=1
00:04:42.027  		--rc geninfo_all_blocks=1
00:04:42.027  		--rc geninfo_unexecuted_blocks=1
00:04:42.027  		
00:04:42.027  		'
00:04:42.027    03:55:10 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:04:42.027  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:42.027  		--rc genhtml_branch_coverage=1
00:04:42.027  		--rc genhtml_function_coverage=1
00:04:42.027  		--rc genhtml_legend=1
00:04:42.027  		--rc geninfo_all_blocks=1
00:04:42.027  		--rc geninfo_unexecuted_blocks=1
00:04:42.027  		
00:04:42.027  		'
00:04:42.027   03:55:10 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:04:42.027   03:55:10 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=108901
00:04:42.027   03:55:10 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:04:42.027   03:55:10 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:04:42.027   03:55:10 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 108901
00:04:42.027   03:55:10 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 108901 ']'
00:04:42.027   03:55:10 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:42.027   03:55:10 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:42.027   03:55:10 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:42.027  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:42.027   03:55:10 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:42.027   03:55:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:42.027  [2024-12-09 03:55:10.422841] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:04:42.027  [2024-12-09 03:55:10.422941] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108901 ]
00:04:42.027  [2024-12-09 03:55:10.492899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:42.027  [2024-12-09 03:55:10.556229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:42.027  [2024-12-09 03:55:10.556295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:42.027  [2024-12-09 03:55:10.556357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:04:42.027  [2024-12-09 03:55:10.556361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:04:42.286   03:55:10 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:42.286   03:55:10 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:04:42.286   03:55:10 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:04:42.286   03:55:10 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:42.286   03:55:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:42.286  [2024-12-09 03:55:10.673355] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:04:42.286  [2024-12-09 03:55:10.673381] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:04:42.286  [2024-12-09 03:55:10.673398] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:04:42.286  [2024-12-09 03:55:10.673409] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:04:42.286  [2024-12-09 03:55:10.673419] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:04:42.286   03:55:10 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:42.286   03:55:10 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:04:42.286   03:55:10 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:42.286   03:55:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:42.286  [2024-12-09 03:55:10.771111] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:04:42.286   03:55:10 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:42.286   03:55:10 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:04:42.286   03:55:10 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:42.286   03:55:10 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:42.287   03:55:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:42.287  ************************************
00:04:42.287  START TEST scheduler_create_thread
00:04:42.287  ************************************
00:04:42.287   03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:04:42.287   03:55:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:04:42.287   03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:42.287   03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:42.287  2
00:04:42.287   03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:42.287   03:55:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:04:42.287   03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:42.287   03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:42.287  3
00:04:42.287   03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:42.287   03:55:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:04:42.287   03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:42.287   03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:42.287  4
00:04:42.287   03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:42.287   03:55:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:04:42.287   03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:42.287   03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:42.287  5
00:04:42.287   03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:42.287   03:55:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:04:42.287   03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:42.287   03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:42.287  6
00:04:42.287   03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:42.287   03:55:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:04:42.287   03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:42.287   03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:42.287  7
00:04:42.287   03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:42.287   03:55:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:04:42.287   03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:42.287   03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:42.546  8
00:04:42.546   03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:42.546   03:55:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:04:42.546   03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:42.546   03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:42.546  9
00:04:42.546   03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:42.546   03:55:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:04:42.546   03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:42.546   03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:42.546  10
00:04:42.546   03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:42.546    03:55:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:04:42.546    03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:42.546    03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:42.546    03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:42.546   03:55:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:04:42.546   03:55:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:04:42.546   03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:42.546   03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:42.546   03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:42.546    03:55:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:04:42.546    03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:42.546    03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:42.546    03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:42.546   03:55:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:04:42.546   03:55:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:04:42.546   03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:42.546   03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:43.113   03:55:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:43.113  
00:04:43.113  real	0m0.592s
00:04:43.113  user	0m0.010s
00:04:43.113  sys	0m0.005s
00:04:43.113   03:55:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:43.113   03:55:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:43.113  ************************************
00:04:43.113  END TEST scheduler_create_thread
00:04:43.113  ************************************
00:04:43.113   03:55:11 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:04:43.113   03:55:11 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 108901
00:04:43.113   03:55:11 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 108901 ']'
00:04:43.113   03:55:11 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 108901
00:04:43.113    03:55:11 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:04:43.113   03:55:11 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:43.113    03:55:11 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108901
00:04:43.113   03:55:11 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:04:43.113   03:55:11 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:04:43.113   03:55:11 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108901'
00:04:43.113  killing process with pid 108901
00:04:43.113   03:55:11 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 108901
00:04:43.113   03:55:11 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 108901
00:04:43.371  [2024-12-09 03:55:11.875327] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:04:43.631  
00:04:43.631  real	0m1.865s
00:04:43.631  user	0m2.580s
00:04:43.631  sys	0m0.367s
00:04:43.631   03:55:12 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:43.631   03:55:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:43.631  ************************************
00:04:43.631  END TEST event_scheduler
00:04:43.631  ************************************
00:04:43.631   03:55:12 event -- event/event.sh@51 -- # modprobe -n nbd
00:04:43.631   03:55:12 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:04:43.631   03:55:12 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:43.631   03:55:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:43.631   03:55:12 event -- common/autotest_common.sh@10 -- # set +x
00:04:43.631  ************************************
00:04:43.631  START TEST app_repeat
00:04:43.631  ************************************
00:04:43.631   03:55:12 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:04:43.631   03:55:12 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:43.631   03:55:12 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:43.631   03:55:12 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:04:43.631   03:55:12 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:43.631   03:55:12 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:04:43.631   03:55:12 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:04:43.631   03:55:12 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:04:43.631   03:55:12 event.app_repeat -- event/event.sh@19 -- # repeat_pid=109211
00:04:43.631   03:55:12 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:04:43.631   03:55:12 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:04:43.631   03:55:12 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 109211'
00:04:43.631  Process app_repeat pid: 109211
00:04:43.631   03:55:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:04:43.631   03:55:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:04:43.631  spdk_app_start Round 0
00:04:43.631   03:55:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 109211 /var/tmp/spdk-nbd.sock
00:04:43.631   03:55:12 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 109211 ']'
00:04:43.631   03:55:12 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:43.631   03:55:12 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:43.631   03:55:12 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:04:43.631  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:04:43.631   03:55:12 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:43.631   03:55:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:43.631  [2024-12-09 03:55:12.176197] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:04:43.631  [2024-12-09 03:55:12.176268] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109211 ]
00:04:43.890  [2024-12-09 03:55:12.242002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:43.890  [2024-12-09 03:55:12.297315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:43.890  [2024-12-09 03:55:12.297319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:43.890   03:55:12 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:43.890   03:55:12 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:04:43.890   03:55:12 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:44.149  Malloc0
00:04:44.149   03:55:12 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:44.408  Malloc1
00:04:44.666   03:55:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:44.666   03:55:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:44.666   03:55:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:44.666   03:55:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:04:44.666   03:55:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:44.666   03:55:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:04:44.666   03:55:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:44.666   03:55:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:44.666   03:55:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:44.666   03:55:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:04:44.666   03:55:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:44.666   03:55:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:04:44.666   03:55:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:04:44.666   03:55:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:04:44.666   03:55:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:44.666   03:55:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:04:44.924  /dev/nbd0
00:04:44.924    03:55:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:04:44.924   03:55:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:04:44.924   03:55:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:04:44.924   03:55:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:04:44.924   03:55:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:04:44.924   03:55:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:04:44.924   03:55:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:04:44.924   03:55:13 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:04:44.924   03:55:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:04:44.924   03:55:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:04:44.924   03:55:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:44.924  1+0 records in
00:04:44.924  1+0 records out
00:04:44.924  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000176952 s, 23.1 MB/s
00:04:44.924    03:55:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:44.924   03:55:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:04:44.924   03:55:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:44.924   03:55:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:04:44.924   03:55:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:04:44.924   03:55:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:44.924   03:55:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:44.924   03:55:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:04:45.183  /dev/nbd1
00:04:45.183    03:55:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:04:45.183   03:55:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:04:45.183   03:55:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:04:45.183   03:55:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:04:45.183   03:55:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:04:45.183   03:55:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:04:45.183   03:55:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:04:45.183   03:55:13 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:04:45.183   03:55:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:04:45.183   03:55:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:04:45.183   03:55:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:45.183  1+0 records in
00:04:45.183  1+0 records out
00:04:45.183  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219963 s, 18.6 MB/s
00:04:45.183    03:55:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:45.183   03:55:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:04:45.183   03:55:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:45.183   03:55:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:04:45.183   03:55:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:04:45.183   03:55:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:45.183   03:55:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:45.183    03:55:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:45.183    03:55:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:45.183     03:55:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:45.441    03:55:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:04:45.441    {
00:04:45.441      "nbd_device": "/dev/nbd0",
00:04:45.441      "bdev_name": "Malloc0"
00:04:45.441    },
00:04:45.441    {
00:04:45.441      "nbd_device": "/dev/nbd1",
00:04:45.441      "bdev_name": "Malloc1"
00:04:45.441    }
00:04:45.441  ]'
00:04:45.441     03:55:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:04:45.441    {
00:04:45.441      "nbd_device": "/dev/nbd0",
00:04:45.441      "bdev_name": "Malloc0"
00:04:45.441    },
00:04:45.441    {
00:04:45.441      "nbd_device": "/dev/nbd1",
00:04:45.441      "bdev_name": "Malloc1"
00:04:45.441    }
00:04:45.441  ]'
00:04:45.441     03:55:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:45.441    03:55:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:04:45.441  /dev/nbd1'
00:04:45.441     03:55:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:04:45.441  /dev/nbd1'
00:04:45.441     03:55:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:45.441    03:55:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:04:45.441    03:55:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:04:45.441   03:55:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:04:45.441   03:55:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:04:45.441   03:55:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:04:45.441   03:55:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:45.441   03:55:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:45.441   03:55:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:04:45.441   03:55:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:45.441   03:55:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:04:45.441   03:55:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:04:45.441  256+0 records in
00:04:45.441  256+0 records out
00:04:45.441  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00504387 s, 208 MB/s
00:04:45.441   03:55:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:45.441   03:55:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:04:45.441  256+0 records in
00:04:45.441  256+0 records out
00:04:45.441  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212683 s, 49.3 MB/s
00:04:45.441   03:55:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:45.441   03:55:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:04:45.700  256+0 records in
00:04:45.700  256+0 records out
00:04:45.700  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235782 s, 44.5 MB/s
00:04:45.700   03:55:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:04:45.700   03:55:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:45.700   03:55:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:45.700   03:55:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:04:45.700   03:55:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:45.700   03:55:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:04:45.700   03:55:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:04:45.700   03:55:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:45.700   03:55:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:04:45.700   03:55:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:45.700   03:55:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:04:45.700   03:55:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:45.700   03:55:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:04:45.700   03:55:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:45.700   03:55:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:45.700   03:55:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:04:45.700   03:55:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:04:45.700   03:55:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:45.700   03:55:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:04:45.958    03:55:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:04:45.958   03:55:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:04:45.958   03:55:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:04:45.958   03:55:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:45.958   03:55:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:45.958   03:55:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:04:45.958   03:55:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:45.958   03:55:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:45.958   03:55:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:45.958   03:55:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:04:46.216    03:55:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:04:46.216   03:55:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:04:46.216   03:55:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:04:46.216   03:55:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:46.216   03:55:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:46.216   03:55:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:04:46.216   03:55:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:46.216   03:55:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:46.216    03:55:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:46.216    03:55:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:46.216     03:55:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:46.474    03:55:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:04:46.474     03:55:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:04:46.474     03:55:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:46.474    03:55:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:04:46.474     03:55:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:04:46.474     03:55:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:46.474     03:55:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:04:46.474    03:55:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:04:46.474    03:55:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:04:46.474   03:55:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:04:46.474   03:55:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:04:46.474   03:55:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:04:46.474   03:55:14 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:04:46.731   03:55:15 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:04:46.987  [2024-12-09 03:55:15.443309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:46.987  [2024-12-09 03:55:15.497801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:46.987  [2024-12-09 03:55:15.497801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:46.987  [2024-12-09 03:55:15.554314] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:04:46.987  [2024-12-09 03:55:15.554396] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:04:50.269   03:55:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:04:50.269   03:55:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:04:50.269  spdk_app_start Round 1
00:04:50.269   03:55:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 109211 /var/tmp/spdk-nbd.sock
00:04:50.269   03:55:18 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 109211 ']'
00:04:50.269   03:55:18 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:50.269   03:55:18 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:50.269   03:55:18 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:04:50.269  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:04:50.269   03:55:18 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:50.269   03:55:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:50.269   03:55:18 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:50.269   03:55:18 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:04:50.269   03:55:18 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:50.269  Malloc0
00:04:50.269   03:55:18 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:50.529  Malloc1
00:04:50.529   03:55:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:50.529   03:55:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:50.529   03:55:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:50.529   03:55:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:04:50.529   03:55:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:50.529   03:55:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:04:50.529   03:55:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:50.529   03:55:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:50.529   03:55:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:50.529   03:55:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:04:50.529   03:55:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:50.529   03:55:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:04:50.529   03:55:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:04:50.529   03:55:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:04:50.529   03:55:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:50.529   03:55:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:04:51.096  /dev/nbd0
00:04:51.096    03:55:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:04:51.096   03:55:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:04:51.096   03:55:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:04:51.096   03:55:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:04:51.096   03:55:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:04:51.096   03:55:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:04:51.096   03:55:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:04:51.096   03:55:19 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:04:51.096   03:55:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:04:51.096   03:55:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:04:51.096   03:55:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:51.096  1+0 records in
00:04:51.096  1+0 records out
00:04:51.096  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000177867 s, 23.0 MB/s
00:04:51.096    03:55:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:51.096   03:55:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:04:51.096   03:55:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:51.096   03:55:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:04:51.096   03:55:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:04:51.096   03:55:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:51.096   03:55:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:51.096   03:55:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:04:51.354  /dev/nbd1
00:04:51.354    03:55:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:04:51.354   03:55:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:04:51.354   03:55:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:04:51.354   03:55:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:04:51.354   03:55:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:04:51.354   03:55:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:04:51.354   03:55:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:04:51.354   03:55:19 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:04:51.354   03:55:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:04:51.354   03:55:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:04:51.354   03:55:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:51.354  1+0 records in
00:04:51.354  1+0 records out
00:04:51.354  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225209 s, 18.2 MB/s
00:04:51.354    03:55:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:51.354   03:55:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:04:51.354   03:55:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:51.354   03:55:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:04:51.354   03:55:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:04:51.354   03:55:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:51.354   03:55:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:51.354    03:55:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:51.354    03:55:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:51.354     03:55:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:51.613    03:55:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:04:51.613    {
00:04:51.613      "nbd_device": "/dev/nbd0",
00:04:51.613      "bdev_name": "Malloc0"
00:04:51.613    },
00:04:51.613    {
00:04:51.613      "nbd_device": "/dev/nbd1",
00:04:51.613      "bdev_name": "Malloc1"
00:04:51.613    }
00:04:51.613  ]'
00:04:51.613     03:55:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:04:51.613    {
00:04:51.613      "nbd_device": "/dev/nbd0",
00:04:51.613      "bdev_name": "Malloc0"
00:04:51.613    },
00:04:51.613    {
00:04:51.613      "nbd_device": "/dev/nbd1",
00:04:51.613      "bdev_name": "Malloc1"
00:04:51.613    }
00:04:51.613  ]'
00:04:51.613     03:55:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:51.613    03:55:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:04:51.613  /dev/nbd1'
00:04:51.613     03:55:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:04:51.613  /dev/nbd1'
00:04:51.613     03:55:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:51.613    03:55:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:04:51.613    03:55:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:04:51.613   03:55:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:04:51.613   03:55:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:04:51.613   03:55:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:04:51.613   03:55:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:51.613   03:55:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:51.613   03:55:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:04:51.613   03:55:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:51.613   03:55:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:04:51.613   03:55:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:04:51.613  256+0 records in
00:04:51.613  256+0 records out
00:04:51.613  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00496244 s, 211 MB/s
00:04:51.613   03:55:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:51.613   03:55:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:04:51.613  256+0 records in
00:04:51.613  256+0 records out
00:04:51.613  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209895 s, 50.0 MB/s
00:04:51.613   03:55:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:51.613   03:55:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:04:51.613  256+0 records in
00:04:51.613  256+0 records out
00:04:51.613  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251976 s, 41.6 MB/s
00:04:51.613   03:55:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:04:51.613   03:55:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:51.613   03:55:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:51.613   03:55:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:04:51.613   03:55:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:51.613   03:55:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:04:51.613   03:55:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:04:51.613   03:55:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:51.613   03:55:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:04:51.613   03:55:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:51.613   03:55:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:04:51.613   03:55:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:51.613   03:55:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:04:51.613   03:55:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:51.613   03:55:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:51.613   03:55:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:04:51.613   03:55:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:04:51.613   03:55:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:51.613   03:55:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:04:51.871    03:55:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:04:51.871   03:55:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:04:51.871   03:55:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:04:51.871   03:55:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:51.871   03:55:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:51.871   03:55:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:04:51.871   03:55:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:51.871   03:55:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:51.871   03:55:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:51.871   03:55:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:04:52.438    03:55:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:04:52.438   03:55:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:04:52.438   03:55:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:04:52.438   03:55:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:52.438   03:55:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:52.438   03:55:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:04:52.438   03:55:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:52.438   03:55:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:52.438    03:55:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:52.438    03:55:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:52.438     03:55:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:52.438    03:55:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:04:52.438     03:55:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:04:52.438     03:55:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:52.696    03:55:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:04:52.696     03:55:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:04:52.696     03:55:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:52.696     03:55:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:04:52.696    03:55:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:04:52.696    03:55:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:04:52.696   03:55:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:04:52.696   03:55:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:04:52.696   03:55:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:04:52.696   03:55:21 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:04:52.955   03:55:21 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:04:53.214  [2024-12-09 03:55:21.564038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:53.214  [2024-12-09 03:55:21.622114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:53.214  [2024-12-09 03:55:21.622114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:53.214  [2024-12-09 03:55:21.676310] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:04:53.214  [2024-12-09 03:55:21.676383] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:04:56.493   03:55:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:04:56.493   03:55:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:04:56.493  spdk_app_start Round 2
00:04:56.493   03:55:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 109211 /var/tmp/spdk-nbd.sock
00:04:56.493   03:55:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 109211 ']'
00:04:56.493   03:55:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:56.493   03:55:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:56.493   03:55:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:04:56.493  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:04:56.493   03:55:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:56.493   03:55:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:56.493   03:55:24 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:56.493   03:55:24 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:04:56.493   03:55:24 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:56.493  Malloc0
00:04:56.493   03:55:24 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:56.750  Malloc1
00:04:56.750   03:55:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:56.750   03:55:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:56.750   03:55:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:56.750   03:55:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:04:56.750   03:55:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:56.750   03:55:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:04:56.750   03:55:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:56.750   03:55:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:56.750   03:55:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:56.750   03:55:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:04:56.750   03:55:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:56.750   03:55:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:04:56.750   03:55:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:04:56.750   03:55:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:04:56.750   03:55:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:56.750   03:55:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:04:57.006  /dev/nbd0
00:04:57.006    03:55:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:04:57.006   03:55:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:04:57.006   03:55:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:04:57.006   03:55:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:04:57.006   03:55:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:04:57.006   03:55:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:04:57.006   03:55:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:04:57.006   03:55:25 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:04:57.006   03:55:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:04:57.006   03:55:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:04:57.006   03:55:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:57.006  1+0 records in
00:04:57.006  1+0 records out
00:04:57.006  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000170438 s, 24.0 MB/s
00:04:57.006    03:55:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:57.006   03:55:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:04:57.006   03:55:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:57.006   03:55:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:04:57.006   03:55:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:04:57.006   03:55:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:57.006   03:55:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:57.006   03:55:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:04:57.263  /dev/nbd1
00:04:57.263    03:55:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:04:57.263   03:55:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:04:57.263   03:55:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:04:57.263   03:55:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:04:57.263   03:55:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:04:57.263   03:55:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:04:57.263   03:55:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:04:57.520   03:55:25 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:04:57.520   03:55:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:04:57.520   03:55:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:04:57.520   03:55:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:57.520  1+0 records in
00:04:57.520  1+0 records out
00:04:57.520  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204611 s, 20.0 MB/s
00:04:57.520    03:55:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:57.520   03:55:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:04:57.520   03:55:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:57.520   03:55:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:04:57.520   03:55:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:04:57.520   03:55:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:57.520   03:55:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:57.520    03:55:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:57.520    03:55:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:57.520     03:55:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:57.777    03:55:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:04:57.777    {
00:04:57.777      "nbd_device": "/dev/nbd0",
00:04:57.777      "bdev_name": "Malloc0"
00:04:57.777    },
00:04:57.777    {
00:04:57.777      "nbd_device": "/dev/nbd1",
00:04:57.777      "bdev_name": "Malloc1"
00:04:57.777    }
00:04:57.777  ]'
00:04:57.777     03:55:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:04:57.777    {
00:04:57.777      "nbd_device": "/dev/nbd0",
00:04:57.777      "bdev_name": "Malloc0"
00:04:57.777    },
00:04:57.777    {
00:04:57.777      "nbd_device": "/dev/nbd1",
00:04:57.777      "bdev_name": "Malloc1"
00:04:57.777    }
00:04:57.777  ]'
00:04:57.777     03:55:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:57.777    03:55:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:04:57.777  /dev/nbd1'
00:04:57.777     03:55:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:04:57.777  /dev/nbd1'
00:04:57.777     03:55:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:57.777    03:55:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:04:57.777    03:55:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:04:57.777   03:55:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:04:57.777   03:55:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:04:57.777   03:55:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:04:57.777   03:55:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:57.777   03:55:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:57.777   03:55:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:04:57.777   03:55:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:57.777   03:55:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:04:57.777   03:55:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:04:57.777  256+0 records in
00:04:57.777  256+0 records out
00:04:57.777  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00480454 s, 218 MB/s
00:04:57.777   03:55:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:57.778   03:55:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:04:57.778  256+0 records in
00:04:57.778  256+0 records out
00:04:57.778  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0232653 s, 45.1 MB/s
00:04:57.778   03:55:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:57.778   03:55:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:04:57.778  256+0 records in
00:04:57.778  256+0 records out
00:04:57.778  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0232975 s, 45.0 MB/s
00:04:57.778   03:55:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:04:57.778   03:55:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:57.778   03:55:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:57.778   03:55:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:04:57.778   03:55:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:57.778   03:55:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:04:57.778   03:55:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:04:57.778   03:55:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:57.778   03:55:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:04:57.778   03:55:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:57.778   03:55:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:04:57.778   03:55:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:57.778   03:55:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:04:57.778   03:55:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:57.778   03:55:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:57.778   03:55:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:04:57.778   03:55:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:04:57.778   03:55:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:57.778   03:55:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:04:58.035    03:55:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:04:58.035   03:55:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:04:58.035   03:55:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:04:58.035   03:55:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:58.035   03:55:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:58.035   03:55:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:04:58.035   03:55:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:58.036   03:55:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:58.036   03:55:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:58.036   03:55:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:04:58.293    03:55:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:04:58.293   03:55:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:04:58.293   03:55:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:04:58.293   03:55:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:58.293   03:55:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:58.293   03:55:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:04:58.293   03:55:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:58.293   03:55:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:58.293    03:55:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:58.293    03:55:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:58.293     03:55:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:58.551    03:55:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:04:58.551     03:55:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:04:58.551     03:55:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:58.809    03:55:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:04:58.809     03:55:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:04:58.809     03:55:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:58.809     03:55:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:04:58.809    03:55:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:04:58.809    03:55:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:04:58.809   03:55:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:04:58.809   03:55:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:04:58.809   03:55:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:04:58.810   03:55:27 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:04:59.068   03:55:27 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:04:59.326  [2024-12-09 03:55:27.651927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:59.326  [2024-12-09 03:55:27.706928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:59.326  [2024-12-09 03:55:27.706932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:59.326  [2024-12-09 03:55:27.763677] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:04:59.326  [2024-12-09 03:55:27.763745] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:02.612   03:55:30 event.app_repeat -- event/event.sh@38 -- # waitforlisten 109211 /var/tmp/spdk-nbd.sock
00:05:02.612   03:55:30 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 109211 ']'
00:05:02.612   03:55:30 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:02.612   03:55:30 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:02.612   03:55:30 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:02.612  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:02.612   03:55:30 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:02.612   03:55:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:02.612   03:55:30 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:02.612   03:55:30 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:02.612   03:55:30 event.app_repeat -- event/event.sh@39 -- # killprocess 109211
00:05:02.612   03:55:30 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 109211 ']'
00:05:02.612   03:55:30 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 109211
00:05:02.612    03:55:30 event.app_repeat -- common/autotest_common.sh@959 -- # uname
00:05:02.612   03:55:30 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:02.612    03:55:30 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109211
00:05:02.612   03:55:30 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:02.612   03:55:30 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:02.612   03:55:30 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109211'
00:05:02.612  killing process with pid 109211
00:05:02.612   03:55:30 event.app_repeat -- common/autotest_common.sh@973 -- # kill 109211
00:05:02.612   03:55:30 event.app_repeat -- common/autotest_common.sh@978 -- # wait 109211
00:05:02.612  spdk_app_start is called in Round 0.
00:05:02.612  Shutdown signal received, stop current app iteration
00:05:02.612  Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 reinitialization...
00:05:02.612  spdk_app_start is called in Round 1.
00:05:02.612  Shutdown signal received, stop current app iteration
00:05:02.612  Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 reinitialization...
00:05:02.612  spdk_app_start is called in Round 2.
00:05:02.612  Shutdown signal received, stop current app iteration
00:05:02.612  Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 reinitialization...
00:05:02.612  spdk_app_start is called in Round 3.
00:05:02.612  Shutdown signal received, stop current app iteration
00:05:02.612   03:55:30 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:05:02.612   03:55:30 event.app_repeat -- event/event.sh@42 -- # return 0
00:05:02.612  
00:05:02.612  real	0m18.777s
00:05:02.612  user	0m41.469s
00:05:02.612  sys	0m3.278s
00:05:02.612   03:55:30 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:02.612   03:55:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:02.612  ************************************
00:05:02.612  END TEST app_repeat
00:05:02.612  ************************************
00:05:02.612   03:55:30 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:05:02.612   03:55:30 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:05:02.612   03:55:30 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:02.612   03:55:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:02.612   03:55:30 event -- common/autotest_common.sh@10 -- # set +x
00:05:02.612  ************************************
00:05:02.612  START TEST cpu_locks
00:05:02.612  ************************************
00:05:02.612   03:55:30 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:05:02.612  * Looking for test storage...
00:05:02.612  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:05:02.612    03:55:31 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:02.612     03:55:31 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version
00:05:02.612     03:55:31 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:02.612    03:55:31 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:02.612    03:55:31 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:02.612    03:55:31 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:02.612    03:55:31 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:02.612    03:55:31 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:05:02.612    03:55:31 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:05:02.612    03:55:31 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:05:02.612    03:55:31 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:05:02.612    03:55:31 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:05:02.612    03:55:31 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:05:02.612    03:55:31 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:05:02.612    03:55:31 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:02.612    03:55:31 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:05:02.612    03:55:31 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:05:02.612    03:55:31 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:02.612    03:55:31 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:02.612     03:55:31 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:05:02.612     03:55:31 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:05:02.612     03:55:31 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:02.612     03:55:31 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:05:02.612    03:55:31 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:05:02.612     03:55:31 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:05:02.612     03:55:31 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:05:02.612     03:55:31 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:02.612     03:55:31 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:05:02.612    03:55:31 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:05:02.612    03:55:31 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:02.612    03:55:31 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:02.612    03:55:31 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:05:02.612    03:55:31 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:02.612    03:55:31 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:02.612  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:02.612  		--rc genhtml_branch_coverage=1
00:05:02.612  		--rc genhtml_function_coverage=1
00:05:02.612  		--rc genhtml_legend=1
00:05:02.612  		--rc geninfo_all_blocks=1
00:05:02.612  		--rc geninfo_unexecuted_blocks=1
00:05:02.612  		
00:05:02.612  		'
00:05:02.612    03:55:31 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:02.612  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:02.612  		--rc genhtml_branch_coverage=1
00:05:02.612  		--rc genhtml_function_coverage=1
00:05:02.612  		--rc genhtml_legend=1
00:05:02.612  		--rc geninfo_all_blocks=1
00:05:02.612  		--rc geninfo_unexecuted_blocks=1
00:05:02.612  		
00:05:02.612  		'
00:05:02.612    03:55:31 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:02.612  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:02.612  		--rc genhtml_branch_coverage=1
00:05:02.612  		--rc genhtml_function_coverage=1
00:05:02.612  		--rc genhtml_legend=1
00:05:02.612  		--rc geninfo_all_blocks=1
00:05:02.612  		--rc geninfo_unexecuted_blocks=1
00:05:02.612  		
00:05:02.612  		'
00:05:02.612    03:55:31 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:02.612  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:02.612  		--rc genhtml_branch_coverage=1
00:05:02.612  		--rc genhtml_function_coverage=1
00:05:02.612  		--rc genhtml_legend=1
00:05:02.612  		--rc geninfo_all_blocks=1
00:05:02.612  		--rc geninfo_unexecuted_blocks=1
00:05:02.612  		
00:05:02.612  		'
00:05:02.612   03:55:31 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:05:02.612   03:55:31 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:05:02.612   03:55:31 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:05:02.613   03:55:31 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:05:02.613   03:55:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:02.613   03:55:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:02.613   03:55:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:02.613  ************************************
00:05:02.613  START TEST default_locks
00:05:02.613  ************************************
00:05:02.613   03:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks
00:05:02.613   03:55:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=111591
00:05:02.613   03:55:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:02.613   03:55:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 111591
00:05:02.613   03:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 111591 ']'
00:05:02.613   03:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:02.613   03:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:02.613   03:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:02.613  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:02.613   03:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:02.613   03:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:02.872  [2024-12-09 03:55:31.214936] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:05:02.872  [2024-12-09 03:55:31.215036] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111591 ]
00:05:02.872  [2024-12-09 03:55:31.283952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:02.872  [2024-12-09 03:55:31.344668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:03.130   03:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:03.130   03:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0
00:05:03.130   03:55:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 111591
00:05:03.130   03:55:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 111591
00:05:03.130   03:55:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:03.388  lslocks: write error
00:05:03.388   03:55:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 111591
00:05:03.388   03:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 111591 ']'
00:05:03.388   03:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 111591
00:05:03.388    03:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname
00:05:03.388   03:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:03.388    03:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111591
00:05:03.388   03:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:03.388   03:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:03.388   03:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111591'
00:05:03.388  killing process with pid 111591
00:05:03.388   03:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 111591
00:05:03.388   03:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 111591
00:05:03.954   03:55:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 111591
00:05:03.954   03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0
00:05:03.954   03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 111591
00:05:03.954   03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:05:03.954   03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:03.954    03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:05:03.954   03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:03.954   03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 111591
00:05:03.954   03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 111591 ']'
00:05:03.954   03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:03.954   03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:03.954   03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:03.954  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:03.954   03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:03.954   03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:03.954  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (111591) - No such process
00:05:03.954  ERROR: process (pid: 111591) is no longer running
00:05:03.954   03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:03.954   03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:05:03.954   03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:05:03.954   03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:03.954   03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:03.954   03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:03.954   03:55:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:05:03.954   03:55:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:03.954   03:55:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:05:03.954   03:55:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:03.954  
00:05:03.954  real	0m1.196s
00:05:03.954  user	0m1.149s
00:05:03.954  sys	0m0.517s
00:05:03.954   03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:03.954   03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:03.954  ************************************
00:05:03.954  END TEST default_locks
00:05:03.954  ************************************
00:05:03.954   03:55:32 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:05:03.954   03:55:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:03.954   03:55:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:03.954   03:55:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:03.954  ************************************
00:05:03.954  START TEST default_locks_via_rpc
00:05:03.954  ************************************
00:05:03.954   03:55:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:05:03.954   03:55:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=111867
00:05:03.954   03:55:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:03.954   03:55:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 111867
00:05:03.954   03:55:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 111867 ']'
00:05:03.954   03:55:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:03.954   03:55:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:03.954   03:55:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:03.954  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:03.954   03:55:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:03.954   03:55:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:03.954  [2024-12-09 03:55:32.464002] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:05:03.954  [2024-12-09 03:55:32.464115] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111867 ]
00:05:04.212  [2024-12-09 03:55:32.530718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:04.212  [2024-12-09 03:55:32.591046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:04.470   03:55:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:04.470   03:55:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:04.470   03:55:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:05:04.470   03:55:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:04.470   03:55:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:04.470   03:55:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:04.470   03:55:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:05:04.470   03:55:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:04.470   03:55:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:05:04.470   03:55:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:04.470   03:55:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:05:04.470   03:55:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:04.470   03:55:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:04.470   03:55:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:04.470   03:55:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 111867
00:05:04.470   03:55:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 111867
00:05:04.470   03:55:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:04.727   03:55:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 111867
00:05:04.727   03:55:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 111867 ']'
00:05:04.727   03:55:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 111867
00:05:04.727    03:55:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:05:04.727   03:55:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:04.727    03:55:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111867
00:05:04.727   03:55:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:04.727   03:55:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:04.727   03:55:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111867'
00:05:04.727  killing process with pid 111867
00:05:04.727   03:55:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 111867
00:05:04.727   03:55:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 111867
00:05:04.985  
00:05:04.985  real	0m1.148s
00:05:04.985  user	0m1.129s
00:05:04.985  sys	0m0.487s
00:05:04.985   03:55:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:04.985   03:55:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:04.985  ************************************
00:05:04.985  END TEST default_locks_via_rpc
00:05:04.985  ************************************
00:05:05.243   03:55:33 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:05:05.243   03:55:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:05.243   03:55:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:05.243   03:55:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:05.243  ************************************
00:05:05.243  START TEST non_locking_app_on_locked_coremask
00:05:05.243  ************************************
00:05:05.243   03:55:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:05:05.243   03:55:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=112027
00:05:05.243   03:55:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:05.243   03:55:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 112027 /var/tmp/spdk.sock
00:05:05.243   03:55:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 112027 ']'
00:05:05.243   03:55:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:05.243   03:55:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:05.243   03:55:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:05.243  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:05.243   03:55:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:05.243   03:55:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
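The `waitforlisten 112027 /var/tmp/spdk.sock` trace above (with its `max_retries=100` and the "Waiting for process to start up..." message) polls until the target is listening on its UNIX-domain RPC socket. A hedged sketch of that polling loop; the retry-loop body is assumed, only the knobs and message shown in the log are taken as-is:

```shell
#!/usr/bin/env bash
# Sketch of a waitforlisten-style helper (loop body assumed; message,
# rpc_addr default, and max_retries taken from the traced log lines).
waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    local i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died while we waited
        [ -S "$rpc_addr" ] && return 0           # socket exists: it is listening
        sleep 0.1
    done
    return 1                                      # gave up after max_retries
}
```

In the harness the helper runs right after launching `spdk_tgt`, so the subsequent RPC-driven test steps only start once the socket is up.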
00:05:05.243  [2024-12-09 03:55:33.664755] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:05:05.243  [2024-12-09 03:55:33.664837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112027 ]
00:05:05.243  [2024-12-09 03:55:33.730588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:05.243  [2024-12-09 03:55:33.787026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:05.501   03:55:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:05.501   03:55:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:05.501   03:55:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=112041
00:05:05.501   03:55:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:05:05.501   03:55:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 112041 /var/tmp/spdk2.sock
00:05:05.501   03:55:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 112041 ']'
00:05:05.501   03:55:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:05.501   03:55:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:05.501   03:55:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:05.501  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:05.501   03:55:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:05.501   03:55:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:05.759  [2024-12-09 03:55:34.109974] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:05:05.759  [2024-12-09 03:55:34.110054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112041 ]
00:05:05.759  [2024-12-09 03:55:34.207576] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:05.759  [2024-12-09 03:55:34.207618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:05.759  [2024-12-09 03:55:34.323852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:06.691   03:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:06.692   03:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:06.692   03:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 112027
00:05:06.692   03:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 112027
00:05:06.692   03:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:07.257  lslocks: write error
00:05:07.257   03:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 112027
00:05:07.257   03:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 112027 ']'
00:05:07.257   03:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 112027
00:05:07.257    03:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:07.257   03:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:07.257    03:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112027
00:05:07.257   03:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:07.257   03:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:07.257   03:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112027'
00:05:07.257  killing process with pid 112027
00:05:07.257   03:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 112027
00:05:07.257   03:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 112027
00:05:08.191   03:55:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 112041
00:05:08.191   03:55:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 112041 ']'
00:05:08.191   03:55:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 112041
00:05:08.191    03:55:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:08.191   03:55:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:08.191    03:55:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112041
00:05:08.191   03:55:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:08.191   03:55:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:08.191   03:55:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112041'
00:05:08.191  killing process with pid 112041
00:05:08.191   03:55:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 112041
00:05:08.191   03:55:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 112041
00:05:08.450  
00:05:08.450  real	0m3.257s
00:05:08.450  user	0m3.508s
00:05:08.450  sys	0m1.038s
00:05:08.450   03:55:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:08.450   03:55:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:08.450  ************************************
00:05:08.450  END TEST non_locking_app_on_locked_coremask
00:05:08.450  ************************************
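The `locks_exist` helper traced above (`event/cpu_locks.sh@22`) pipes `lslocks -p PID` into `grep -q spdk_cpu_lock`. The stray `lslocks: write error` lines in this log are a harmless side effect of that pipeline: `grep -q` exits on the first match, so `lslocks` gets EPIPE on its remaining output. A minimal sketch of the pattern, assuming util-linux `lslocks` is available:

```shell
#!/usr/bin/env bash
# Sketch of the locks_exist check from event/cpu_locks.sh: report whether
# a process holds a file lock whose path mentions spdk_cpu_lock.
locks_exist() {
    local pid=$1
    # grep -q stops at the first match, which can make lslocks report
    # "lslocks: write error" (EPIPE) -- harmless, as seen in this log.
    lslocks -p "$pid" 2>/dev/null | grep -q spdk_cpu_lock
}

# Example: the current shell holds no spdk_cpu_lock, so this reports none.
if locks_exist $$; then
    echo "lock held"
else
    echo "no lock"
fi
```

The check is what lets the test assert that the first target (started without `--disable-cpumask-locks`) actually took the core lock.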
00:05:08.450   03:55:36 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:05:08.450   03:55:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:08.450   03:55:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:08.450   03:55:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:08.450  ************************************
00:05:08.450  START TEST locking_app_on_unlocked_coremask
00:05:08.450  ************************************
00:05:08.450   03:55:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:05:08.450   03:55:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=112431
00:05:08.450   03:55:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:05:08.450   03:55:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 112431 /var/tmp/spdk.sock
00:05:08.450   03:55:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 112431 ']'
00:05:08.450   03:55:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:08.450   03:55:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:08.450   03:55:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:08.450  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:08.450   03:55:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:08.450   03:55:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:08.450  [2024-12-09 03:55:36.975172] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:05:08.450  [2024-12-09 03:55:36.975303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112431 ]
00:05:08.709  [2024-12-09 03:55:37.042968] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:08.709  [2024-12-09 03:55:37.043004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:08.709  [2024-12-09 03:55:37.100503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:08.968   03:55:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:08.968   03:55:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:08.968   03:55:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=112477
00:05:08.968   03:55:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:08.968   03:55:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 112477 /var/tmp/spdk2.sock
00:05:08.968   03:55:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 112477 ']'
00:05:08.968   03:55:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:08.968   03:55:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:08.968   03:55:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:08.968  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:08.968   03:55:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:08.968   03:55:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:08.968  [2024-12-09 03:55:37.419059] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:05:08.968  [2024-12-09 03:55:37.419152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112477 ]
00:05:08.968  [2024-12-09 03:55:37.520889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:09.227  [2024-12-09 03:55:37.633367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:10.162   03:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:10.162   03:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:10.162   03:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 112477
00:05:10.162   03:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 112477
00:05:10.162   03:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:10.420  lslocks: write error
00:05:10.420   03:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 112431
00:05:10.420   03:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 112431 ']'
00:05:10.420   03:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 112431
00:05:10.420    03:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:10.420   03:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:10.420    03:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112431
00:05:10.679   03:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:10.679   03:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:10.679   03:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112431'
00:05:10.679  killing process with pid 112431
00:05:10.679   03:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 112431
00:05:10.679   03:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 112431
00:05:11.245   03:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 112477
00:05:11.245   03:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 112477 ']'
00:05:11.245   03:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 112477
00:05:11.245    03:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:11.245   03:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:11.245    03:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112477
00:05:11.504   03:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:11.504   03:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:11.504   03:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112477'
00:05:11.504  killing process with pid 112477
00:05:11.504   03:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 112477
00:05:11.504   03:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 112477
00:05:11.762  
00:05:11.762  real	0m3.343s
00:05:11.762  user	0m3.587s
00:05:11.762  sys	0m1.067s
00:05:11.762   03:55:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:11.762   03:55:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:11.762  ************************************
00:05:11.762  END TEST locking_app_on_unlocked_coremask
00:05:11.762  ************************************
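The `killprocess` traces above show a consistent shape: verify the pid is alive with `kill -0`, read its name with `ps --no-headers -o comm=`, refuse to kill `sudo`, then signal and reap. A hedged sketch of that flow (exact upstream logic in `autotest_common.sh` is assumed from the traced line numbers):

```shell
#!/usr/bin/env bash
# Sketch of a killprocess-style helper, following the steps visible in
# the traced log lines: liveness check, name check, kill, then wait.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1   # bail out if the pid is gone
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = "sudo" ] && return 1         # never kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                  # reap it; ignore non-child noise
}

sleep 30 &
pid=$!
killprocess "$pid" || true
kill -0 "$pid" 2>/dev/null || echo "process is gone"
```

In the log the traced process name is `reactor_0`, which is why the `sudo` guard passes and the kill proceeds.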
00:05:11.762   03:55:40 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:05:11.762   03:55:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:11.762   03:55:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:11.762   03:55:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:11.762  ************************************
00:05:11.762  START TEST locking_app_on_locked_coremask
00:05:11.762  ************************************
00:05:11.762   03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:05:11.762   03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=112855
00:05:11.762   03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:11.762   03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 112855 /var/tmp/spdk.sock
00:05:11.762   03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 112855 ']'
00:05:11.762   03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:11.762   03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:11.762   03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:11.762  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:11.762   03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:11.762   03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:12.020  [2024-12-09 03:55:40.371108] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:05:12.020  [2024-12-09 03:55:40.371187] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112855 ]
00:05:12.020  [2024-12-09 03:55:40.439244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:12.020  [2024-12-09 03:55:40.499574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:12.278   03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:12.278   03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:12.278   03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=112913
00:05:12.278   03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 112913 /var/tmp/spdk2.sock
00:05:12.279   03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:12.279   03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:05:12.279   03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 112913 /var/tmp/spdk2.sock
00:05:12.279   03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:05:12.279   03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:12.279    03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:05:12.279   03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:12.279   03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 112913 /var/tmp/spdk2.sock
00:05:12.279   03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 112913 ']'
00:05:12.279   03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:12.279   03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:12.279   03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:12.279  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:12.279   03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:12.279   03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:12.279  [2024-12-09 03:55:40.832918] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:05:12.279  [2024-12-09 03:55:40.832995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112913 ]
00:05:12.537  [2024-12-09 03:55:40.933172] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 112855 has claimed it.
00:05:12.537  [2024-12-09 03:55:40.933238] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:13.102  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (112913) - No such process
00:05:13.102  ERROR: process (pid: 112913) is no longer running
00:05:13.102   03:55:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:13.102   03:55:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:05:13.102   03:55:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:05:13.102   03:55:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:13.102   03:55:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:13.102   03:55:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:13.102   03:55:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 112855
00:05:13.102   03:55:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 112855
00:05:13.102   03:55:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:13.377  lslocks: write error
00:05:13.377   03:55:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 112855
00:05:13.377   03:55:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 112855 ']'
00:05:13.377   03:55:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 112855
00:05:13.377    03:55:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:13.377   03:55:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:13.377    03:55:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112855
00:05:13.634   03:55:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:13.634   03:55:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:13.634   03:55:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112855'
00:05:13.634  killing process with pid 112855
00:05:13.634   03:55:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 112855
00:05:13.634   03:55:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 112855
00:05:13.893  
00:05:13.893  real	0m2.070s
00:05:13.893  user	0m2.263s
00:05:13.893  sys	0m0.673s
00:05:13.893   03:55:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:13.893   03:55:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:13.893  ************************************
00:05:13.893  END TEST locking_app_on_locked_coremask
00:05:13.893  ************************************
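The `NOT waitforlisten ...` call above (`event/cpu_locks.sh@120`) expects the second target to fail startup, because the first target already claimed the core lock ("Cannot create lock on core 0, probably process 112855 has claimed it"). The `es=1` bookkeeping in the trace is the wrapper recording the expected failure. A minimal sketch of such an expected-failure wrapper (the real helper's argument validation is omitted):

```shell
#!/usr/bin/env bash
# Sketch of an expected-failure wrapper in the spirit of the NOT helper
# traced above: succeed when the wrapped command fails, and vice versa.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # command failed, as expected
}

NOT false && echo "false failed as expected"
NOT true || echo "true unexpectedly succeeded"
```

Inverting the status lets the harness assert negative cases with the same `run_test` machinery used for positive ones.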
00:05:13.893   03:55:42 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:05:13.893   03:55:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:13.893   03:55:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:13.893   03:55:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:13.893  ************************************
00:05:13.893  START TEST locking_overlapped_coremask
00:05:13.893  ************************************
00:05:13.893   03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:05:13.893   03:55:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=113086
00:05:13.893   03:55:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:05:13.893   03:55:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 113086 /var/tmp/spdk.sock
00:05:13.893   03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 113086 ']'
00:05:13.893   03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:13.893   03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:13.893   03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:13.893  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:13.893   03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:13.893   03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:14.153  [2024-12-09 03:55:42.493214] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:05:14.153  [2024-12-09 03:55:42.493316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113086 ]
00:05:14.153  [2024-12-09 03:55:42.561225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:14.153  [2024-12-09 03:55:42.623330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:14.153  [2024-12-09 03:55:42.623366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:14.153  [2024-12-09 03:55:42.623369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:14.412   03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:14.412   03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:14.412   03:55:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=113211
00:05:14.412   03:55:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:05:14.412   03:55:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 113211 /var/tmp/spdk2.sock
00:05:14.412   03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:05:14.412   03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 113211 /var/tmp/spdk2.sock
00:05:14.412   03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:05:14.412   03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:14.412    03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:05:14.412   03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:14.412   03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 113211 /var/tmp/spdk2.sock
00:05:14.412   03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 113211 ']'
00:05:14.412   03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:14.412   03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:14.412   03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:14.412  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:14.412   03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:14.412   03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:14.412  [2024-12-09 03:55:42.961241] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:05:14.412  [2024-12-09 03:55:42.961343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113211 ]
00:05:14.670  [2024-12-09 03:55:43.066622] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 113086 has claimed it.
00:05:14.670  [2024-12-09 03:55:43.066690] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:15.237  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (113211) - No such process
00:05:15.237  ERROR: process (pid: 113211) is no longer running
00:05:15.237   03:55:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:15.237   03:55:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:05:15.237   03:55:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:05:15.237   03:55:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:15.237   03:55:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:15.237   03:55:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:15.237   03:55:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:05:15.237   03:55:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:05:15.237   03:55:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:05:15.237   03:55:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:05:15.237   03:55:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 113086
00:05:15.237   03:55:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 113086 ']'
00:05:15.237   03:55:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 113086
00:05:15.237    03:55:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:05:15.237   03:55:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:15.237    03:55:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 113086
00:05:15.237   03:55:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:15.237   03:55:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:15.237   03:55:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 113086'
00:05:15.237  killing process with pid 113086
00:05:15.237   03:55:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 113086
00:05:15.237   03:55:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 113086
00:05:15.805  
00:05:15.805  real	0m1.691s
00:05:15.805  user	0m4.698s
00:05:15.805  sys	0m0.474s
00:05:15.805   03:55:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:15.805   03:55:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:15.805  ************************************
00:05:15.805  END TEST locking_overlapped_coremask
00:05:15.805  ************************************
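00:05:15.805  The check_remaining_locks traced above (event/cpu_locks.sh@36-38) compares the lock files actually present against a brace-expanded expected set. A self-contained sketch of that comparison, using a temp dir in place of /var/tmp:

```shell
# Sketch of the check_remaining_locks pattern from event/cpu_locks.sh:
# the glob of lock files on disk must match, verbatim, the brace-expanded
# expected list for cores 000-002 (coremask 0x7).
tmp=$(mktemp -d)
touch "$tmp"/spdk_cpu_lock_{000..002}

locks=("$tmp"/spdk_cpu_lock_*)                    # what is really on disk
locks_expected=("$tmp"/spdk_cpu_lock_{000..002})  # what the mask implies

if [[ ${locks[*]} == "${locks_expected[*]}" ]]; then
    echo "locks match"
fi
rm -rf "$tmp"
```

If a reactor died without releasing its lock, or claimed an extra core, the glob and the expected list diverge and the `[[ == ]]` test fails.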
00:05:15.805   03:55:44 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:05:15.805   03:55:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:15.805   03:55:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:15.805   03:55:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:15.805  ************************************
00:05:15.805  START TEST locking_overlapped_coremask_via_rpc
00:05:15.805  ************************************
00:05:15.805   03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:05:15.805   03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=113381
00:05:15.805   03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:05:15.805   03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 113381 /var/tmp/spdk.sock
00:05:15.805   03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 113381 ']'
00:05:15.805   03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:15.805   03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:15.805   03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:15.805  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:15.805   03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:15.805   03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:15.805  [2024-12-09 03:55:44.234403] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:05:15.805  [2024-12-09 03:55:44.234485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113381 ]
00:05:15.805  [2024-12-09 03:55:44.297445] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:15.805  [2024-12-09 03:55:44.297475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:15.805  [2024-12-09 03:55:44.352289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:15.805  [2024-12-09 03:55:44.352343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:15.805  [2024-12-09 03:55:44.352348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:16.064   03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:16.064   03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:16.064   03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=113392
00:05:16.064   03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:05:16.064   03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 113392 /var/tmp/spdk2.sock
00:05:16.064   03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 113392 ']'
00:05:16.064   03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:16.064   03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:16.064   03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:16.064  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:16.064   03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:16.064   03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:16.323  [2024-12-09 03:55:44.681984] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:05:16.323  [2024-12-09 03:55:44.682069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113392 ]
00:05:16.323  [2024-12-09 03:55:44.785432] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:16.323  [2024-12-09 03:55:44.785476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:16.581  [2024-12-09 03:55:44.911948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:16.581  [2024-12-09 03:55:44.915330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:05:16.581  [2024-12-09 03:55:44.915333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:17.146   03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:17.146   03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:17.146   03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:05:17.146   03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:17.146   03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:17.146   03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:17.146   03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:05:17.146   03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0
00:05:17.146   03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:05:17.146   03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:05:17.146   03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:17.146    03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:05:17.146   03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:17.146   03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:05:17.146   03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:17.146   03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:17.146  [2024-12-09 03:55:45.700382] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 113381 has claimed it.
00:05:17.146  request:
00:05:17.146  {
00:05:17.146  "method": "framework_enable_cpumask_locks",
00:05:17.146  "req_id": 1
00:05:17.146  }
00:05:17.146  Got JSON-RPC error response
00:05:17.146  response:
00:05:17.146  {
00:05:17.146  "code": -32603,
00:05:17.146  "message": "Failed to claim CPU core: 2"
00:05:17.146  }
00:05:17.146   03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:05:17.146   03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1
00:05:17.146   03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:17.146   03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:17.146   03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:17.146   03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 113381 /var/tmp/spdk.sock
00:05:17.146   03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 113381 ']'
00:05:17.147   03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:17.147   03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:17.147   03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:17.147  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:17.147   03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:17.147   03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:17.711   03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:17.711   03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:17.711   03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 113392 /var/tmp/spdk2.sock
00:05:17.711   03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 113392 ']'
00:05:17.711   03:55:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:17.711   03:55:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:17.711   03:55:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:17.711  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:17.711   03:55:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:17.711   03:55:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:17.711   03:55:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:17.711   03:55:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:17.711   03:55:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks
00:05:17.711   03:55:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:05:17.711   03:55:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:05:17.711   03:55:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:05:17.711  
00:05:17.711  real	0m2.087s
00:05:17.711  user	0m1.177s
00:05:17.711  sys	0m0.156s
00:05:17.711   03:55:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:17.711   03:55:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:17.711  ************************************
00:05:17.711  END TEST locking_overlapped_coremask_via_rpc
00:05:17.711  ************************************
00:05:17.969   03:55:46 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup
00:05:17.969   03:55:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 113381 ]]
00:05:17.969   03:55:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 113381
00:05:17.969   03:55:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 113381 ']'
00:05:17.969   03:55:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 113381
00:05:17.969    03:55:46 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:05:17.969   03:55:46 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:17.969    03:55:46 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 113381
00:05:17.969   03:55:46 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:17.969   03:55:46 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:17.969   03:55:46 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 113381'
00:05:17.969  killing process with pid 113381
00:05:17.969   03:55:46 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 113381
00:05:17.969   03:55:46 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 113381
00:05:18.227   03:55:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 113392 ]]
00:05:18.227   03:55:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 113392
00:05:18.227   03:55:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 113392 ']'
00:05:18.227   03:55:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 113392
00:05:18.227    03:55:46 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:05:18.227   03:55:46 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:18.227    03:55:46 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 113392
00:05:18.484   03:55:46 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:05:18.484   03:55:46 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:05:18.484   03:55:46 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 113392'
00:05:18.484  killing process with pid 113392
00:05:18.484   03:55:46 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 113392
00:05:18.484   03:55:46 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 113392
00:05:18.743   03:55:47 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:05:18.743   03:55:47 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup
00:05:18.743   03:55:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 113381 ]]
00:05:18.743   03:55:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 113381
00:05:18.743   03:55:47 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 113381 ']'
00:05:18.743   03:55:47 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 113381
00:05:18.743  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (113381) - No such process
00:05:18.743   03:55:47 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 113381 is not found'
00:05:18.743  Process with pid 113381 is not found
00:05:18.743   03:55:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 113392 ]]
00:05:18.743   03:55:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 113392
00:05:18.743   03:55:47 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 113392 ']'
00:05:18.743   03:55:47 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 113392
00:05:18.743  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (113392) - No such process
00:05:18.743   03:55:47 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 113392 is not found'
00:05:18.743  Process with pid 113392 is not found
00:05:18.743   03:55:47 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:05:18.743  
00:05:18.743  real	0m16.251s
00:05:18.743  user	0m29.454s
00:05:18.743  sys	0m5.374s
00:05:18.743   03:55:47 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:18.743   03:55:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:18.743  ************************************
00:05:18.743  END TEST cpu_locks
00:05:18.743  ************************************
00:05:18.743  
00:05:18.743  real	0m40.958s
00:05:18.743  user	1m20.107s
00:05:18.743  sys	0m9.490s
00:05:18.743   03:55:47 event -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:18.743   03:55:47 event -- common/autotest_common.sh@10 -- # set +x
00:05:18.743  ************************************
00:05:18.743  END TEST event
00:05:18.743  ************************************
00:05:18.743   03:55:47  -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh
00:05:18.743   03:55:47  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:18.743   03:55:47  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:18.743   03:55:47  -- common/autotest_common.sh@10 -- # set +x
00:05:18.743  ************************************
00:05:18.743  START TEST thread
00:05:18.743  ************************************
00:05:18.743   03:55:47 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh
00:05:19.002  * Looking for test storage...
00:05:19.002  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread
00:05:19.002    03:55:47 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:19.002     03:55:47 thread -- common/autotest_common.sh@1711 -- # lcov --version
00:05:19.002     03:55:47 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:19.003    03:55:47 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:19.003    03:55:47 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:19.003    03:55:47 thread -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:19.003    03:55:47 thread -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:19.003    03:55:47 thread -- scripts/common.sh@336 -- # IFS=.-:
00:05:19.003    03:55:47 thread -- scripts/common.sh@336 -- # read -ra ver1
00:05:19.003    03:55:47 thread -- scripts/common.sh@337 -- # IFS=.-:
00:05:19.003    03:55:47 thread -- scripts/common.sh@337 -- # read -ra ver2
00:05:19.003    03:55:47 thread -- scripts/common.sh@338 -- # local 'op=<'
00:05:19.003    03:55:47 thread -- scripts/common.sh@340 -- # ver1_l=2
00:05:19.003    03:55:47 thread -- scripts/common.sh@341 -- # ver2_l=1
00:05:19.003    03:55:47 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:19.003    03:55:47 thread -- scripts/common.sh@344 -- # case "$op" in
00:05:19.003    03:55:47 thread -- scripts/common.sh@345 -- # : 1
00:05:19.003    03:55:47 thread -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:19.003    03:55:47 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:19.003     03:55:47 thread -- scripts/common.sh@365 -- # decimal 1
00:05:19.003     03:55:47 thread -- scripts/common.sh@353 -- # local d=1
00:05:19.003     03:55:47 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:19.003     03:55:47 thread -- scripts/common.sh@355 -- # echo 1
00:05:19.003    03:55:47 thread -- scripts/common.sh@365 -- # ver1[v]=1
00:05:19.003     03:55:47 thread -- scripts/common.sh@366 -- # decimal 2
00:05:19.003     03:55:47 thread -- scripts/common.sh@353 -- # local d=2
00:05:19.003     03:55:47 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:19.003     03:55:47 thread -- scripts/common.sh@355 -- # echo 2
00:05:19.003    03:55:47 thread -- scripts/common.sh@366 -- # ver2[v]=2
00:05:19.003    03:55:47 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:19.003    03:55:47 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:19.003    03:55:47 thread -- scripts/common.sh@368 -- # return 0
00:05:19.003    03:55:47 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:19.003    03:55:47 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:19.003  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:19.003  		--rc genhtml_branch_coverage=1
00:05:19.003  		--rc genhtml_function_coverage=1
00:05:19.003  		--rc genhtml_legend=1
00:05:19.003  		--rc geninfo_all_blocks=1
00:05:19.003  		--rc geninfo_unexecuted_blocks=1
00:05:19.003  		
00:05:19.003  		'
00:05:19.003    03:55:47 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:19.003  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:19.003  		--rc genhtml_branch_coverage=1
00:05:19.003  		--rc genhtml_function_coverage=1
00:05:19.003  		--rc genhtml_legend=1
00:05:19.003  		--rc geninfo_all_blocks=1
00:05:19.003  		--rc geninfo_unexecuted_blocks=1
00:05:19.003  		
00:05:19.003  		'
00:05:19.003    03:55:47 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:19.003  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:19.003  		--rc genhtml_branch_coverage=1
00:05:19.003  		--rc genhtml_function_coverage=1
00:05:19.003  		--rc genhtml_legend=1
00:05:19.003  		--rc geninfo_all_blocks=1
00:05:19.003  		--rc geninfo_unexecuted_blocks=1
00:05:19.003  		
00:05:19.003  		'
00:05:19.003    03:55:47 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:19.003  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:19.003  		--rc genhtml_branch_coverage=1
00:05:19.003  		--rc genhtml_function_coverage=1
00:05:19.003  		--rc genhtml_legend=1
00:05:19.003  		--rc geninfo_all_blocks=1
00:05:19.003  		--rc geninfo_unexecuted_blocks=1
00:05:19.003  		
00:05:19.003  		'
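00:05:19.003  The cmp_versions walk traced above (scripts/common.sh@333-368, comparing lcov 1.15 against 2) splits both versions on '.', '-' and ':' and compares numeric components left to right. A minimal stand-alone sketch of that "less than" check (assumption: missing components compare as 0, matching the padded loop in the trace):

```shell
# Sketch of the cmp_versions '<' path from scripts/common.sh: tokenize each
# version on . - : and compare component by component until they differ.
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions are not strictly less
}

lt 1.15 2 && echo "1.15 < 2"
```

In the trace this resolves at the very first component (1 < 2), which is why LCOV_OPTS gets populated with the modern lcov flags.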
00:05:19.003   03:55:47 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:05:19.003   03:55:47 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:05:19.003   03:55:47 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:19.003   03:55:47 thread -- common/autotest_common.sh@10 -- # set +x
00:05:19.003  ************************************
00:05:19.003  START TEST thread_poller_perf
00:05:19.003  ************************************
00:05:19.003   03:55:47 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:05:19.003  [2024-12-09 03:55:47.487603] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:05:19.003  [2024-12-09 03:55:47.487669] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113889 ]
00:05:19.003  [2024-12-09 03:55:47.555118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:19.261  [2024-12-09 03:55:47.611962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:19.261  Running 1000 pollers for 1 seconds with 1 microseconds period.
00:05:20.192  ======================================
00:05:20.192  busy:2713055937 (cyc)
00:05:20.192  total_run_count: 367000
00:05:20.192  tsc_hz: 2700000000 (cyc)
00:05:20.192  ======================================
00:05:20.192  poller_cost: 7392 (cyc), 2737 (nsec)

00:05:20.192  
00:05:20.192  real	0m1.208s
00:05:20.192  user	0m1.132s
00:05:20.192  sys	0m0.071s
00:05:20.192   03:55:48 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:20.192   03:55:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:05:20.192  ************************************
00:05:20.192  END TEST thread_poller_perf
00:05:20.192  ************************************
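00:05:20.192  The poller_cost in the summary above is derived from the other three numbers. A quick sketch of the arithmetic (assumption: cost per call = busy cycles / total_run_count, converted to nanoseconds via the reported tsc_hz):

```shell
# Reproduce the poller_cost line from the run above using integer arithmetic.
busy_cyc=2713055937     # busy cycles reported by poller_perf
total_run_count=367000  # poller invocations in the 1 s window
tsc_hz=2700000000       # 2.7 GHz timestamp counter

cost_cyc=$(( busy_cyc / total_run_count ))
cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))

echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"
# prints: poller_cost: 7392 (cyc), 2737 (nsec)
```

The second poller_perf run (period 0 instead of 1 µs) follows the same formula: 2702088930 / 4436000 ≈ 609 cyc ≈ 225 ns per call.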
00:05:20.192   03:55:48 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:05:20.192   03:55:48 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:05:20.192   03:55:48 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:20.192   03:55:48 thread -- common/autotest_common.sh@10 -- # set +x
00:05:20.192  ************************************
00:05:20.192  START TEST thread_poller_perf
00:05:20.192  ************************************
00:05:20.192   03:55:48 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:05:20.192  [2024-12-09 03:55:48.751889] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:05:20.192  [2024-12-09 03:55:48.751955] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114041 ]
00:05:20.449  [2024-12-09 03:55:48.819169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:20.449  [2024-12-09 03:55:48.874404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:20.449  Running 1000 pollers for 1 seconds with 0 microseconds period.
00:05:21.381  ======================================
00:05:21.381  busy:2702088930 (cyc)
00:05:21.381  total_run_count: 4436000
00:05:21.381  tsc_hz: 2700000000 (cyc)
00:05:21.381  ======================================
00:05:21.381  poller_cost: 609 (cyc), 225 (nsec)
00:05:21.381  
00:05:21.381  real	0m1.202s
00:05:21.381  user	0m1.124s
00:05:21.381  sys	0m0.073s
00:05:21.381   03:55:49 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:21.381   03:55:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:05:21.381  ************************************
00:05:21.381  END TEST thread_poller_perf
00:05:21.381  ************************************
00:05:21.640   03:55:49 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:05:21.640  
00:05:21.640  real	0m2.658s
00:05:21.640  user	0m2.398s
00:05:21.640  sys	0m0.265s
00:05:21.640   03:55:49 thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:21.640   03:55:49 thread -- common/autotest_common.sh@10 -- # set +x
00:05:21.640  ************************************
00:05:21.640  END TEST thread
00:05:21.640  ************************************
00:05:21.640   03:55:49  -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:05:21.640   03:55:49  -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:05:21.640   03:55:49  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:21.640   03:55:49  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:21.640   03:55:49  -- common/autotest_common.sh@10 -- # set +x
00:05:21.640  ************************************
00:05:21.640  START TEST app_cmdline
00:05:21.640  ************************************
00:05:21.640   03:55:50 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:05:21.640  * Looking for test storage...
00:05:21.640  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:05:21.640    03:55:50 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:21.640     03:55:50 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version
00:05:21.640     03:55:50 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:21.640    03:55:50 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:21.640    03:55:50 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:21.640    03:55:50 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:21.640    03:55:50 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:21.640    03:55:50 app_cmdline -- scripts/common.sh@336 -- # IFS=.-:
00:05:21.640    03:55:50 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1
00:05:21.640    03:55:50 app_cmdline -- scripts/common.sh@337 -- # IFS=.-:
00:05:21.640    03:55:50 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2
00:05:21.640    03:55:50 app_cmdline -- scripts/common.sh@338 -- # local 'op=<'
00:05:21.640    03:55:50 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2
00:05:21.640    03:55:50 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1
00:05:21.640    03:55:50 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:21.640    03:55:50 app_cmdline -- scripts/common.sh@344 -- # case "$op" in
00:05:21.640    03:55:50 app_cmdline -- scripts/common.sh@345 -- # : 1
00:05:21.640    03:55:50 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:21.640    03:55:50 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:21.640     03:55:50 app_cmdline -- scripts/common.sh@365 -- # decimal 1
00:05:21.640     03:55:50 app_cmdline -- scripts/common.sh@353 -- # local d=1
00:05:21.640     03:55:50 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:21.640     03:55:50 app_cmdline -- scripts/common.sh@355 -- # echo 1
00:05:21.640    03:55:50 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1
00:05:21.640     03:55:50 app_cmdline -- scripts/common.sh@366 -- # decimal 2
00:05:21.640     03:55:50 app_cmdline -- scripts/common.sh@353 -- # local d=2
00:05:21.640     03:55:50 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:21.640     03:55:50 app_cmdline -- scripts/common.sh@355 -- # echo 2
00:05:21.640    03:55:50 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2
00:05:21.640    03:55:50 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:21.640    03:55:50 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:21.640    03:55:50 app_cmdline -- scripts/common.sh@368 -- # return 0
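[editor's note] The `lt 1.15 2` trace above (from scripts/common.sh) splits both version strings on `.`/`-`/`:`, pads the shorter one with zeros, and compares component-wise. A hedged Python sketch of the same comparison logic (the function name and signature are illustrative, not SPDK's):

```python
def cmp_versions(v1, op, v2):
    """Compare dotted version strings component-by-component,
    padding the shorter version with zeros (so "2" compares as "2.0")."""
    a = [int(x) for x in v1.replace('-', '.').split('.')]
    b = [int(x) for x in v2.replace('-', '.').split('.')]
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    for x, y in zip(a, b):
        if x > y:
            return op in ('>', '>=')
        if x < y:
            return op in ('<', '<=')
    return op in ('<=', '>=', '=')  # all components equal

print(cmp_versions('1.15', '<', '2'))  # True: lcov 1.15 predates the 2.x CLI
```

The padding step is why the trace shows `ver1[v]=1` versus `ver2[v]=2` deciding the result on the first component.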
00:05:21.640    03:55:50 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:21.640    03:55:50 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:21.640  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:21.640  		--rc genhtml_branch_coverage=1
00:05:21.640  		--rc genhtml_function_coverage=1
00:05:21.640  		--rc genhtml_legend=1
00:05:21.640  		--rc geninfo_all_blocks=1
00:05:21.640  		--rc geninfo_unexecuted_blocks=1
00:05:21.640  		
00:05:21.640  		'
00:05:21.640    03:55:50 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:21.640  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:21.640  		--rc genhtml_branch_coverage=1
00:05:21.640  		--rc genhtml_function_coverage=1
00:05:21.640  		--rc genhtml_legend=1
00:05:21.640  		--rc geninfo_all_blocks=1
00:05:21.640  		--rc geninfo_unexecuted_blocks=1
00:05:21.640  		
00:05:21.640  		'
00:05:21.640    03:55:50 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:21.640  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:21.640  		--rc genhtml_branch_coverage=1
00:05:21.640  		--rc genhtml_function_coverage=1
00:05:21.640  		--rc genhtml_legend=1
00:05:21.640  		--rc geninfo_all_blocks=1
00:05:21.640  		--rc geninfo_unexecuted_blocks=1
00:05:21.640  		
00:05:21.640  		'
00:05:21.640    03:55:50 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:21.640  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:21.640  		--rc genhtml_branch_coverage=1
00:05:21.640  		--rc genhtml_function_coverage=1
00:05:21.640  		--rc genhtml_legend=1
00:05:21.640  		--rc geninfo_all_blocks=1
00:05:21.640  		--rc geninfo_unexecuted_blocks=1
00:05:21.640  		
00:05:21.640  		'
00:05:21.640   03:55:50 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:05:21.640   03:55:50 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=114250
00:05:21.640   03:55:50 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:05:21.640   03:55:50 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 114250
00:05:21.640   03:55:50 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 114250 ']'
00:05:21.640   03:55:50 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:21.640   03:55:50 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:21.640   03:55:50 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:21.640  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:21.640   03:55:50 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:21.640   03:55:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:05:21.898  [2024-12-09 03:55:50.217448] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:05:21.899  [2024-12-09 03:55:50.217552] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114250 ]
00:05:21.899  [2024-12-09 03:55:50.287859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:21.899  [2024-12-09 03:55:50.349267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:22.157   03:55:50 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:22.157   03:55:50 app_cmdline -- common/autotest_common.sh@868 -- # return 0
00:05:22.157   03:55:50 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:05:22.417  {
00:05:22.417    "version": "SPDK v25.01-pre git sha1 c4269c6e2",
00:05:22.417    "fields": {
00:05:22.417      "major": 25,
00:05:22.417      "minor": 1,
00:05:22.417      "patch": 0,
00:05:22.417      "suffix": "-pre",
00:05:22.417      "commit": "c4269c6e2"
00:05:22.417    }
00:05:22.417  }
00:05:22.417   03:55:50 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:05:22.417   03:55:50 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:05:22.417   03:55:50 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:05:22.417   03:55:50 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:05:22.417    03:55:50 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:05:22.417    03:55:50 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:22.417    03:55:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:05:22.417    03:55:50 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:05:22.417    03:55:50 app_cmdline -- app/cmdline.sh@26 -- # sort
00:05:22.417    03:55:50 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:22.417   03:55:50 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:05:22.417   03:55:50 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:05:22.417   03:55:50 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:05:22.417   03:55:50 app_cmdline -- common/autotest_common.sh@652 -- # local es=0
00:05:22.417   03:55:50 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:05:22.417   03:55:50 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:05:22.417   03:55:50 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:22.417    03:55:50 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:05:22.417   03:55:50 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:22.417    03:55:50 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:05:22.417   03:55:50 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:22.417   03:55:50 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:05:22.417   03:55:50 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:05:22.417   03:55:50 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:05:22.676  request:
00:05:22.676  {
00:05:22.676    "method": "env_dpdk_get_mem_stats",
00:05:22.676    "req_id": 1
00:05:22.676  }
00:05:22.676  Got JSON-RPC error response
00:05:22.676  response:
00:05:22.676  {
00:05:22.676    "code": -32601,
00:05:22.676    "message": "Method not found"
00:05:22.676  }
00:05:22.676   03:55:51 app_cmdline -- common/autotest_common.sh@655 -- # es=1
00:05:22.676   03:55:51 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:22.676   03:55:51 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:22.676   03:55:51 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:22.676   03:55:51 app_cmdline -- app/cmdline.sh@1 -- # killprocess 114250
00:05:22.676   03:55:51 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 114250 ']'
00:05:22.676   03:55:51 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 114250
00:05:22.676    03:55:51 app_cmdline -- common/autotest_common.sh@959 -- # uname
00:05:22.676   03:55:51 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:22.676    03:55:51 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 114250
00:05:22.676   03:55:51 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:22.676   03:55:51 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:22.676   03:55:51 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 114250'
00:05:22.676  killing process with pid 114250
00:05:22.676   03:55:51 app_cmdline -- common/autotest_common.sh@973 -- # kill 114250
00:05:22.676   03:55:51 app_cmdline -- common/autotest_common.sh@978 -- # wait 114250
00:05:23.242  
00:05:23.242  real	0m1.604s
00:05:23.242  user	0m1.990s
00:05:23.242  sys	0m0.480s
00:05:23.242   03:55:51 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:23.242   03:55:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:05:23.243  ************************************
00:05:23.243  END TEST app_cmdline
00:05:23.243  ************************************
00:05:23.243   03:55:51  -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh
00:05:23.243   03:55:51  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:23.243   03:55:51  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:23.243   03:55:51  -- common/autotest_common.sh@10 -- # set +x
00:05:23.243  ************************************
00:05:23.243  START TEST version
00:05:23.243  ************************************
00:05:23.243   03:55:51 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh
00:05:23.243  * Looking for test storage...
00:05:23.243  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:05:23.243    03:55:51 version -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:23.243     03:55:51 version -- common/autotest_common.sh@1711 -- # lcov --version
00:05:23.243     03:55:51 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:23.243    03:55:51 version -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:23.243    03:55:51 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:23.243    03:55:51 version -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:23.243    03:55:51 version -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:23.243    03:55:51 version -- scripts/common.sh@336 -- # IFS=.-:
00:05:23.243    03:55:51 version -- scripts/common.sh@336 -- # read -ra ver1
00:05:23.243    03:55:51 version -- scripts/common.sh@337 -- # IFS=.-:
00:05:23.243    03:55:51 version -- scripts/common.sh@337 -- # read -ra ver2
00:05:23.243    03:55:51 version -- scripts/common.sh@338 -- # local 'op=<'
00:05:23.243    03:55:51 version -- scripts/common.sh@340 -- # ver1_l=2
00:05:23.243    03:55:51 version -- scripts/common.sh@341 -- # ver2_l=1
00:05:23.243    03:55:51 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:23.243    03:55:51 version -- scripts/common.sh@344 -- # case "$op" in
00:05:23.243    03:55:51 version -- scripts/common.sh@345 -- # : 1
00:05:23.243    03:55:51 version -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:23.243    03:55:51 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:23.243     03:55:51 version -- scripts/common.sh@365 -- # decimal 1
00:05:23.243     03:55:51 version -- scripts/common.sh@353 -- # local d=1
00:05:23.243     03:55:51 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:23.243     03:55:51 version -- scripts/common.sh@355 -- # echo 1
00:05:23.243    03:55:51 version -- scripts/common.sh@365 -- # ver1[v]=1
00:05:23.243     03:55:51 version -- scripts/common.sh@366 -- # decimal 2
00:05:23.243     03:55:51 version -- scripts/common.sh@353 -- # local d=2
00:05:23.243     03:55:51 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:23.243     03:55:51 version -- scripts/common.sh@355 -- # echo 2
00:05:23.243    03:55:51 version -- scripts/common.sh@366 -- # ver2[v]=2
00:05:23.243    03:55:51 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:23.243    03:55:51 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:23.243    03:55:51 version -- scripts/common.sh@368 -- # return 0
00:05:23.243    03:55:51 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:23.243    03:55:51 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:23.243  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:23.243  		--rc genhtml_branch_coverage=1
00:05:23.243  		--rc genhtml_function_coverage=1
00:05:23.243  		--rc genhtml_legend=1
00:05:23.243  		--rc geninfo_all_blocks=1
00:05:23.243  		--rc geninfo_unexecuted_blocks=1
00:05:23.243  		
00:05:23.243  		'
00:05:23.243    03:55:51 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:23.243  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:23.243  		--rc genhtml_branch_coverage=1
00:05:23.243  		--rc genhtml_function_coverage=1
00:05:23.243  		--rc genhtml_legend=1
00:05:23.243  		--rc geninfo_all_blocks=1
00:05:23.243  		--rc geninfo_unexecuted_blocks=1
00:05:23.243  		
00:05:23.243  		'
00:05:23.243    03:55:51 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:23.243  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:23.243  		--rc genhtml_branch_coverage=1
00:05:23.243  		--rc genhtml_function_coverage=1
00:05:23.243  		--rc genhtml_legend=1
00:05:23.243  		--rc geninfo_all_blocks=1
00:05:23.243  		--rc geninfo_unexecuted_blocks=1
00:05:23.243  		
00:05:23.243  		'
00:05:23.243    03:55:51 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:23.243  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:23.243  		--rc genhtml_branch_coverage=1
00:05:23.243  		--rc genhtml_function_coverage=1
00:05:23.243  		--rc genhtml_legend=1
00:05:23.243  		--rc geninfo_all_blocks=1
00:05:23.243  		--rc geninfo_unexecuted_blocks=1
00:05:23.243  		
00:05:23.243  		'
00:05:23.243    03:55:51 version -- app/version.sh@17 -- # get_header_version major
00:05:23.243    03:55:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
00:05:23.243    03:55:51 version -- app/version.sh@14 -- # cut -f2
00:05:23.243    03:55:51 version -- app/version.sh@14 -- # tr -d '"'
00:05:23.502   03:55:51 version -- app/version.sh@17 -- # major=25
00:05:23.502    03:55:51 version -- app/version.sh@18 -- # get_header_version minor
00:05:23.502    03:55:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
00:05:23.502    03:55:51 version -- app/version.sh@14 -- # cut -f2
00:05:23.502    03:55:51 version -- app/version.sh@14 -- # tr -d '"'
00:05:23.502   03:55:51 version -- app/version.sh@18 -- # minor=1
00:05:23.502    03:55:51 version -- app/version.sh@19 -- # get_header_version patch
00:05:23.502    03:55:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
00:05:23.502    03:55:51 version -- app/version.sh@14 -- # cut -f2
00:05:23.502    03:55:51 version -- app/version.sh@14 -- # tr -d '"'
00:05:23.502   03:55:51 version -- app/version.sh@19 -- # patch=0
00:05:23.502    03:55:51 version -- app/version.sh@20 -- # get_header_version suffix
00:05:23.502    03:55:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
00:05:23.502    03:55:51 version -- app/version.sh@14 -- # cut -f2
00:05:23.502    03:55:51 version -- app/version.sh@14 -- # tr -d '"'
00:05:23.502   03:55:51 version -- app/version.sh@20 -- # suffix=-pre
00:05:23.502   03:55:51 version -- app/version.sh@22 -- # version=25.1
00:05:23.502   03:55:51 version -- app/version.sh@25 -- # (( patch != 0 ))
00:05:23.502   03:55:51 version -- app/version.sh@28 -- # version=25.1rc0
00:05:23.502   03:55:51 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python
00:05:23.502    03:55:51 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:05:23.502   03:55:51 version -- app/version.sh@30 -- # py_version=25.1rc0
00:05:23.502   03:55:51 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]]
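[editor's note] The `25.1rc0` string checked above is assembled from the version.h macros traced earlier: `major.minor`, plus `.patch` only when patch is nonzero, with a `-pre` suffix rendered as `rc0`. A sketch of that assembly using this run's values (function name is illustrative):

```python
def spdk_version(major, minor, patch, suffix):
    """Mirror the version.sh assembly seen in the trace:
    e.g. major=25, minor=1, patch=0, suffix='-pre' -> '25.1rc0'."""
    version = f"{major}.{minor}"
    if patch != 0:
        version += f".{patch}"        # patch component only when nonzero
    if suffix == "-pre":
        version += "rc0"              # pre-release trees report as rc0
    return version

print(spdk_version(25, 1, 0, "-pre"))  # 25.1rc0, matching py_version above
```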
00:05:23.502  
00:05:23.502  real	0m0.198s
00:05:23.502  user	0m0.137s
00:05:23.502  sys	0m0.086s
00:05:23.502   03:55:51 version -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:23.502   03:55:51 version -- common/autotest_common.sh@10 -- # set +x
00:05:23.502  ************************************
00:05:23.502  END TEST version
00:05:23.502  ************************************
00:05:23.502   03:55:51  -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']'
00:05:23.502   03:55:51  -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]]
00:05:23.502    03:55:51  -- spdk/autotest.sh@194 -- # uname -s
00:05:23.502   03:55:51  -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:05:23.502   03:55:51  -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:05:23.502   03:55:51  -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:05:23.502   03:55:51  -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:05:23.502   03:55:51  -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:05:23.502   03:55:51  -- spdk/autotest.sh@260 -- # timing_exit lib
00:05:23.502   03:55:51  -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:23.502   03:55:51  -- common/autotest_common.sh@10 -- # set +x
00:05:23.502   03:55:51  -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:05:23.502   03:55:51  -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:05:23.502   03:55:51  -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']'
00:05:23.502   03:55:51  -- spdk/autotest.sh@277 -- # export NET_TYPE
00:05:23.502   03:55:51  -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']'
00:05:23.502   03:55:51  -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']'
00:05:23.502   03:55:51  -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp
00:05:23.502   03:55:51  -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:05:23.502   03:55:51  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:23.502   03:55:51  -- common/autotest_common.sh@10 -- # set +x
00:05:23.502  ************************************
00:05:23.502  START TEST nvmf_tcp
00:05:23.502  ************************************
00:05:23.502   03:55:51 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp
00:05:23.502  * Looking for test storage...
00:05:23.502  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:05:23.502    03:55:52 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:23.502     03:55:52 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version
00:05:23.502     03:55:52 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:23.762    03:55:52 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:23.762    03:55:52 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:23.762    03:55:52 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:23.762    03:55:52 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:23.762    03:55:52 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:05:23.762    03:55:52 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:05:23.762    03:55:52 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:05:23.762    03:55:52 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:05:23.762    03:55:52 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:05:23.762    03:55:52 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:05:23.762    03:55:52 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:05:23.762    03:55:52 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:23.762    03:55:52 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in
00:05:23.762    03:55:52 nvmf_tcp -- scripts/common.sh@345 -- # : 1
00:05:23.762    03:55:52 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:23.762    03:55:52 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:23.762     03:55:52 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1
00:05:23.762     03:55:52 nvmf_tcp -- scripts/common.sh@353 -- # local d=1
00:05:23.762     03:55:52 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:23.762     03:55:52 nvmf_tcp -- scripts/common.sh@355 -- # echo 1
00:05:23.762    03:55:52 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:05:23.762     03:55:52 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2
00:05:23.762     03:55:52 nvmf_tcp -- scripts/common.sh@353 -- # local d=2
00:05:23.762     03:55:52 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:23.762     03:55:52 nvmf_tcp -- scripts/common.sh@355 -- # echo 2
00:05:23.762    03:55:52 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:05:23.762    03:55:52 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:23.762    03:55:52 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:23.762    03:55:52 nvmf_tcp -- scripts/common.sh@368 -- # return 0
00:05:23.762    03:55:52 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:23.762    03:55:52 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:23.762  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:23.762  		--rc genhtml_branch_coverage=1
00:05:23.762  		--rc genhtml_function_coverage=1
00:05:23.762  		--rc genhtml_legend=1
00:05:23.762  		--rc geninfo_all_blocks=1
00:05:23.762  		--rc geninfo_unexecuted_blocks=1
00:05:23.762  		
00:05:23.762  		'
00:05:23.762    03:55:52 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:23.762  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:23.762  		--rc genhtml_branch_coverage=1
00:05:23.762  		--rc genhtml_function_coverage=1
00:05:23.762  		--rc genhtml_legend=1
00:05:23.762  		--rc geninfo_all_blocks=1
00:05:23.762  		--rc geninfo_unexecuted_blocks=1
00:05:23.762  		
00:05:23.762  		'
00:05:23.762    03:55:52 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:23.762  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:23.762  		--rc genhtml_branch_coverage=1
00:05:23.762  		--rc genhtml_function_coverage=1
00:05:23.762  		--rc genhtml_legend=1
00:05:23.762  		--rc geninfo_all_blocks=1
00:05:23.762  		--rc geninfo_unexecuted_blocks=1
00:05:23.762  		
00:05:23.762  		'
00:05:23.762    03:55:52 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:23.762  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:23.762  		--rc genhtml_branch_coverage=1
00:05:23.762  		--rc genhtml_function_coverage=1
00:05:23.762  		--rc genhtml_legend=1
00:05:23.762  		--rc geninfo_all_blocks=1
00:05:23.762  		--rc geninfo_unexecuted_blocks=1
00:05:23.762  		
00:05:23.762  		'
00:05:23.762    03:55:52 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s
00:05:23.762   03:55:52 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']'
00:05:23.762   03:55:52 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp
00:05:23.762   03:55:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:05:23.762   03:55:52 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:23.762   03:55:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:23.762  ************************************
00:05:23.762  START TEST nvmf_target_core
00:05:23.762  ************************************
00:05:23.762   03:55:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp
00:05:23.762  * Looking for test storage...
00:05:23.762  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:05:23.762    03:55:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:23.762     03:55:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version
00:05:23.762     03:55:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:23.762    03:55:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:23.762    03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:23.762    03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:23.762    03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:23.762    03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-:
00:05:23.762    03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1
00:05:23.762    03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-:
00:05:23.762    03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2
00:05:23.762    03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<'
00:05:23.762    03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2
00:05:23.762    03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1
00:05:23.762    03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:23.762    03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in
00:05:23.762    03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1
00:05:23.762    03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:23.762    03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:23.762     03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1
00:05:23.762     03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1
00:05:23.762     03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:23.762     03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1
00:05:23.762    03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1
00:05:23.763     03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2
00:05:23.763     03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2
00:05:23.763     03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:23.763     03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2
00:05:23.763    03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2
00:05:23.763    03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:23.763    03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:23.763    03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0
00:05:23.763    03:55:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:23.763    03:55:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:23.763  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:23.763  		--rc genhtml_branch_coverage=1
00:05:23.763  		--rc genhtml_function_coverage=1
00:05:23.763  		--rc genhtml_legend=1
00:05:23.763  		--rc geninfo_all_blocks=1
00:05:23.763  		--rc geninfo_unexecuted_blocks=1
00:05:23.763  		
00:05:23.763  		'
00:05:23.763    03:55:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:23.763  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:23.763  		--rc genhtml_branch_coverage=1
00:05:23.763  		--rc genhtml_function_coverage=1
00:05:23.763  		--rc genhtml_legend=1
00:05:23.763  		--rc geninfo_all_blocks=1
00:05:23.763  		--rc geninfo_unexecuted_blocks=1
00:05:23.763  		
00:05:23.763  		'
00:05:23.763    03:55:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:23.763  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:23.763  		--rc genhtml_branch_coverage=1
00:05:23.763  		--rc genhtml_function_coverage=1
00:05:23.763  		--rc genhtml_legend=1
00:05:23.763  		--rc geninfo_all_blocks=1
00:05:23.763  		--rc geninfo_unexecuted_blocks=1
00:05:23.763  		
00:05:23.763  		'
00:05:23.763    03:55:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:23.763  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:23.763  		--rc genhtml_branch_coverage=1
00:05:23.763  		--rc genhtml_function_coverage=1
00:05:23.763  		--rc genhtml_legend=1
00:05:23.763  		--rc geninfo_all_blocks=1
00:05:23.763  		--rc geninfo_unexecuted_blocks=1
00:05:23.763  		
00:05:23.763  		'
00:05:23.763    03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s
00:05:23.763   03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']'
00:05:23.763   03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:05:23.763     03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s
00:05:23.763    03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:23.763    03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:23.763    03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:23.763    03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:23.763    03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:23.763    03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:23.763    03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:23.763    03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:23.763    03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:23.763     03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:23.763    03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:05:23.763    03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:05:23.763    03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:23.763    03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:23.763    03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:05:23.763    03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:23.763    03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:05:23.763     03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob
00:05:23.763     03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:23.763     03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:23.763     03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:23.763      03:55:52 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:23.763      03:55:52 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:23.763      03:55:52 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:23.763      03:55:52 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH
00:05:23.763      03:55:52 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:23.763    03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0
00:05:23.763    03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:05:23.763    03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:05:23.763    03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:23.763    03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:23.763    03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:23.763    03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:05:23.763  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:05:23.763    03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:05:23.763    03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:05:23.763    03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0
00:05:23.763   03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:05:23.763   03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@")
00:05:23.763   03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]]
00:05:23.764   03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp
00:05:23.764   03:55:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:05:23.764   03:55:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:23.764   03:55:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:05:23.764  ************************************
00:05:23.764  START TEST nvmf_abort
00:05:23.764  ************************************
00:05:23.764   03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp
00:05:24.023  * Looking for test storage...
00:05:24.023  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:24.023     03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version
00:05:24.023     03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-:
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-:
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<'
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:24.023     03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1
00:05:24.023     03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1
00:05:24.023     03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:24.023     03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1
00:05:24.023     03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2
00:05:24.023     03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2
00:05:24.023     03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:24.023     03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:24.023  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:24.023  		--rc genhtml_branch_coverage=1
00:05:24.023  		--rc genhtml_function_coverage=1
00:05:24.023  		--rc genhtml_legend=1
00:05:24.023  		--rc geninfo_all_blocks=1
00:05:24.023  		--rc geninfo_unexecuted_blocks=1
00:05:24.023  		
00:05:24.023  		'
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:24.023  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:24.023  		--rc genhtml_branch_coverage=1
00:05:24.023  		--rc genhtml_function_coverage=1
00:05:24.023  		--rc genhtml_legend=1
00:05:24.023  		--rc geninfo_all_blocks=1
00:05:24.023  		--rc geninfo_unexecuted_blocks=1
00:05:24.023  		
00:05:24.023  		'
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:24.023  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:24.023  		--rc genhtml_branch_coverage=1
00:05:24.023  		--rc genhtml_function_coverage=1
00:05:24.023  		--rc genhtml_legend=1
00:05:24.023  		--rc geninfo_all_blocks=1
00:05:24.023  		--rc geninfo_unexecuted_blocks=1
00:05:24.023  		
00:05:24.023  		'
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:24.023  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:24.023  		--rc genhtml_branch_coverage=1
00:05:24.023  		--rc genhtml_function_coverage=1
00:05:24.023  		--rc genhtml_legend=1
00:05:24.023  		--rc geninfo_all_blocks=1
00:05:24.023  		--rc geninfo_unexecuted_blocks=1
00:05:24.023  		
00:05:24.023  		'
00:05:24.023   03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:05:24.023     03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:24.023     03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:24.023    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:05:24.024     03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob
00:05:24.024     03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:24.024     03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:24.024     03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:24.024      03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:24.024      03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:24.024      03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:24.024      03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH
00:05:24.024      03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:24.024    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0
00:05:24.024    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:05:24.024    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:05:24.024    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:24.024    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:24.024    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:24.024    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:05:24.024  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:05:24.024    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:05:24.024    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:05:24.024    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0
00:05:24.024   03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64
00:05:24.024   03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096
00:05:24.024   03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit
00:05:24.024   03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:05:24.024   03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:05:24.024   03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs
00:05:24.024   03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no
00:05:24.024   03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns
00:05:24.024   03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:05:24.024   03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:05:24.024    03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:05:24.024   03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:05:24.024   03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:05:24.024   03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable
00:05:24.024   03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:26.565   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:05:26.565   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=()
00:05:26.565   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs
00:05:26.565   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=()
00:05:26.565   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:05:26.565   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=()
00:05:26.565   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers
00:05:26.565   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=()
00:05:26.565   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs
00:05:26.565   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=()
00:05:26.565   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810
00:05:26.565   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=()
00:05:26.565   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722
00:05:26.565   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=()
00:05:26.565   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx
00:05:26.565   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:05:26.565   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:05:26.565   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:05:26.565   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:05:26.565   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:05:26.565   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:05:26.565   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:05:26.565   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:05:26.565   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:05:26.565   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:05:26.565   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:05:26.565   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:05:26.565   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:05:26.565   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:05:26.565   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:05:26.565   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:05:26.565   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:05:26.565   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:05:26.565   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:05:26.565   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:05:26.565  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:05:26.565   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:05:26.565   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:05:26.566  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]]
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:05:26.566  Found net devices under 0000:0a:00.0: cvl_0_0
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]]
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:05:26.566  Found net devices under 0000:0a:00.1: cvl_0_1
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:05:26.566  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:05:26.566  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms
00:05:26.566  
00:05:26.566  --- 10.0.0.2 ping statistics ---
00:05:26.566  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:26.566  rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:05:26.566  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:05:26.566  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms
00:05:26.566  
00:05:26.566  --- 10.0.0.1 ping statistics ---
00:05:26.566  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:26.566  rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=116354
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 116354
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 116354 ']'
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:26.566  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:26.566   03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:26.566  [2024-12-09 03:55:54.840449] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:05:26.566  [2024-12-09 03:55:54.840527] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:05:26.566  [2024-12-09 03:55:54.918125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:26.566  [2024-12-09 03:55:54.979003] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:05:26.566  [2024-12-09 03:55:54.979088] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:05:26.566  [2024-12-09 03:55:54.979103] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:05:26.566  [2024-12-09 03:55:54.979114] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:05:26.566  [2024-12-09 03:55:54.979124] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:05:26.566  [2024-12-09 03:55:54.980694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:26.566  [2024-12-09 03:55:54.980761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:26.566  [2024-12-09 03:55:54.980765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:26.566   03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:26.566   03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0
00:05:26.566   03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:05:26.566   03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:26.566   03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:26.566   03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:05:26.567   03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256
00:05:26.567   03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:26.567   03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:26.567  [2024-12-09 03:55:55.133781] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:26.567   03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:26.567   03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0
00:05:26.567   03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:26.567   03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:26.824  Malloc0
00:05:26.824   03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:26.824   03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:05:26.824   03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:26.824   03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:26.824  Delay0
00:05:26.824   03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:26.824   03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:05:26.824   03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:26.824   03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:26.824   03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:26.824   03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
00:05:26.824   03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:26.824   03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:26.824   03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:26.824   03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:05:26.824   03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:26.824   03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:26.824  [2024-12-09 03:55:55.211573] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:05:26.824   03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:26.824   03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:05:26.824   03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:26.824   03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:26.824   03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:26.824   03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
00:05:26.824  [2024-12-09 03:55:55.367412] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:05:29.353  Initializing NVMe Controllers
00:05:29.353  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:05:29.353  controller IO queue size 128 less than required
00:05:29.353  Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver.
00:05:29.353  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0
00:05:29.353  Initialization complete. Launching workers.
00:05:29.353  NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29233
00:05:29.353  CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29294, failed to submit 62
00:05:29.353  	 success 29237, unsuccessful 57, failed 0
00:05:29.353   03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:05:29.353   03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:29.353   03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:29.353   03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:29.353   03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:05:29.353   03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini
00:05:29.353   03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup
00:05:29.353   03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync
00:05:29.353   03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:05:29.353   03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e
00:05:29.353   03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20}
00:05:29.353   03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:05:29.353  rmmod nvme_tcp
00:05:29.353  rmmod nvme_fabrics
00:05:29.353  rmmod nvme_keyring
00:05:29.353   03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:05:29.353   03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e
00:05:29.353   03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0
00:05:29.353   03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 116354 ']'
00:05:29.353   03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 116354
00:05:29.353   03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 116354 ']'
00:05:29.353   03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 116354
00:05:29.353    03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname
00:05:29.353   03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:29.353    03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116354
00:05:29.353   03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:05:29.353   03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:05:29.353   03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116354'
00:05:29.353  killing process with pid 116354
00:05:29.353   03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 116354
00:05:29.353   03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 116354
00:05:29.353   03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:05:29.353   03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:05:29.353   03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:05:29.353   03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr
00:05:29.354   03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save
00:05:29.354   03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:05:29.354   03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore
00:05:29.354   03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:05:29.354   03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns
00:05:29.354   03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:05:29.354   03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:05:29.354    03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:05:31.889   03:55:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:05:31.889  
00:05:31.889  real	0m7.567s
00:05:31.889  user	0m11.034s
00:05:31.889  sys	0m2.536s
00:05:31.889   03:55:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:31.889   03:55:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:31.889  ************************************
00:05:31.889  END TEST nvmf_abort
00:05:31.889  ************************************
00:05:31.889   03:55:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp
00:05:31.889   03:55:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:05:31.889   03:55:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:31.889   03:55:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:05:31.889  ************************************
00:05:31.889  START TEST nvmf_ns_hotplug_stress
00:05:31.889  ************************************
00:05:31.889   03:55:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp
00:05:31.889  * Looking for test storage...
00:05:31.889  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:05:31.889    03:55:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:31.889     03:55:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version
00:05:31.890     03:55:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-:
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-:
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<'
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:31.890     03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1
00:05:31.890     03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1
00:05:31.890     03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:31.890     03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1
00:05:31.890     03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2
00:05:31.890     03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2
00:05:31.890     03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:31.890     03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:31.890  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:31.890  		--rc genhtml_branch_coverage=1
00:05:31.890  		--rc genhtml_function_coverage=1
00:05:31.890  		--rc genhtml_legend=1
00:05:31.890  		--rc geninfo_all_blocks=1
00:05:31.890  		--rc geninfo_unexecuted_blocks=1
00:05:31.890  		
00:05:31.890  		'
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:31.890  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:31.890  		--rc genhtml_branch_coverage=1
00:05:31.890  		--rc genhtml_function_coverage=1
00:05:31.890  		--rc genhtml_legend=1
00:05:31.890  		--rc geninfo_all_blocks=1
00:05:31.890  		--rc geninfo_unexecuted_blocks=1
00:05:31.890  		
00:05:31.890  		'
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:31.890  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:31.890  		--rc genhtml_branch_coverage=1
00:05:31.890  		--rc genhtml_function_coverage=1
00:05:31.890  		--rc genhtml_legend=1
00:05:31.890  		--rc geninfo_all_blocks=1
00:05:31.890  		--rc geninfo_unexecuted_blocks=1
00:05:31.890  		
00:05:31.890  		'
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:31.890  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:31.890  		--rc genhtml_branch_coverage=1
00:05:31.890  		--rc genhtml_function_coverage=1
00:05:31.890  		--rc genhtml_legend=1
00:05:31.890  		--rc geninfo_all_blocks=1
00:05:31.890  		--rc geninfo_unexecuted_blocks=1
00:05:31.890  		
00:05:31.890  		'
00:05:31.890   03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:05:31.890     03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:31.890     03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:05:31.890     03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob
00:05:31.890     03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:31.890     03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:31.890     03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:31.890      03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:31.890      03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:31.890      03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:31.890      03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH
00:05:31.890      03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:05:31.890  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:05:31.890    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0
00:05:31.891   03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:05:31.891   03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit
00:05:31.891   03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:05:31.891   03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:05:31.891   03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs
00:05:31.891   03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no
00:05:31.891   03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns
00:05:31.891   03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:05:31.891   03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:05:31.891    03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:05:31.891   03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:05:31.891   03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:05:31.891   03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable
00:05:31.891   03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=()
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=()
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=()
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=()
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=()
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=()
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=()
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:05:33.797  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:05:33.797  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]]
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:05:33.797  Found net devices under 0000:0a:00.0: cvl_0_0
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]]
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:05:33.797  Found net devices under 0000:0a:00.1: cvl_0_1
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:05:33.797   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:05:34.056   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:05:34.056   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:05:34.056   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:05:34.056   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:05:34.056  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:05:34.056  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms
00:05:34.056  
00:05:34.056  --- 10.0.0.2 ping statistics ---
00:05:34.056  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:34.056  rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms
00:05:34.056   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:05:34.056  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:05:34.056  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms
00:05:34.056  
00:05:34.056  --- 10.0.0.1 ping statistics ---
00:05:34.056  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:34.056  rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms
00:05:34.056   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:05:34.056   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0
00:05:34.056   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:05:34.056   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:05:34.056   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:05:34.056   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:05:34.056   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:05:34.056   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:05:34.056   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:05:34.056   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:05:34.056   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:05:34.056   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:34.056   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:05:34.056   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=118720
00:05:34.056   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:05:34.056   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 118720
00:05:34.056   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 118720 ']'
00:05:34.056   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:34.056   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:34.056   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:34.056  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:34.056   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:34.056   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:05:34.056  [2024-12-09 03:56:02.479678] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:05:34.056  [2024-12-09 03:56:02.479771] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:05:34.056  [2024-12-09 03:56:02.549411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:34.056  [2024-12-09 03:56:02.602577] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:05:34.056  [2024-12-09 03:56:02.602635] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:05:34.056  [2024-12-09 03:56:02.602658] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:05:34.056  [2024-12-09 03:56:02.602668] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:05:34.056  [2024-12-09 03:56:02.602678] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:05:34.056  [2024-12-09 03:56:02.604143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:34.056  [2024-12-09 03:56:02.604251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:34.056  [2024-12-09 03:56:02.604254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:34.314   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:34.314   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0
00:05:34.314   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:05:34.314   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:34.314   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:05:34.314   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:05:34.314   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:05:34.314   03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:05:34.572  [2024-12-09 03:56:02.997570] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:34.572   03:56:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:05:34.830   03:56:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:05:35.087  [2024-12-09 03:56:03.532403] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:05:35.087   03:56:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:05:35.345   03:56:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:05:35.603  Malloc0
00:05:35.603   03:56:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:05:35.862  Delay0
00:05:35.862   03:56:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:36.120   03:56:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:05:36.378  NULL1
00:05:36.378   03:56:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:05:36.943   03:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=119139
00:05:36.943   03:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:05:36.943   03:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139
00:05:36.943   03:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:36.943   03:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:37.199   03:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:05:37.199   03:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:05:37.456  true
00:05:37.456   03:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139
00:05:37.456   03:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:38.021   03:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:38.021   03:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:05:38.021   03:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:05:38.278  true
00:05:38.278   03:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139
00:05:38.278   03:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:38.534   03:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:38.790   03:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:05:38.790   03:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:05:39.047  true
00:05:39.304   03:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139
00:05:39.304   03:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:40.258  Read completed with error (sct=0, sc=11)
00:05:40.258   03:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:40.515   03:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:05:40.515   03:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:05:40.773  true
00:05:40.773   03:56:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139
00:05:40.773   03:56:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:41.029   03:56:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:41.285   03:56:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:05:41.285   03:56:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:05:41.542  true
00:05:41.542   03:56:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139
00:05:41.542   03:56:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:41.799   03:56:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:42.057   03:56:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:05:42.057   03:56:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:05:42.315  true
00:05:42.315   03:56:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139
00:05:42.315   03:56:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:43.250   03:56:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:43.250  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:43.508   03:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:05:43.508   03:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:05:43.765  true
00:05:43.765   03:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139
00:05:43.766   03:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:44.023   03:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:44.281   03:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:05:44.281   03:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:05:44.540  true
00:05:44.540   03:56:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139
00:05:44.540   03:56:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:44.799   03:56:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:45.057   03:56:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:05:45.057   03:56:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:05:45.315  true
00:05:45.315   03:56:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139
00:05:45.315   03:56:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:46.708  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:46.708   03:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:46.708  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:46.708  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:46.966   03:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:05:46.966   03:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:05:46.966  true
00:05:47.223   03:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139
00:05:47.223   03:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:47.481   03:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:47.739   03:56:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:05:47.739   03:56:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:05:47.997  true
00:05:47.997   03:56:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139
00:05:47.997   03:56:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:48.931  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:48.931   03:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:48.931  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:48.931  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:48.931  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:48.931   03:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:05:48.931   03:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:05:49.188  true
00:05:49.188   03:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139
00:05:49.188   03:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:49.445   03:56:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:49.702   03:56:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:05:49.702   03:56:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:05:50.267  true
00:05:50.267   03:56:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139
00:05:50.267   03:56:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:51.199   03:56:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:51.199   03:56:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:05:51.199   03:56:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:05:51.457  true
00:05:51.457   03:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139
00:05:51.457   03:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:51.714   03:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:51.971   03:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:05:51.971   03:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:05:52.536  true
00:05:52.536   03:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139
00:05:52.536   03:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:52.536   03:56:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:52.793   03:56:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:05:52.793   03:56:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:05:53.051  true
00:05:53.051   03:56:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139
00:05:53.051   03:56:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:54.426   03:56:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:54.426   03:56:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:05:54.426   03:56:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:05:54.685  true
00:05:54.685   03:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139
00:05:54.685   03:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:54.944   03:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:55.202   03:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:05:55.202   03:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:05:55.460  true
00:05:55.460   03:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139
00:05:55.460   03:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:55.719   03:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:55.977   03:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:05:55.977   03:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:05:56.235  true
00:05:56.235   03:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139
00:05:56.235   03:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:57.609   03:56:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:57.609   03:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:05:57.609   03:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:05:57.867  true
00:05:57.867   03:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139
00:05:57.867   03:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:58.123   03:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:58.380   03:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:05:58.380   03:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:05:58.637  true
00:05:58.637   03:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139
00:05:58.637   03:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:58.894   03:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:59.151   03:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:05:59.152   03:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:05:59.409  true
00:05:59.409   03:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139
00:05:59.409   03:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:00.341   03:56:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:00.598   03:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:06:00.598   03:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:06:00.856  true
00:06:00.856   03:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139
00:06:00.856   03:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:01.114   03:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:01.371   03:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:06:01.371   03:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:06:01.637  true
00:06:01.895   03:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139
00:06:01.895   03:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:02.152   03:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:02.409   03:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:06:02.409   03:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:06:02.667  true
00:06:02.667   03:56:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139
00:06:02.667   03:56:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:03.599  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:03.600   03:56:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:03.858   03:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:06:03.858   03:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:06:04.116  true
00:06:04.116   03:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139
00:06:04.116   03:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:04.374   03:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:04.632   03:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:06:04.632   03:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:06:04.890  true
00:06:04.890   03:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139
00:06:04.890   03:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:05.825  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:05.825   03:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:05.825  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:06.083   03:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:06:06.083   03:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:06:06.342  true
00:06:06.342   03:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139
00:06:06.342   03:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:06.600   03:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:06.858   03:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:06:06.858   03:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:06:07.115  true
00:06:07.115   03:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139
00:06:07.115   03:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:07.115  Initializing NVMe Controllers
00:06:07.115  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:07.115  Controller IO queue size 128, less than required.
00:06:07.115  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:07.115  Controller IO queue size 128, less than required.
00:06:07.115  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:07.115  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:06:07.115  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:07.115  Initialization complete. Launching workers.
00:06:07.116  ========================================================
00:06:07.116                                                                                                               Latency(us)
00:06:07.116  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:06:07.116  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:     433.29       0.21  109458.96    3131.70 1012564.34
00:06:07.116  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core  0:    7844.77       3.83   16268.03    2950.24  360041.59
00:06:07.116  ========================================================
00:06:07.116  Total                                                                    :    8278.06       4.04   21145.86    2950.24 1012564.34
00:06:07.116  
00:06:07.373   03:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:07.631   03:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:06:07.631   03:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:06:07.889  true
00:06:07.889   03:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139
00:06:07.889  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (119139) - No such process
00:06:07.889   03:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 119139
00:06:07.889   03:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:08.147   03:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:08.405   03:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:06:08.405   03:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:06:08.405   03:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:06:08.405   03:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:08.405   03:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:06:08.663  null0
00:06:08.663   03:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:08.663   03:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:08.663   03:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:06:08.921  null1
00:06:08.921   03:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:08.921   03:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:08.921   03:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:06:09.178  null2
00:06:09.178   03:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:09.178   03:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:09.178   03:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:06:09.436  null3
00:06:09.436   03:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:09.436   03:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:09.436   03:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:06:09.693  null4
00:06:09.693   03:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:09.693   03:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:09.693   03:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:06:10.260  null5
00:06:10.260   03:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:10.260   03:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:10.260   03:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:06:10.260  null6
00:06:10.517   03:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:10.518   03:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:10.518   03:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:06:10.776  null7
00:06:10.776   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:10.776   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:10.776   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:06:10.776   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:10.776   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:10.776   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:06:10.776   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:10.776   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:10.776   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 123218 123219 123221 123223 123225 123227 123229 123231
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:10.777   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:11.035   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:11.035   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:11.036   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:11.036   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:11.036   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:11.036   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:11.036   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:11.036   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:11.295   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.295   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.295   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:11.295   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.295   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.295   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:11.295   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.295   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.295   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:11.295   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.295   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.295   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:11.295   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.295   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.295   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:11.295   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.295   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.295   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:11.295   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.295   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.295   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:11.295   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.295   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.295   03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:11.553   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:11.553   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:11.553   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:11.553   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:11.553   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:11.553   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:11.553   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:11.553   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:11.811   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.811   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.811   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:11.811   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.811   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.811   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:11.811   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.811   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.811   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:11.811   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.811   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.811   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:11.811   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.811   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.811   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:11.811   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.811   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.811   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:11.811   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.811   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.811   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:11.811   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.811   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.811   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:12.069   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:12.069   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:12.069   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:12.069   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:12.069   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:12.069   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:12.069   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:12.069   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:12.636   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:12.636   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:12.636   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:12.636   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:12.636   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:12.636   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:12.636   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:12.636   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:12.636   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:12.636   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:12.636   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:12.636   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:12.636   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:12.636   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:12.636   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:12.636   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:12.637   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:12.637   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:12.637   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:12.637   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:12.637   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:12.637   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:12.637   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:12.637   03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:12.896   03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:12.896   03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:12.896   03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:12.896   03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:12.896   03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:12.896   03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:12.896   03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:12.896   03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:13.154   03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:13.155   03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:13.155   03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:13.155   03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:13.155   03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:13.155   03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:13.155   03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:13.155   03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:13.155   03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:13.155   03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:13.155   03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:13.155   03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:13.155   03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:13.155   03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:13.155   03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:13.155   03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:13.155   03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:13.155   03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:13.155   03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:13.155   03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:13.155   03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:13.155   03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:13.155   03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:13.155   03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:13.413   03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:13.413   03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:13.413   03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:13.413   03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:13.413   03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:13.413   03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:13.413   03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:13.413   03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:13.672   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:13.672   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:13.672   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:13.672   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:13.672   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:13.672   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:13.672   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:13.672   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:13.672   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:13.672   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:13.672   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:13.672   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:13.672   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:13.672   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:13.672   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:13.672   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:13.672   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:13.672   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:13.672   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:13.672   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:13.672   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:13.672   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:13.672   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:13.672   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:13.931   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:13.931   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:13.931   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:13.931   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:13.931   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:13.931   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:13.931   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:13.931   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:14.190   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:14.190   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:14.190   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:14.190   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:14.190   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:14.190   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:14.190   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:14.190   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:14.190   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:14.190   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:14.190   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:14.190   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:14.190   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:14.190   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:14.190   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:14.190   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:14.191   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:14.191   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:14.191   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:14.191   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:14.191   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:14.191   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:14.191   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:14.191   03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:14.449   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:14.707   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:14.707   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:14.707   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:14.707   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:14.707   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:14.707   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:14.707   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:14.966   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:14.966   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:14.966   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:14.966   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:14.966   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:14.966   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:14.966   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:14.966   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:14.966   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:14.966   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:14.966   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:14.966   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:14.966   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:14.966   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:14.966   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:14.966   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:14.966   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:14.966   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:14.966   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:14.966   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:14.966   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:14.966   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:14.966   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:14.966   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:15.225   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:15.225   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:15.225   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:15.225   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:15.225   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:15.225   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:15.225   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:15.225   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:15.484   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:15.484   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:15.484   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:15.484   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:15.484   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:15.484   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:15.484   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:15.484   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:15.484   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:15.484   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:15.484   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:15.484   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:15.484   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:15.484   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:15.484   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:15.484   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:15.484   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:15.484   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:15.484   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:15.484   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:15.484   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:15.484   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:15.484   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:15.484   03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:15.743   03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:15.743   03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:15.743   03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:15.743   03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:15.743   03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:15.743   03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:15.743   03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:15.743   03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:16.001   03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:16.002   03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:16.002   03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:16.002   03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:16.002   03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:16.002   03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:16.002   03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:16.002   03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:16.002   03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:16.002   03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:16.002   03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:16.002   03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:16.002   03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:16.002   03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:16.002   03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:16.002   03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:16.002   03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:16.002   03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:16.002   03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:16.002   03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:16.002   03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:16.002   03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:16.002   03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:16.002   03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:16.568   03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:16.568   03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:16.568   03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:16.568   03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:16.568   03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:16.568   03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:16.568   03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:16.568   03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:16.826   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:16.826   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:16.826   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:16.826   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:16.826   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:16.826   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:16.826   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:16.826   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:16.826   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:16.826   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:16.826   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:16.826   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:16.826   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:16.826   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:16.826   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:16.826   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:16.826   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:06:16.826   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:06:16.826   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:16.826   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:06:16.826   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:16.826   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:06:16.826   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:16.826   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:06:16.826  rmmod nvme_tcp
00:06:16.826  rmmod nvme_fabrics
00:06:16.826  rmmod nvme_keyring
00:06:16.826   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:06:16.826   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:06:16.826   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:06:16.826   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 118720 ']'
00:06:16.826   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 118720
00:06:16.826   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 118720 ']'
00:06:16.826   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 118720
00:06:16.826    03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:06:16.826   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:16.826    03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 118720
00:06:16.826   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:06:16.826   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:06:16.826   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 118720'
00:06:16.826  killing process with pid 118720
00:06:16.826   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 118720
00:06:16.826   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 118720
00:06:17.084   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:06:17.084   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:06:17.084   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:06:17.084   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:06:17.084   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:06:17.084   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:06:17.084   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
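The `iptr` cleanup at common.sh@791 is a save/filter/restore round trip: dump the whole ruleset, drop every rule carrying the `SPDK_NVMF` tag, and load the result back. The filter itself is plain text processing, shown here as a testable function; the actual restore needs root.

```shell
# Sketch of the iptr cleanup step: remove only the rules the test suite
# tagged with an SPDK_NVMF comment, leaving all other firewall rules intact.
strip_spdk_rules() {
    grep -v SPDK_NVMF
}

# Real cleanup (requires root), exactly as the trace runs it:
#   iptables-save | strip_spdk_rules | iptables-restore
```

Filtering the saved ruleset by comment tag is what makes the teardown safe on a shared CI host: untagged rules survive the restore untouched.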
00:06:17.084   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:06:17.084   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:06:17.084   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:17.084   03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:06:17.084    03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:18.994   03:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:06:18.994  
00:06:18.994  real	0m47.648s
00:06:18.994  user	3m42.151s
00:06:18.994  sys	0m15.789s
00:06:18.994   03:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:18.994   03:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:06:18.994  ************************************
00:06:18.994  END TEST nvmf_ns_hotplug_stress
00:06:18.994  ************************************
00:06:19.254   03:56:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:06:19.254   03:56:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:19.254   03:56:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:19.254   03:56:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:06:19.254  ************************************
00:06:19.254  START TEST nvmf_delete_subsystem
00:06:19.254  ************************************
00:06:19.254   03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:06:19.254  * Looking for test storage...
00:06:19.254  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:06:19.254    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:19.254     03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version
00:06:19.254     03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:19.254    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:19.254    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:19.254    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:19.254    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:19.254    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:06:19.254    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:06:19.254    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:06:19.254    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:06:19.254    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:06:19.254    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:06:19.254    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:06:19.254    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:19.254    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:06:19.254    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:06:19.254    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:19.254    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:19.255     03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:06:19.255     03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:06:19.255     03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:19.255     03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:06:19.255    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:06:19.255     03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:06:19.255     03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:06:19.255     03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:19.255     03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:06:19.255    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:06:19.255    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:19.255    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:19.255    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
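The `lt 1.15 2` / `cmp_versions` walk above (scripts/common.sh@333..368) splits both versions on `.`, `-` and `:` and compares component by component, numerically. A minimal sketch of just the less-than case, assuming purely numeric components as in the trace:

```shell
# Sketch of the version comparison the trace performs: returns 0 (true)
# when $1 is a strictly lower version than $2.
lt() {
    local IFS=.-: v          # IFS drives the splitting done by read
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1                 # equal versions are not less-than
}
```

Missing components default to 0, which is why `lt 1.15 2` in the trace can compare a two-part version against a one-part one.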
00:06:19.255    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:19.255    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:06:19.255  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:19.255  		--rc genhtml_branch_coverage=1
00:06:19.255  		--rc genhtml_function_coverage=1
00:06:19.255  		--rc genhtml_legend=1
00:06:19.255  		--rc geninfo_all_blocks=1
00:06:19.255  		--rc geninfo_unexecuted_blocks=1
00:06:19.255  		
00:06:19.255  		'
00:06:19.255    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:06:19.255  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:19.255  		--rc genhtml_branch_coverage=1
00:06:19.255  		--rc genhtml_function_coverage=1
00:06:19.255  		--rc genhtml_legend=1
00:06:19.255  		--rc geninfo_all_blocks=1
00:06:19.255  		--rc geninfo_unexecuted_blocks=1
00:06:19.255  		
00:06:19.255  		'
00:06:19.255    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:06:19.255  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:19.255  		--rc genhtml_branch_coverage=1
00:06:19.255  		--rc genhtml_function_coverage=1
00:06:19.255  		--rc genhtml_legend=1
00:06:19.255  		--rc geninfo_all_blocks=1
00:06:19.255  		--rc geninfo_unexecuted_blocks=1
00:06:19.255  		
00:06:19.255  		'
00:06:19.255    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:06:19.255  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:19.255  		--rc genhtml_branch_coverage=1
00:06:19.255  		--rc genhtml_function_coverage=1
00:06:19.255  		--rc genhtml_legend=1
00:06:19.255  		--rc geninfo_all_blocks=1
00:06:19.255  		--rc geninfo_unexecuted_blocks=1
00:06:19.255  		
00:06:19.255  		'
00:06:19.255   03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:06:19.255     03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:06:19.255    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:19.255    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:19.255    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:19.255    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:19.255    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:19.255    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:19.255    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:19.255    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:19.255    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:19.255     03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:19.255    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:06:19.255    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:06:19.255    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:19.255    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:19.255    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:06:19.255    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:06:19.255    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:06:19.255     03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
00:06:19.255     03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:19.255     03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:19.255     03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:19.255      03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:19.255      03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:19.255      03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:19.255      03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH
00:06:19.256      03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
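The paths/export.sh lines above show the same `/opt/golangci`, `/opt/protoc` and `/opt/go` prefixes stacked many times over: each time the script is sourced it prepends them again. A common guard against this (a sketch, not SPDK's actual code) is to prepend a directory only when it is not already present:

```shell
# Idempotent PATH prepend: adding the same directory twice is a no-op.
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;            # already present, do nothing
        *) PATH="$1:$PATH" ;;
    esac
}
```

Wrapping `$PATH` in colons on both sides makes the substring match exact per component, so `/opt/go` cannot false-match `/opt/golangci`.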
00:06:19.256    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0
00:06:19.256    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:06:19.256    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:06:19.256    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:06:19.256    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:19.256    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:19.256    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:06:19.256  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:06:19.256    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:06:19.256    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:06:19.256    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0
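The trace records a genuine shell error at common.sh line 33: `'[' '' -eq 1 ']'` fails with `integer expression expected` because the variable under test expands to an empty string, and `[ ... -eq ... ]` demands integers on both sides. A defensive sketch of the fix (the flag name is a stand-in, not the actual variable in common.sh):

```shell
# Give the value a numeric default before the integer test, so an unset or
# empty flag is treated as 0 instead of crashing the [ -eq ] comparison.
check_flag() {
    local flag=${1:-0}          # empty or missing -> 0
    if [ "$flag" -eq 1 ]; then
        echo enabled
    else
        echo disabled
    fi
}
```

The same effect can be had inline with `[ "${FLAG:-0}" -eq 1 ]`; either way the error line in the log disappears without changing the script's behavior for set flags.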
00:06:19.256   03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit
00:06:19.256   03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:06:19.256   03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:06:19.256   03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs
00:06:19.256   03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no
00:06:19.256   03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns
00:06:19.256   03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:19.256   03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:06:19.256    03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:19.256   03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:06:19.256   03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:06:19.256   03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable
00:06:19.256   03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=()
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=()
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=()
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=()
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=()
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=()
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=()
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:06:21.790  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:06:21.790  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]]
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:06:21.790  Found net devices under 0000:0a:00.0: cvl_0_0
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]]
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:06:21.790  Found net devices under 0000:0a:00.1: cvl_0_1
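The "Found net devices under 0000:0a:00.x" lines come from the sysfs lookup at common.sh@411: the kernel exposes a PCI device's network interfaces as directory names under `/sys/bus/pci/devices/<addr>/net/`. A sketch, with the sysfs root made overridable so the lookup is testable (the `SYSFS` knob is an addition for illustration, not in the original):

```shell
# Print the kernel net device names backing a PCI address, e.g.
#   pci_net_devs 0000:0a:00.0   ->   cvl_0_0
pci_net_devs() {
    local sysfs=${SYSFS:-/sys} pci=$1 devs
    devs=("$sysfs/bus/pci/devices/$pci/net/"*)   # glob the sysfs node
    printf '%s\n' "${devs[@]##*/}"               # keep only the basenames
}
```

This is how the trace turns the two 0x159b (ice) ports into the `cvl_0_0` / `cvl_0_1` interface names used for the rest of the test.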
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:06:21.790   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:06:21.791   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:06:21.791   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:06:21.791   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:06:21.791   03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:06:21.791   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:06:21.791   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:06:21.791   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:06:21.791   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:06:21.791   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:06:21.791   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:06:21.791   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
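The `ipts` call above is the other half of the tag-and-sweep scheme: every rule the suite inserts carries an `SPDK_NVMF:` comment embedding the original arguments, which the later `iptables-save | grep -v SPDK_NVMF | iptables-restore` pass removes. A hedged sketch of the wrapper (the `IPTABLES` indirection is added here for testability; the real script calls `iptables` directly):

```shell
# Insert an iptables rule tagged with a comment naming its own arguments,
# so cleanup can identify exactly the rules this test added.
IPTABLES=${IPTABLES:-iptables}
ipts() {
    "$IPTABLES" "$@" -m comment --comment "SPDK_NVMF:$*"
}
```

With `IPTABLES=echo` the wrapper dry-runs, printing the rule it would add; with root and the default binary it reproduces the tagged rule seen at common.sh@790.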
00:06:21.791   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:06:21.791  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:06:21.791  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms
00:06:21.791  
00:06:21.791  --- 10.0.0.2 ping statistics ---
00:06:21.791  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:21.791  rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms
00:06:21.791   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:06:21.791  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:06:21.791  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms
00:06:21.791  
00:06:21.791  --- 10.0.0.1 ping statistics ---
00:06:21.791  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:21.791  rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms
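The sequence from common.sh@265 through @291 builds the test topology without veths: one port of the physical NIC pair moves into a fresh namespace as the target side, both sides get 10.0.0.x/24 addresses, and a ping in each direction validates the link before the target app starts. A condensed sketch (interface and namespace names mirror the trace; the `RUN` knob is added so it can dry-run without root):

```shell
# Set RUN=echo to print the commands instead of executing them
# (executing for real requires root and the physical interfaces).
RUN=${RUN:-}
setup_ns() {
    local ns=$1 tgt_if=$2 ini_if=$3
    $RUN ip netns add "$ns"
    $RUN ip link set "$tgt_if" netns "$ns"                      # target port into ns
    $RUN ip addr add 10.0.0.1/24 dev "$ini_if"                  # initiator side
    $RUN ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    $RUN ip link set "$ini_if" up
    $RUN ip netns exec "$ns" ip link set "$tgt_if" up
    $RUN ip netns exec "$ns" ip link set lo up
    $RUN ping -c 1 10.0.0.2                                     # initiator -> target
    $RUN ip netns exec "$ns" ping -c 1 10.0.0.1                 # target -> initiator
}
```

Isolating the target port in its own namespace is what lets a single host exercise real NIC-to-NIC TCP traffic: the kernel cannot short-circuit the connection over loopback.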
00:06:21.791   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:06:21.791   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0
00:06:21.791   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:06:21.791   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:06:21.791   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:06:21.791   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:06:21.791   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:06:21.791   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:06:21.791   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:06:21.791   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:06:21.791   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:06:21.791   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:21.791   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:21.791   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=126122
00:06:21.791   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:06:21.791   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 126122
00:06:21.791   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 126122 ']'
00:06:21.791   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:21.791   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:21.791   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:21.791  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:21.791   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:21.791   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:21.791  [2024-12-09 03:56:50.253178] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:06:21.791  [2024-12-09 03:56:50.253278] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:21.791  [2024-12-09 03:56:50.328524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:22.049  [2024-12-09 03:56:50.386572] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:06:22.049  [2024-12-09 03:56:50.386638] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:06:22.049  [2024-12-09 03:56:50.386661] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:22.049  [2024-12-09 03:56:50.386672] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:22.049  [2024-12-09 03:56:50.386682] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:06:22.049  [2024-12-09 03:56:50.388101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:22.049  [2024-12-09 03:56:50.388106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:22.049   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:22.049   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0
00:06:22.049   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:06:22.049   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:22.049   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:22.049   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:06:22.049   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:06:22.049   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:22.049   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:22.049  [2024-12-09 03:56:50.539434] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:22.049   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:22.049   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:06:22.049   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:22.049   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:22.049   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:22.049   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:06:22.049   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:22.049   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:22.049  [2024-12-09 03:56:50.555701] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:06:22.049   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:22.049   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:06:22.049   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:22.049   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:22.049  NULL1
00:06:22.049   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:22.049   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:06:22.049   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:22.049   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:22.049  Delay0
00:06:22.049   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:22.049   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:22.049   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:22.049   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:22.049   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:22.049   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=126149
00:06:22.049   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
00:06:22.049   03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
00:06:22.306  [2024-12-09 03:56:50.640480] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem.  This behavior is deprecated and will be removed in a future release.
00:06:24.203   03:56:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:06:24.203   03:56:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:24.203   03:56:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:24.203  Read completed with error (sct=0, sc=8)
00:06:24.203  Read completed with error (sct=0, sc=8)
00:06:24.203  starting I/O failed: -6
00:06:24.203  Write completed with error (sct=0, sc=8)
00:06:24.203  Read completed with error (sct=0, sc=8)
00:06:24.203  Read completed with error (sct=0, sc=8)
00:06:24.203  Read completed with error (sct=0, sc=8)
00:06:24.203  starting I/O failed: -6
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  starting I/O failed: -6
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  starting I/O failed: -6
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  starting I/O failed: -6
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  starting I/O failed: -6
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  starting I/O failed: -6
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  starting I/O failed: -6
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  starting I/O failed: -6
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  starting I/O failed: -6
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  starting I/O failed: -6
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  starting I/O failed: -6
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  starting I/O failed: -6
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  starting I/O failed: -6
00:06:24.204  starting I/O failed: -6
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  starting I/O failed: -6
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  starting I/O failed: -6
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  starting I/O failed: -6
00:06:24.204  starting I/O failed: -6
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  starting I/O failed: -6
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  [2024-12-09 03:56:52.763669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181f4a0 is same with the state(6) to be set
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  starting I/O failed: -6
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  starting I/O failed: -6
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  starting I/O failed: -6
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  starting I/O failed: -6
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  starting I/O failed: -6
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  starting I/O failed: -6
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  starting I/O failed: -6
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  starting I/O failed: -6
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  starting I/O failed: -6
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  starting I/O failed: -6
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  starting I/O failed: -6
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  starting I/O failed: -6
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  starting I/O failed: -6
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  starting I/O failed: -6
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  starting I/O failed: -6
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  starting I/O failed: -6
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.204  Write completed with error (sct=0, sc=8)
00:06:24.204  Read completed with error (sct=0, sc=8)
00:06:24.205  Write completed with error (sct=0, sc=8)
00:06:24.205  Read completed with error (sct=0, sc=8)
00:06:24.205  starting I/O failed: -6
00:06:24.205  Write completed with error (sct=0, sc=8)
00:06:24.205  Read completed with error (sct=0, sc=8)
00:06:24.205  Read completed with error (sct=0, sc=8)
00:06:24.205  starting I/O failed: -6
00:06:24.205  Write completed with error (sct=0, sc=8)
00:06:24.205  Read completed with error (sct=0, sc=8)
00:06:24.205  starting I/O failed: -6
00:06:24.205  Read completed with error (sct=0, sc=8)
00:06:24.205  Write completed with error (sct=0, sc=8)
00:06:24.205  starting I/O failed: -6
00:06:24.205  Read completed with error (sct=0, sc=8)
00:06:24.205  Read completed with error (sct=0, sc=8)
00:06:24.205  starting I/O failed: -6
00:06:24.205  Read completed with error (sct=0, sc=8)
00:06:24.205  Write completed with error (sct=0, sc=8)
00:06:24.205  starting I/O failed: -6
00:06:24.205  Write completed with error (sct=0, sc=8)
00:06:24.205  Read completed with error (sct=0, sc=8)
00:06:24.205  starting I/O failed: -6
00:06:24.205  Read completed with error (sct=0, sc=8)
00:06:24.205  Write completed with error (sct=0, sc=8)
00:06:24.205  starting I/O failed: -6
00:06:24.205  Read completed with error (sct=0, sc=8)
00:06:24.205  Write completed with error (sct=0, sc=8)
00:06:24.205  starting I/O failed: -6
00:06:24.205  Read completed with error (sct=0, sc=8)
00:06:24.205  Read completed with error (sct=0, sc=8)
00:06:24.205  starting I/O failed: -6
00:06:24.205  Write completed with error (sct=0, sc=8)
00:06:24.205  Read completed with error (sct=0, sc=8)
00:06:24.205  starting I/O failed: -6
00:06:24.205  Read completed with error (sct=0, sc=8)
00:06:24.205  Write completed with error (sct=0, sc=8)
00:06:24.205  starting I/O failed: -6
00:06:24.205  Read completed with error (sct=0, sc=8)
00:06:24.205  Read completed with error (sct=0, sc=8)
00:06:24.205  starting I/O failed: -6
00:06:24.205  Read completed with error (sct=0, sc=8)
00:06:24.205  starting I/O failed: -6
00:06:24.205  starting I/O failed: -6
00:06:24.205  starting I/O failed: -6
00:06:24.205  starting I/O failed: -6
00:06:24.205  starting I/O failed: -6
00:06:24.205  starting I/O failed: -6
00:06:24.205  starting I/O failed: -6
00:06:24.205  starting I/O failed: -6
00:06:24.205  starting I/O failed: -6
00:06:24.205  starting I/O failed: -6
00:06:24.205  starting I/O failed: -6
00:06:25.577  [2024-12-09 03:56:53.735876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18209b0 is same with the state(6) to be set
00:06:25.577  Write completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.577  Write completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.577  Write completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.577  Write completed with error (sct=0, sc=8)
00:06:25.577  Write completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.577  Write completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.577  Write completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.577  Write completed with error (sct=0, sc=8)
00:06:25.577  [2024-12-09 03:56:53.764598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181f2c0 is same with the state(6) to be set
00:06:25.577  Write completed with error (sct=0, sc=8)
00:06:25.577  Write completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.577  Write completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.577  Write completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.577  Write completed with error (sct=0, sc=8)
00:06:25.577  Write completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.577  Write completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.577  Write completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.577  Write completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.577  Write completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.577  Write completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.577  Write completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.577  Write completed with error (sct=0, sc=8)
00:06:25.577  Write completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.577  Write completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.577  Read completed with error (sct=0, sc=8)
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Write completed with error (sct=0, sc=8)
00:06:25.578  [2024-12-09 03:56:53.764887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f99c000d7e0 is same with the state(6) to be set
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Write completed with error (sct=0, sc=8)
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Write completed with error (sct=0, sc=8)
00:06:25.578  Write completed with error (sct=0, sc=8)
00:06:25.578  Write completed with error (sct=0, sc=8)
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Write completed with error (sct=0, sc=8)
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Write completed with error (sct=0, sc=8)
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Write completed with error (sct=0, sc=8)
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Write completed with error (sct=0, sc=8)
00:06:25.578  Write completed with error (sct=0, sc=8)
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Write completed with error (sct=0, sc=8)
00:06:25.578  [2024-12-09 03:56:53.765960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181f680 is same with the state(6) to be set
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Write completed with error (sct=0, sc=8)
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Write completed with error (sct=0, sc=8)
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Write completed with error (sct=0, sc=8)
00:06:25.578  Write completed with error (sct=0, sc=8)
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Write completed with error (sct=0, sc=8)
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Write completed with error (sct=0, sc=8)
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Write completed with error (sct=0, sc=8)
00:06:25.578  Write completed with error (sct=0, sc=8)
00:06:25.578  Write completed with error (sct=0, sc=8)
00:06:25.578  Write completed with error (sct=0, sc=8)
00:06:25.578  Write completed with error (sct=0, sc=8)
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Write completed with error (sct=0, sc=8)
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Write completed with error (sct=0, sc=8)
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Write completed with error (sct=0, sc=8)
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  Read completed with error (sct=0, sc=8)
00:06:25.578  [2024-12-09 03:56:53.766203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f99c000d020 is same with the state(6) to be set
00:06:25.578  Initializing NVMe Controllers
00:06:25.578  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:25.578  Controller IO queue size 128, less than required.
00:06:25.578  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:25.578  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:25.578  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:25.578  Initialization complete. Launching workers.
00:06:25.578  ========================================================
00:06:25.578                                                                                                               Latency(us)
00:06:25.578  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:06:25.578  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:     171.66       0.08  893018.98     842.21 1013070.99
00:06:25.578  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:     182.58       0.09  918423.41     688.66 1013431.40
00:06:25.578  ========================================================
00:06:25.578  Total                                                                    :     354.24       0.17  906112.58     688.66 1013431.40
00:06:25.578  
00:06:25.578  [2024-12-09 03:56:53.767378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18209b0 (9): Bad file descriptor
00:06:25.578  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:06:25.578   03:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:25.578   03:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:06:25.578   03:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 126149
00:06:25.578   03:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:06:25.836   03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:06:25.836   03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 126149
00:06:25.836  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (126149) - No such process
00:06:25.836   03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 126149
00:06:25.836   03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:06:25.836   03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 126149
00:06:25.836   03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:06:25.836   03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:25.836    03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:06:25.836   03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:25.836   03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 126149
00:06:25.836   03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:06:25.836   03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:25.836   03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:25.836   03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:25.836   03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:06:25.836   03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:25.836   03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:25.836   03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:25.836   03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:06:25.836   03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:25.836   03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:25.836  [2024-12-09 03:56:54.290655] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:06:25.836   03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:25.836   03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:25.836   03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:25.836   03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:25.836   03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:25.836   03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=126672
00:06:25.836   03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:06:25.836   03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:06:25.836   03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 126672
00:06:25.836   03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:25.836  [2024-12-09 03:56:54.362780] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem.  This behavior is deprecated and will be removed in a future release.
00:06:26.402   03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:26.402   03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 126672
00:06:26.402   03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:26.980   03:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:26.980   03:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 126672
00:06:26.980   03:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:27.237   03:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:27.237   03:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 126672
00:06:27.238   03:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:27.802   03:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:27.802   03:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 126672
00:06:27.802   03:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:28.366   03:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:28.366   03:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 126672
00:06:28.366   03:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:28.930   03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:28.930   03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 126672
00:06:28.930   03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:29.188  Initializing NVMe Controllers
00:06:29.188  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:29.188  Controller IO queue size 128, less than required.
00:06:29.188  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:29.188  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:29.188  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:29.188  Initialization complete. Launching workers.
00:06:29.188  ========================================================
00:06:29.188                                                                                                               Latency(us)
00:06:29.188  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:06:29.188  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:     128.00       0.06 1003487.61 1000207.82 1012338.90
00:06:29.188  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:     128.00       0.06 1004248.73 1000166.81 1041578.45
00:06:29.188  ========================================================
00:06:29.188  Total                                                                    :     256.00       0.12 1003868.17 1000166.81 1041578.45
00:06:29.188  
00:06:29.445   03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:29.445   03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 126672
00:06:29.445  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (126672) - No such process
00:06:29.445   03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 126672
00:06:29.445   03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:06:29.445   03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:06:29.445   03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:29.445   03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:06:29.445   03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:29.445   03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:06:29.445   03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:29.445   03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:06:29.445  rmmod nvme_tcp
00:06:29.445  rmmod nvme_fabrics
00:06:29.445  rmmod nvme_keyring
00:06:29.445   03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:06:29.445   03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:06:29.445   03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:06:29.445   03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 126122 ']'
00:06:29.445   03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 126122
00:06:29.445   03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 126122 ']'
00:06:29.445   03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 126122
00:06:29.445    03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:06:29.445   03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:29.445    03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 126122
00:06:29.445   03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:29.445   03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:29.445   03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 126122'
00:06:29.445  killing process with pid 126122
00:06:29.445   03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 126122
00:06:29.445   03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 126122
00:06:29.705   03:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:06:29.705   03:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:06:29.705   03:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:06:29.705   03:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr
00:06:29.705   03:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save
00:06:29.705   03:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:06:29.705   03:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore
00:06:29.705   03:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:06:29.705   03:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns
00:06:29.705   03:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:29.705   03:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:06:29.705    03:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:31.634   03:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:06:31.634  
00:06:31.634  real	0m12.560s
00:06:31.634  user	0m27.932s
00:06:31.634  sys	0m3.020s
00:06:31.634   03:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:31.634   03:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:31.634  ************************************
00:06:31.634  END TEST nvmf_delete_subsystem
00:06:31.634  ************************************
00:06:31.634   03:57:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:06:31.634   03:57:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:31.634   03:57:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:31.634   03:57:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:06:31.892  ************************************
00:06:31.892  START TEST nvmf_host_management
00:06:31.892  ************************************
00:06:31.892   03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:06:31.892  * Looking for test storage...
00:06:31.892  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:31.892     03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version
00:06:31.892     03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-:
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-:
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<'
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:31.892     03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1
00:06:31.892     03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1
00:06:31.892     03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:31.892     03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1
00:06:31.892     03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2
00:06:31.892     03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2
00:06:31.892     03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:31.892     03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:06:31.892  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:31.892  		--rc genhtml_branch_coverage=1
00:06:31.892  		--rc genhtml_function_coverage=1
00:06:31.892  		--rc genhtml_legend=1
00:06:31.892  		--rc geninfo_all_blocks=1
00:06:31.892  		--rc geninfo_unexecuted_blocks=1
00:06:31.892  		
00:06:31.892  		'
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:06:31.892  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:31.892  		--rc genhtml_branch_coverage=1
00:06:31.892  		--rc genhtml_function_coverage=1
00:06:31.892  		--rc genhtml_legend=1
00:06:31.892  		--rc geninfo_all_blocks=1
00:06:31.892  		--rc geninfo_unexecuted_blocks=1
00:06:31.892  		
00:06:31.892  		'
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:06:31.892  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:31.892  		--rc genhtml_branch_coverage=1
00:06:31.892  		--rc genhtml_function_coverage=1
00:06:31.892  		--rc genhtml_legend=1
00:06:31.892  		--rc geninfo_all_blocks=1
00:06:31.892  		--rc geninfo_unexecuted_blocks=1
00:06:31.892  		
00:06:31.892  		'
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:06:31.892  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:31.892  		--rc genhtml_branch_coverage=1
00:06:31.892  		--rc genhtml_function_coverage=1
00:06:31.892  		--rc genhtml_legend=1
00:06:31.892  		--rc geninfo_all_blocks=1
00:06:31.892  		--rc geninfo_unexecuted_blocks=1
00:06:31.892  		
00:06:31.892  		'
00:06:31.892   03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:06:31.892     03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:31.892     03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:06:31.892     03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob
00:06:31.892     03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:31.892     03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:31.892     03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:31.892      03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:31.892      03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:31.892      03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:31.892      03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:06:31.892      03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:06:31.892    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:06:31.893    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:31.893    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:31.893    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:06:31.893  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:06:31.893    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:06:31.893    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:06:31.893    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0
00:06:31.893   03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64
00:06:31.893   03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:06:31.893   03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit
00:06:31.893   03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:06:31.893   03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:06:31.893   03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs
00:06:31.893   03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no
00:06:31.893   03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns
00:06:31.893   03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:31.893   03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:06:31.893    03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:31.893   03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:06:31.893   03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:06:31.893   03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable
00:06:31.893   03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=()
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=()
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=()
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=()
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=()
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=()
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=()
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:06:34.436  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:06:34.436  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]]
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:06:34.436  Found net devices under 0000:0a:00.0: cvl_0_0
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]]
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:06:34.436  Found net devices under 0000:0a:00.1: cvl_0_1
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
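The per-PCI net-device discovery traced above (nvmf/common.sh@410-429) boils down to globbing `/sys/bus/pci/devices/<pci>/net/` and stripping the path prefix to get interface names. A minimal standalone sketch of that pattern, using a temporary directory in place of the real sysfs tree so it runs without hardware (the PCI address and `cvl_0_0` name are taken from the trace; the mock layout is an assumption):

```shell
#!/usr/bin/env bash
# Sketch of the net-device glob from nvmf/common.sh; the sysfs layout is
# mocked with a temp dir so this runs on a machine without the real NICs.
tmp=$(mktemp -d)
pci=0000:0a:00.0
mkdir -p "$tmp/$pci/net/cvl_0_0"

pci_net_devs=("$tmp/$pci/net/"*)          # full paths to net interfaces under this PCI device
pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the trailing interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"
rm -rf "$tmp"
```

The `${array[@]##*/}` expansion is the same prefix-stripping step logged at common.sh@427 before the "Found net devices" message.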
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:06:34.436   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:06:34.437  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:06:34.437  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms
00:06:34.437  
00:06:34.437  --- 10.0.0.2 ping statistics ---
00:06:34.437  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:34.437  rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:06:34.437  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:06:34.437  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms
00:06:34.437  
00:06:34.437  --- 10.0.0.1 ping statistics ---
00:06:34.437  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:34.437  rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=129140
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 129140
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 129140 ']'
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:34.437  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:34.437  [2024-12-09 03:57:02.656055] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:06:34.437  [2024-12-09 03:57:02.656139] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:34.437  [2024-12-09 03:57:02.729189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:34.437  [2024-12-09 03:57:02.785226] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:06:34.437  [2024-12-09 03:57:02.785299] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:06:34.437  [2024-12-09 03:57:02.785328] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:34.437  [2024-12-09 03:57:02.785339] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:34.437  [2024-12-09 03:57:02.785348] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:06:34.437  [2024-12-09 03:57:02.786805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:34.437  [2024-12-09 03:57:02.786867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:06:34.437  [2024-12-09 03:57:02.786934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:06:34.437  [2024-12-09 03:57:02.786937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:34.437  [2024-12-09 03:57:02.923429] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:34.437   03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:34.437  Malloc0
00:06:34.437  [2024-12-09 03:57:02.995381] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:06:34.437   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:34.437   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems
00:06:34.437   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:34.437   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:34.696   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=129187
00:06:34.696   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 129187 /var/tmp/bdevperf.sock
00:06:34.696   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 129187 ']'
00:06:34.696   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:06:34.696    03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0
00:06:34.696   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:06:34.696   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:34.696   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:06:34.696  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:06:34.696    03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:06:34.696   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:34.696    03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:06:34.696   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:34.696    03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:06:34.696    03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:06:34.696  {
00:06:34.696    "params": {
00:06:34.696      "name": "Nvme$subsystem",
00:06:34.696      "trtype": "$TEST_TRANSPORT",
00:06:34.696      "traddr": "$NVMF_FIRST_TARGET_IP",
00:06:34.696      "adrfam": "ipv4",
00:06:34.696      "trsvcid": "$NVMF_PORT",
00:06:34.696      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:06:34.696      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:06:34.696      "hdgst": ${hdgst:-false},
00:06:34.696      "ddgst": ${ddgst:-false}
00:06:34.696    },
00:06:34.696    "method": "bdev_nvme_attach_controller"
00:06:34.696  }
00:06:34.696  EOF
00:06:34.696  )")
00:06:34.696     03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:06:34.696    03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:06:34.696     03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:06:34.696     03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:06:34.696    "params": {
00:06:34.696      "name": "Nvme0",
00:06:34.696      "trtype": "tcp",
00:06:34.696      "traddr": "10.0.0.2",
00:06:34.696      "adrfam": "ipv4",
00:06:34.696      "trsvcid": "4420",
00:06:34.696      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:06:34.696      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:06:34.696      "hdgst": false,
00:06:34.696      "ddgst": false
00:06:34.696    },
00:06:34.696    "method": "bdev_nvme_attach_controller"
00:06:34.696  }'
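gen_nvmf_target_json above assembles the `--json` payload that bdevperf receives on /dev/fd/63. A quick standalone way to sanity-check such a payload is to pipe it through jq, as the helper itself does at common.sh@584 (this sketch assumes jq is installed; the payload is copied from the trace):

```shell
# Sanity-check the generated bdev_nvme_attach_controller payload with jq.
config='{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}'
echo "$config" | jq -r '.method'          # the RPC method bdevperf will invoke
echo "$config" | jq -r '.params.traddr'   # target IP, matches NVMF_FIRST_TARGET_IP
```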
00:06:34.696  [2024-12-09 03:57:03.080015] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:06:34.696  [2024-12-09 03:57:03.080107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129187 ]
00:06:34.696  [2024-12-09 03:57:03.150705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:34.696  [2024-12-09 03:57:03.210965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:34.955  Running I/O for 10 seconds...
00:06:34.955   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:34.955   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0
00:06:34.955   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:06:34.955   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:34.955   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:34.955   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:34.955   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:06:34.956   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:06:34.956   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:06:34.956   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:06:34.956   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1
00:06:34.956   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i
00:06:34.956   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 ))
00:06:34.956   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:06:34.956    03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:06:34.956    03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:06:34.956    03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:34.956    03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:34.956    03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:34.956   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67
00:06:34.956   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']'
00:06:34.956   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25
00:06:35.214   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- ))
00:06:35.214   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:06:35.214    03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:06:35.215    03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:06:35.215    03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:35.215    03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:35.215    03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:35.475   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579
00:06:35.475   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']'
00:06:35.475   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:06:35.475   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break
00:06:35.475   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:06:35.475   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:06:35.475   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:35.475   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:35.475  [2024-12-09 03:57:03.802055] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb965b0 is same with the state(6) to be set
00:06:35.475  [2024-12-09 03:57:03.802142] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb965b0 is same with the state(6) to be set
00:06:35.475  [2024-12-09 03:57:03.802159] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb965b0 is same with the state(6) to be set
00:06:35.475  [2024-12-09 03:57:03.802172] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb965b0 is same with the state(6) to be set
00:06:35.475  [2024-12-09 03:57:03.802184] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb965b0 is same with the state(6) to be set
00:06:35.475  [2024-12-09 03:57:03.802197] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb965b0 is same with the state(6) to be set
00:06:35.475  [2024-12-09 03:57:03.802210] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb965b0 is same with the state(6) to be set
00:06:35.475  [2024-12-09 03:57:03.802222] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb965b0 is same with the state(6) to be set
00:06:35.475   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:35.475   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:06:35.475   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:35.475   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:35.475  [2024-12-09 03:57:03.807241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:06:35.475  [2024-12-09 03:57:03.807291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.475  [2024-12-09 03:57:03.807311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:06:35.475  [2024-12-09 03:57:03.807334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.475  [2024-12-09 03:57:03.807349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:06:35.475  [2024-12-09 03:57:03.807362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.476  [2024-12-09 03:57:03.807376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:06:35.476  [2024-12-09 03:57:03.807389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.476  [2024-12-09 03:57:03.807402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0e660 is same with the state(6) to be set
00:06:35.476  [2024-12-09 03:57:03.807745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.476  [2024-12-09 03:57:03.807773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.476  [2024-12-09 03:57:03.807815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.476  [2024-12-09 03:57:03.807831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.476  [2024-12-09 03:57:03.807848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.476  [2024-12-09 03:57:03.807862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.476  [2024-12-09 03:57:03.807877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.476  [2024-12-09 03:57:03.807891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.476  [2024-12-09 03:57:03.807906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.476  [2024-12-09 03:57:03.807920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.476  [2024-12-09 03:57:03.807935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.476  [2024-12-09 03:57:03.807949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.476  [2024-12-09 03:57:03.807964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.476  [2024-12-09 03:57:03.807978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.476  [2024-12-09 03:57:03.807993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.476  [2024-12-09 03:57:03.808013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.476  [2024-12-09 03:57:03.808030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.476  [2024-12-09 03:57:03.808045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.476  [2024-12-09 03:57:03.808060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.476  [2024-12-09 03:57:03.808074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.476  [2024-12-09 03:57:03.808088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.476  [2024-12-09 03:57:03.808102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.476  [2024-12-09 03:57:03.808117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.476  [2024-12-09 03:57:03.808130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.476  [2024-12-09 03:57:03.808145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.476  [2024-12-09 03:57:03.808159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.476  [2024-12-09 03:57:03.808173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.476  [2024-12-09 03:57:03.808187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.476  [2024-12-09 03:57:03.808202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.476  [2024-12-09 03:57:03.808215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.476  [2024-12-09 03:57:03.808230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.476  [2024-12-09 03:57:03.808244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.476  [2024-12-09 03:57:03.808283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.476  [2024-12-09 03:57:03.808300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.476  [2024-12-09 03:57:03.808315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.476  [2024-12-09 03:57:03.808330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.476  [2024-12-09 03:57:03.808351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.476  [2024-12-09 03:57:03.808366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.476  [2024-12-09 03:57:03.808381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.476  [2024-12-09 03:57:03.808395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.476  [2024-12-09 03:57:03.808415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.476  [2024-12-09 03:57:03.808429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.476  [2024-12-09 03:57:03.808444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.476  [2024-12-09 03:57:03.808458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.476  [2024-12-09 03:57:03.808473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.476  [2024-12-09 03:57:03.808487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.476  [2024-12-09 03:57:03.808503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.476  [2024-12-09 03:57:03.808517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.476  [2024-12-09 03:57:03.808532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.476  [2024-12-09 03:57:03.808546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.476  [2024-12-09 03:57:03.808561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.476  [2024-12-09 03:57:03.808600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.476  [2024-12-09 03:57:03.808615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.476  [2024-12-09 03:57:03.808629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.476  [2024-12-09 03:57:03.808643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.476  [2024-12-09 03:57:03.808657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.476  [2024-12-09 03:57:03.808672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.476  [2024-12-09 03:57:03.808685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.476  [2024-12-09 03:57:03.808700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.476  [2024-12-09 03:57:03.808713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.476  [2024-12-09 03:57:03.808728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.476  [2024-12-09 03:57:03.808741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.476  [2024-12-09 03:57:03.808755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.476  [2024-12-09 03:57:03.808768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.476  [2024-12-09 03:57:03.808783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.476  [2024-12-09 03:57:03.808800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.476  [2024-12-09 03:57:03.808815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.476  [2024-12-09 03:57:03.808828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.476  [2024-12-09 03:57:03.808849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.476  [2024-12-09 03:57:03.808862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.476  [2024-12-09 03:57:03.808877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.476  [2024-12-09 03:57:03.808890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.476  [2024-12-09 03:57:03.808905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.476  [2024-12-09 03:57:03.808918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.476  [2024-12-09 03:57:03.808933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.476  [2024-12-09 03:57:03.808946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.477  [2024-12-09 03:57:03.808960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.477  [2024-12-09 03:57:03.808973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.477  [2024-12-09 03:57:03.808989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.477  [2024-12-09 03:57:03.809003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.477  [2024-12-09 03:57:03.809017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.477  [2024-12-09 03:57:03.809031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.477  [2024-12-09 03:57:03.809045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.477  [2024-12-09 03:57:03.809058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.477  [2024-12-09 03:57:03.809074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.477  [2024-12-09 03:57:03.809087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.477  [2024-12-09 03:57:03.809101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.477  [2024-12-09 03:57:03.809115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.477  [2024-12-09 03:57:03.809129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.477  [2024-12-09 03:57:03.809143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.477  [2024-12-09 03:57:03.809161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.477  [2024-12-09 03:57:03.809175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.477  [2024-12-09 03:57:03.809189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.477  [2024-12-09 03:57:03.809203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.477  [2024-12-09 03:57:03.809217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.477  [2024-12-09 03:57:03.809231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.477  [2024-12-09 03:57:03.809245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.477  [2024-12-09 03:57:03.809284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.477  [2024-12-09 03:57:03.809302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.477  [2024-12-09 03:57:03.809324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.477  [2024-12-09 03:57:03.809344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.477  [2024-12-09 03:57:03.809359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.477  [2024-12-09 03:57:03.809374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.477  [2024-12-09 03:57:03.809387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.477  [2024-12-09 03:57:03.809402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.477  [2024-12-09 03:57:03.809416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.477  [2024-12-09 03:57:03.809431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.477  [2024-12-09 03:57:03.809445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.477  [2024-12-09 03:57:03.809460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.477  [2024-12-09 03:57:03.809474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.477  [2024-12-09 03:57:03.809490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.477  [2024-12-09 03:57:03.809503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.477  [2024-12-09 03:57:03.809518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.477  [2024-12-09 03:57:03.809532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.477  [2024-12-09 03:57:03.809547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.477  [2024-12-09 03:57:03.809564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.477  [2024-12-09 03:57:03.809595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.477  [2024-12-09 03:57:03.809609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.477  [2024-12-09 03:57:03.809624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.477  [2024-12-09 03:57:03.809637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.477  [2024-12-09 03:57:03.809657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.477  [2024-12-09 03:57:03.809671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.477  [2024-12-09 03:57:03.809686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.477  [2024-12-09 03:57:03.809700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.477  [2024-12-09 03:57:03.809714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.477  [2024-12-09 03:57:03.809727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.477  [2024-12-09 03:57:03.809742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.477  [2024-12-09 03:57:03.809755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.477  [2024-12-09 03:57:03.810938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:06:35.477  task offset: 81920 on job bdev=Nvme0n1 fails
00:06:35.477  
00:06:35.477                                                                                                  Latency(us)
00:06:35.477  
[2024-12-09T02:57:04.053Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:06:35.477  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:06:35.477  Job: Nvme0n1 ended in about 0.41 seconds with error
00:06:35.477  	 Verification LBA range: start 0x0 length 0x400
00:06:35.477  	 Nvme0n1             :       0.41    1565.74      97.86     156.57     0.00   36109.85    2912.71   34564.17
00:06:35.477  
[2024-12-09T02:57:04.053Z]  ===================================================================================================================
00:06:35.477  
[2024-12-09T02:57:04.053Z]  Total                       :               1565.74      97.86     156.57     0.00   36109.85    2912.71   34564.17
00:06:35.477  [2024-12-09 03:57:03.812840] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:35.477  [2024-12-09 03:57:03.812868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0e660 (9): Bad file descriptor
00:06:35.477   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:35.477   03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:06:35.477  [2024-12-09 03:57:03.860731] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
00:06:36.412   03:57:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 129187
00:06:36.412  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (129187) - No such process
00:06:36.412   03:57:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true
00:06:36.412   03:57:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:06:36.412   03:57:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:06:36.412    03:57:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:06:36.412    03:57:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:06:36.412    03:57:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:06:36.412    03:57:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:06:36.412    03:57:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:06:36.412  {
00:06:36.412    "params": {
00:06:36.412      "name": "Nvme$subsystem",
00:06:36.412      "trtype": "$TEST_TRANSPORT",
00:06:36.412      "traddr": "$NVMF_FIRST_TARGET_IP",
00:06:36.412      "adrfam": "ipv4",
00:06:36.412      "trsvcid": "$NVMF_PORT",
00:06:36.412      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:06:36.412      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:06:36.412      "hdgst": ${hdgst:-false},
00:06:36.412      "ddgst": ${ddgst:-false}
00:06:36.412    },
00:06:36.412    "method": "bdev_nvme_attach_controller"
00:06:36.412  }
00:06:36.412  EOF
00:06:36.412  )")
00:06:36.412     03:57:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:06:36.412    03:57:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:06:36.412     03:57:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:06:36.412     03:57:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:06:36.412    "params": {
00:06:36.412      "name": "Nvme0",
00:06:36.412      "trtype": "tcp",
00:06:36.412      "traddr": "10.0.0.2",
00:06:36.412      "adrfam": "ipv4",
00:06:36.412      "trsvcid": "4420",
00:06:36.412      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:06:36.412      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:06:36.412      "hdgst": false,
00:06:36.412      "ddgst": false
00:06:36.412    },
00:06:36.412    "method": "bdev_nvme_attach_controller"
00:06:36.412  }'
00:06:36.412  [2024-12-09 03:57:04.867121] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:06:36.412  [2024-12-09 03:57:04.867189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129464 ]
00:06:36.412  [2024-12-09 03:57:04.936345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:36.671  [2024-12-09 03:57:04.996560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:36.929  Running I/O for 1 seconds...
00:06:37.864       1664.00 IOPS,   104.00 MiB/s
00:06:37.864                                                                                                  Latency(us)
00:06:37.864  
[2024-12-09T02:57:06.440Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:06:37.864  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:06:37.864  	 Verification LBA range: start 0x0 length 0x400
00:06:37.864  	 Nvme0n1             :       1.01    1703.09     106.44       0.00     0.00   36966.69    7233.23   33204.91
00:06:37.864  
[2024-12-09T02:57:06.440Z]  ===================================================================================================================
00:06:37.864  
[2024-12-09T02:57:06.440Z]  Total                       :               1703.09     106.44       0.00     0.00   36966.69    7233.23   33204.91
00:06:38.123   03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:06:38.123   03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:06:38.123   03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:06:38.123   03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:06:38.123   03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:06:38.123   03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:38.123   03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:06:38.123   03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:38.123   03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:06:38.123   03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:38.123   03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:06:38.123  rmmod nvme_tcp
00:06:38.123  rmmod nvme_fabrics
00:06:38.123  rmmod nvme_keyring
00:06:38.123   03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:06:38.123   03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:06:38.123   03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
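Annotation: the `set +e` / `for i in {1..20}` / `modprobe -v -r` / `set -e` / `return 0` sequence traced above is a tolerant retry loop: module removal is attempted repeatedly with errexit disabled, and the function succeeds regardless of the outcome. A hypothetical generic retry helper in that spirit (not the actual `common.sh` code), demonstrated with a command that fails twice before succeeding:

```shell
# Attempt "$@" up to $1 times with errexit disabled, then restore it.
retry() {
    local attempts=$1; shift
    local i
    set +e
    for ((i = 1; i <= attempts; i++)); do
        "$@" && break
    done
    set -e
    return 0  # mirror common.sh: succeed even if all attempts failed
}

# Demo: flaky succeeds only on its third call.
count=0
flaky() { count=$((count + 1)); [ "$count" -ge 3 ]; }
retry 5 flaky
echo "$count"   # 3: the loop stopped at the first success
```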
00:06:38.123   03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 129140 ']'
00:06:38.123   03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 129140
00:06:38.123   03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 129140 ']'
00:06:38.123   03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 129140
00:06:38.123    03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname
00:06:38.123   03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:38.123    03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 129140
00:06:38.123   03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:06:38.123   03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:06:38.123   03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 129140'
00:06:38.123  killing process with pid 129140
00:06:38.123   03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 129140
00:06:38.123   03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 129140
00:06:38.383  [2024-12-09 03:57:06.917743] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:06:38.383   03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:06:38.383   03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:06:38.383   03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:06:38.383   03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr
00:06:38.383   03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save
00:06:38.383   03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:06:38.383   03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore
00:06:38.383   03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:06:38.383   03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns
00:06:38.383   03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:38.383   03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:06:38.383    03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:40.916   03:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:06:40.916   03:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:06:40.916  
00:06:40.916  real	0m8.771s
00:06:40.916  user	0m19.761s
00:06:40.916  sys	0m2.693s
00:06:40.916   03:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:40.916   03:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:40.916  ************************************
00:06:40.916  END TEST nvmf_host_management
00:06:40.916  ************************************
00:06:40.916   03:57:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:06:40.916   03:57:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:40.916   03:57:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:40.916   03:57:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:06:40.916  ************************************
00:06:40.916  START TEST nvmf_lvol
00:06:40.916  ************************************
00:06:40.916   03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:06:40.916  * Looking for test storage...
00:06:40.916  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:06:40.916    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:40.916     03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version
00:06:40.916     03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:40.916    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:40.916    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:40.916    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:40.916    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:40.916    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-:
00:06:40.916    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1
00:06:40.916    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-:
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<'
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:40.917     03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1
00:06:40.917     03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1
00:06:40.917     03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:40.917     03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1
00:06:40.917     03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2
00:06:40.917     03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2
00:06:40.917     03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:40.917     03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0
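Annotation: the `cmp_versions 1.15 '<' 2` walk traced above splits each version string on `.-:` into an array, then compares the arrays field by field numerically, treating missing fields as 0. A condensed standalone sketch of that comparison (the real `scripts/common.sh` also handles `>`, `==`, and the `-ge`/`-le` wrappers):

```shell
# True (exit 0) iff version $1 is strictly less than version $2,
# comparing dot-separated fields numerically; missing fields count as 0.
lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    done
    return 1  # equal versions are not "less than"
}

lt 1.15 2 && echo yes || echo no   # yes: 1 < 2 decides at the first field
```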
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:06:40.917  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:40.917  		--rc genhtml_branch_coverage=1
00:06:40.917  		--rc genhtml_function_coverage=1
00:06:40.917  		--rc genhtml_legend=1
00:06:40.917  		--rc geninfo_all_blocks=1
00:06:40.917  		--rc geninfo_unexecuted_blocks=1
00:06:40.917  		
00:06:40.917  		'
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:06:40.917  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:40.917  		--rc genhtml_branch_coverage=1
00:06:40.917  		--rc genhtml_function_coverage=1
00:06:40.917  		--rc genhtml_legend=1
00:06:40.917  		--rc geninfo_all_blocks=1
00:06:40.917  		--rc geninfo_unexecuted_blocks=1
00:06:40.917  		
00:06:40.917  		'
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:06:40.917  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:40.917  		--rc genhtml_branch_coverage=1
00:06:40.917  		--rc genhtml_function_coverage=1
00:06:40.917  		--rc genhtml_legend=1
00:06:40.917  		--rc geninfo_all_blocks=1
00:06:40.917  		--rc geninfo_unexecuted_blocks=1
00:06:40.917  		
00:06:40.917  		'
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:06:40.917  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:40.917  		--rc genhtml_branch_coverage=1
00:06:40.917  		--rc genhtml_function_coverage=1
00:06:40.917  		--rc genhtml_legend=1
00:06:40.917  		--rc geninfo_all_blocks=1
00:06:40.917  		--rc geninfo_unexecuted_blocks=1
00:06:40.917  		
00:06:40.917  		'
00:06:40.917   03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:06:40.917     03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:40.917     03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:06:40.917     03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob
00:06:40.917     03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:40.917     03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:40.917     03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:40.917      03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:40.917      03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:40.917      03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:40.917      03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH
00:06:40.917      03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
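Annotation: each `paths/export.sh` step above prepends the same toolchain directories again, so PATH accumulates many duplicate entries by the time it is exported. A hypothetical dedupe pass (not part of `export.sh`) that keeps only the first occurrence of each entry:

```shell
# Split PATH on ':', keep the first occurrence of each entry, rejoin.
dedupe_path() {
    printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}

p="/opt/go/1.21.1/bin:/usr/bin:/opt/go/1.21.1/bin:/usr/bin:/sbin"
dedupe_path "$p"   # /opt/go/1.21.1/bin:/usr/bin:/sbin
```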
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:06:40.917  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0
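Annotation: the "integer expression expected" warning above (`common.sh` line 33) comes from testing an empty variable with `-eq`: `'[' '' -eq 1 ']'` is not a valid numeric comparison. A defensive pattern that avoids the warning — a sketch, not the actual `common.sh` fix — defaults the variable to 0 before the numeric test:

```shell
# An empty flag would trip "[ '' -eq 1 ]"; ${flag:-0} substitutes 0
# for both unset and empty values, keeping the test well-formed.
flag=""
if [ "${flag:-0}" -eq 1 ]; then
    echo enabled
else
    echo disabled
fi
```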
00:06:40.917   03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64
00:06:40.917   03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:06:40.917   03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20
00:06:40.917   03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30
00:06:40.917   03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:06:40.917   03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit
00:06:40.917   03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:06:40.917   03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:06:40.917   03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs
00:06:40.917   03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no
00:06:40.917   03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns
00:06:40.917   03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:40.917   03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:06:40.917    03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:40.917   03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:06:40.917   03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:06:40.918   03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable
00:06:40.918   03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:06:42.823   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:06:42.823   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=()
00:06:42.823   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs
00:06:42.823   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=()
00:06:42.823   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:06:42.823   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=()
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=()
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=()
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=()
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=()
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 ))
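Annotation: the array construction traced above buckets NICs by PCI vendor:device ID — Intel `0x1592`/`0x159b` into `e810`, Intel `0x37d2` into `x722`, the listed Mellanox IDs into `mlx` — and the later `Found 0000:0a:00.0 (0x8086 - 0x159b)` lines show both ports landing in the e810 bucket. A condensed sketch of that classification (IDs copied from the trace; the real script indexes a `pci_bus_cache` populated elsewhere):

```shell
# Map a vendor:device pair to the NIC family used by common.sh.
classify() {
    case "$1:$2" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;
        0x8086:0x37d2)               echo x722 ;;
        0x15b3:*)                    echo mlx  ;;
        *)                           echo unknown ;;
    esac
}

classify 0x8086 0x159b   # e810, matching the "Found 0000:0a:00.0" lines
```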
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:06:42.824  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:06:42.824  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]]
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:06:42.824  Found net devices under 0000:0a:00.0: cvl_0_0
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]]
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:06:42.824  Found net devices under 0000:0a:00.1: cvl_0_1
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:06:42.824   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:06:42.825   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:06:42.825   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:06:42.825   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:06:42.825   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:06:42.825  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:06:42.825  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms
00:06:42.825  
00:06:42.825  --- 10.0.0.2 ping statistics ---
00:06:42.825  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:42.825  rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms
00:06:42.825   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:06:42.825  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:06:42.825  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms
00:06:42.825  
00:06:42.825  --- 10.0.0.1 ping statistics ---
00:06:42.825  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:42.825  rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms
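The namespace topology the trace builds above (target interface moved into a netns, initiator kept on the host, then a ping in each direction) can be condensed into the sketch below. Interface and namespace names are taken from the log; the `run` helper that echoes instead of executing is mine, since the real commands need root and live devices.

```shell
# Dry-run sketch of the netns split above: "run" prints the command it
# would execute rather than executing it (the real steps require root).
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0; INITIATOR_IF=cvl_0_1; NS=cvl_0_0_ns_spdk

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"           # target side lives in the netns
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"    # initiator keeps the host side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ping -c 1 10.0.0.2                             # host -> namespace
run ip netns exec "$NS" ping -c 1 10.0.0.1         # namespace -> host
```

With both pings returning one packet each, the test proceeds to start `nvmf_tgt` inside the namespace via `$NVMF_TARGET_NS_CMD`.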
00:06:42.825   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:06:42.825   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0
00:06:42.825   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:06:42.825   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:06:42.825   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:06:42.825   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:06:42.825   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:06:43.083   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:06:43.083   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:06:43.083   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7
00:06:43.083   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:06:43.083   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:43.083   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:06:43.083   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=132188
00:06:43.083   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7
00:06:43.083   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 132188
00:06:43.083   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 132188 ']'
00:06:43.083   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:43.083   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:43.083   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:43.083  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:43.083   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:43.083   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:06:43.083  [2024-12-09 03:57:11.481399] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:06:43.083  [2024-12-09 03:57:11.481473] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:43.083  [2024-12-09 03:57:11.548546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:43.083  [2024-12-09 03:57:11.603105] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:06:43.083  [2024-12-09 03:57:11.603156] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:06:43.083  [2024-12-09 03:57:11.603179] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:43.083  [2024-12-09 03:57:11.603190] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:43.083  [2024-12-09 03:57:11.603199] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:06:43.083  [2024-12-09 03:57:11.604650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:43.083  [2024-12-09 03:57:11.604704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:43.083  [2024-12-09 03:57:11.604708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:43.341   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:43.341   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0
00:06:43.341   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:06:43.341   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:43.341   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:06:43.341   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:06:43.341   03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:06:43.599  [2024-12-09 03:57:11.987668] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:43.599    03:57:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:06:43.857   03:57:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 '
00:06:43.857    03:57:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:06:44.115   03:57:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1
00:06:44.115   03:57:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
00:06:44.373    03:57:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs
00:06:44.632   03:57:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=4d6b20f9-7b12-4320-bc2f-c3586e6ecf71
00:06:44.632    03:57:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4d6b20f9-7b12-4320-bc2f-c3586e6ecf71 lvol 20
00:06:44.890   03:57:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=12a87274-9a9a-48e6-87f8-5d638cc27fc0
00:06:44.890   03:57:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:06:45.148   03:57:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 12a87274-9a9a-48e6-87f8-5d638cc27fc0
00:06:45.406   03:57:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:06:45.664  [2024-12-09 03:57:14.217250] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:06:45.664   03:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:06:46.236   03:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=132509
00:06:46.236   03:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
00:06:46.236   03:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1
00:06:47.171    03:57:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 12a87274-9a9a-48e6-87f8-5d638cc27fc0 MY_SNAPSHOT
00:06:47.429   03:57:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f91135f0-b5b3-4211-8884-552842fc46e0
00:06:47.429   03:57:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 12a87274-9a9a-48e6-87f8-5d638cc27fc0 30
00:06:47.687    03:57:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone f91135f0-b5b3-4211-8884-552842fc46e0 MY_CLONE
00:06:47.945   03:57:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=632feaf9-dfbe-424f-921e-b9c4176bd00c
00:06:47.945   03:57:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 632feaf9-dfbe-424f-921e-b9c4176bd00c
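The lvol lifecycle exercised between lines `nvmf_lvol.sh@29` and `nvmf_lvol.sh@50` follows a fixed shape: create lvstore, create lvol, snapshot, resize the live lvol, clone the snapshot, inflate the clone. The sketch below re-states that sequence; the `rpc` stub that fabricates identifiers is mine (the real calls go through `scripts/rpc.py` and return UUIDs like the ones in the log).

```shell
# Mock of the rpc.py calls above: "rpc" returns a tag derived from its
# arguments instead of a real UUID, so the data flow between steps is
# visible without a running nvmf_tgt.
rpc() { echo "uuid-for:$*"; }

lvs=$(rpc bdev_lvol_create_lvstore raid0 lvs)
lvol=$(rpc bdev_lvol_create -u "$lvs" lvol 20)        # 20 MiB logical volume
snap=$(rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)    # freeze current state
rpc bdev_lvol_resize "$lvol" 30 >/dev/null            # grow the live lvol to 30 MiB
clone=$(rpc bdev_lvol_clone "$snap" MY_CLONE)         # thin clone of the snapshot
rpc bdev_lvol_inflate "$clone" >/dev/null             # decouple clone from snapshot
echo "lvs=$lvs"
```

Note the ordering: the snapshot is taken while `spdk_nvme_perf` is still writing, so resize, clone, and inflate all run against an lvstore under active I/O.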
00:06:48.878   03:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 132509
00:06:56.982  Initializing NVMe Controllers
00:06:56.982  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:06:56.982  Controller IO queue size 128, less than required.
00:06:56.982  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:56.982  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:06:56.982  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:06:56.982  Initialization complete. Launching workers.
00:06:56.982  ========================================================
00:06:56.982                                                                                                               Latency(us)
00:06:56.982  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:06:56.982  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core  3:   10554.70      41.23   12136.19     488.87   86009.03
00:06:56.982  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core  4:   10319.10      40.31   12409.19    2326.38   55050.34
00:06:56.982  ========================================================
00:06:56.982  Total                                                                    :   20873.80      81.54   12271.15     488.87   86009.03
00:06:56.982  
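The MiB/s column in the table above is derivable from the IOPS column and the 4096-byte I/O size passed to `spdk_nvme_perf` (`-o 4096`): MiB/s = IOPS × 4096 / 1048576. A quick cross-check against the reported figures:

```shell
# Recompute the table's MiB/s from its IOPS column and the -o 4096 I/O size.
c3=$(awk 'BEGIN { printf "%.2f", 10554.70 * 4096 / 1048576 }')
tot=$(awk 'BEGIN { printf "%.2f", 20873.80 * 4096 / 1048576 }')
echo "core 3: $c3 MiB/s (table: 41.23)"
echo "total : $tot MiB/s (table: 81.54)"
```

Both recomputed values match the table, so the per-core and total rows are internally consistent.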
00:06:56.982   03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:06:56.982   03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 12a87274-9a9a-48e6-87f8-5d638cc27fc0
00:06:57.240   03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4d6b20f9-7b12-4320-bc2f-c3586e6ecf71
00:06:57.498   03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:06:57.498   03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:06:57.498   03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:06:57.498   03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:57.498   03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:06:57.498   03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:57.498   03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:06:57.498   03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:57.498   03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:06:57.498  rmmod nvme_tcp
00:06:57.498  rmmod nvme_fabrics
00:06:57.498  rmmod nvme_keyring
00:06:57.498   03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:06:57.498   03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:06:57.498   03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:06:57.498   03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 132188 ']'
00:06:57.498   03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 132188
00:06:57.498   03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 132188 ']'
00:06:57.498   03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 132188
00:06:57.498    03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname
00:06:57.498   03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:57.498    03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 132188
00:06:57.498   03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:57.498   03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:57.498   03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 132188'
00:06:57.498  killing process with pid 132188
00:06:57.498   03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 132188
00:06:57.498   03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 132188
00:06:57.758   03:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:06:57.758   03:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:06:57.758   03:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:06:57.758   03:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr
00:06:57.758   03:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save
00:06:57.758   03:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:06:57.758   03:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore
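The `iptr` cleanup above pairs with the `ipts` setup earlier in the log: every rule the test adds carries an `-m comment --comment 'SPDK_NVMF:…'` tag, and teardown is simply `iptables-save | grep -v SPDK_NVMF | iptables-restore`, which drops exactly the tagged rules. The filtering half of that pattern is plain text processing and can be demonstrated without a live firewall; the rule text below is illustrative, not from this run.

```shell
# Simulate the tag-and-strip cleanup: filter an iptables-save-style dump,
# keeping only rules without the SPDK_NVMF comment tag.
saved='-A INPUT -p tcp --dport 22 -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
-A INPUT -j DROP'
kept=$(printf '%s\n' "$saved" | grep -v SPDK_NVMF)
printf '%s\n' "$kept"
```

The tagged 4420 rule disappears while unrelated rules survive, which is why the test never has to remember which rules it added.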
00:06:57.758   03:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:06:57.758   03:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns
00:06:57.758   03:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:57.758   03:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:06:57.758    03:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:00.293   03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:07:00.293  
00:07:00.293  real	0m19.251s
00:07:00.293  user	1m5.940s
00:07:00.293  sys	0m5.407s
00:07:00.293   03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:00.293   03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:07:00.293  ************************************
00:07:00.293  END TEST nvmf_lvol
00:07:00.293  ************************************
00:07:00.293   03:57:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:07:00.293   03:57:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:00.293   03:57:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:00.293   03:57:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:00.293  ************************************
00:07:00.293  START TEST nvmf_lvs_grow
00:07:00.293  ************************************
00:07:00.293   03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:07:00.293  * Looking for test storage...
00:07:00.293  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:07:00.293    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:00.293     03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version
00:07:00.293     03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:00.293    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:00.293    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:00.293    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:00.293    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:00.293    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-:
00:07:00.293    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1
00:07:00.293    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-:
00:07:00.293    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2
00:07:00.293    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<'
00:07:00.293    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2
00:07:00.293    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1
00:07:00.293    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:00.293    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in
00:07:00.293    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1
00:07:00.293    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:00.293    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:00.293     03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1
00:07:00.293     03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1
00:07:00.293     03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:00.293     03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1
00:07:00.293    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1
00:07:00.293     03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2
00:07:00.293     03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2
00:07:00.293     03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:00.293     03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2
00:07:00.293    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2
00:07:00.293    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:00.293    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:00.293    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0
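The `cmp_versions` walk traced above (splitting `1.15` and `2` into components, comparing per-position, returning at `scripts/common.sh@368`) implements a component-wise numeric version comparison, used here to pick lcov options. The function below is a simplified re-statement of that logic, not the script's exact code; it splits on `.` only, where the original also splits on `-` and `:`.

```shell
# Simplified version of the cmp_versions "<" path: split both versions
# into numeric components and compare position by position, treating a
# missing component as 0.
version_lt() {
  local IFS=.
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for ((i = 0; i < n; i++)); do
    if (( ${a[i]:-0} < ${b[i]:-0} )); then return 0; fi
    if (( ${a[i]:-0} > ${b[i]:-0} )); then return 1; fi
  done
  return 1  # equal versions are not less-than
}

version_lt 1.15 2 && echo "1.15 < 2: installed lcov predates 2.x"
```

This matches the trace's outcome: `lt 1.15 2` succeeds at the first component (1 < 2), so the 1.x-era `--rc lcov_branch_coverage` spellings are selected for `LCOV_OPTS`.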
00:07:00.293    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:00.293    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:00.293  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:00.293  		--rc genhtml_branch_coverage=1
00:07:00.293  		--rc genhtml_function_coverage=1
00:07:00.293  		--rc genhtml_legend=1
00:07:00.293  		--rc geninfo_all_blocks=1
00:07:00.293  		--rc geninfo_unexecuted_blocks=1
00:07:00.293  		
00:07:00.293  		'
00:07:00.293    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:00.293  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:00.293  		--rc genhtml_branch_coverage=1
00:07:00.293  		--rc genhtml_function_coverage=1
00:07:00.293  		--rc genhtml_legend=1
00:07:00.293  		--rc geninfo_all_blocks=1
00:07:00.293  		--rc geninfo_unexecuted_blocks=1
00:07:00.293  		
00:07:00.293  		'
00:07:00.293    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:00.293  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:00.293  		--rc genhtml_branch_coverage=1
00:07:00.293  		--rc genhtml_function_coverage=1
00:07:00.293  		--rc genhtml_legend=1
00:07:00.293  		--rc geninfo_all_blocks=1
00:07:00.294  		--rc geninfo_unexecuted_blocks=1
00:07:00.294  		
00:07:00.294  		'
00:07:00.294    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:07:00.294  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:00.294  		--rc genhtml_branch_coverage=1
00:07:00.294  		--rc genhtml_function_coverage=1
00:07:00.294  		--rc genhtml_legend=1
00:07:00.294  		--rc geninfo_all_blocks=1
00:07:00.294  		--rc geninfo_unexecuted_blocks=1
00:07:00.294  		
00:07:00.294  		'
00:07:00.294   03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:07:00.294     03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s
00:07:00.294    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:00.294    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:00.294    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:00.294    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:00.294    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:00.294    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:00.294    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:00.294    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:00.294    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:00.294     03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:00.294    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:07:00.294    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:07:00.294    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:00.294    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:00.294    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:00.294    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:00.294    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:00.294     03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob
00:07:00.294     03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:00.294     03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:00.294     03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:00.294      03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:00.294      03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:00.294      03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:00.294      03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH
00:07:00.294      03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:00.294    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0
00:07:00.294    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:00.294    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:00.294    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:00.294    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:00.294    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:00.294    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:07:00.294  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:07:00.294    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:07:00.294    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:07:00.294    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0
00:07:00.294   03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:07:00.294   03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:07:00.294   03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit
00:07:00.294   03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:07:00.294   03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:07:00.294   03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs
00:07:00.294   03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no
00:07:00.294   03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns
00:07:00.294   03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:00.294   03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:00.294    03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:00.294   03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:07:00.294   03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:07:00.294   03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable
00:07:00.294   03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:07:02.198   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:07:02.198   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=()
00:07:02.198   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs
00:07:02.198   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=()
00:07:02.198   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:07:02.198   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=()
00:07:02.198   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers
00:07:02.198   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=()
00:07:02.198   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs
00:07:02.198   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=()
00:07:02.198   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810
00:07:02.198   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=()
00:07:02.198   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722
00:07:02.198   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=()
00:07:02.198   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx
00:07:02.198   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:07:02.198   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:07:02.198   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:07:02.198   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:07:02.198   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:07:02.198   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:07:02.198   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:07:02.198   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:07:02.198   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:07:02.198   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:07:02.198   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:07:02.198   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:07:02.198   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:07:02.198   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:07:02.198   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:07:02.198   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:07:02.198   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:07:02.198   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:07:02.198   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:07:02.198   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:07:02.198  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:07:02.198   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:07:02.198   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:07:02.199  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]]
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:07:02.199  Found net devices under 0000:0a:00.0: cvl_0_0
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]]
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:07:02.199  Found net devices under 0000:0a:00.1: cvl_0_1
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:07:02.199  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:02.199  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms
00:07:02.199  
00:07:02.199  --- 10.0.0.2 ping statistics ---
00:07:02.199  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:02.199  rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:02.199  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:02.199  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms
00:07:02.199  
00:07:02.199  --- 10.0.0.1 ping statistics ---
00:07:02.199  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:02.199  rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:07:02.199   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:07:02.457   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1
00:07:02.457   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:07:02.457   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:02.457   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:07:02.457   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=135901
00:07:02.457   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:07:02.457   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 135901
00:07:02.457   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 135901 ']'
00:07:02.457   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:02.457   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:02.457   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:02.457  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:02.457   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:02.457   03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:07:02.457  [2024-12-09 03:57:30.834915] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:07:02.457  [2024-12-09 03:57:30.835008] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:02.457  [2024-12-09 03:57:30.911769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:02.457  [2024-12-09 03:57:30.965191] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:02.457  [2024-12-09 03:57:30.965251] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:02.457  [2024-12-09 03:57:30.965281] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:02.457  [2024-12-09 03:57:30.965294] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:02.457  [2024-12-09 03:57:30.965303] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:02.457  [2024-12-09 03:57:30.965882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:02.715   03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:02.715   03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0
00:07:02.715   03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:07:02.715   03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:02.715   03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:07:02.715   03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:02.715   03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:07:02.973  [2024-12-09 03:57:31.343769] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:02.973   03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow
00:07:02.973   03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:02.973   03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:02.973   03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:07:02.973  ************************************
00:07:02.973  START TEST lvs_grow_clean
00:07:02.973  ************************************
00:07:02.973   03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow
00:07:02.973   03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:07:02.973   03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:07:02.973   03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:07:02.973   03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:07:02.973   03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:07:02.973   03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:07:02.973   03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:07:02.973   03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:07:02.973    03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:07:03.230   03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:07:03.230    03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:07:03.488   03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=34ecfbcc-a21c-49f9-918d-7098a215138a
00:07:03.488    03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34ecfbcc-a21c-49f9-918d-7098a215138a
00:07:03.488    03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:07:03.746   03:57:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:07:03.746   03:57:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
00:07:03.746    03:57:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 34ecfbcc-a21c-49f9-918d-7098a215138a lvol 150
00:07:04.004   03:57:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=498e530e-fad8-4e8d-b470-6d6b8c5c5f70
00:07:04.004   03:57:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:07:04.004   03:57:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:07:04.261  [2024-12-09 03:57:32.772719] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:07:04.261  [2024-12-09 03:57:32.772814] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:07:04.261  true
00:07:04.261    03:57:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34ecfbcc-a21c-49f9-918d-7098a215138a
00:07:04.261    03:57:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:07:04.518   03:57:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
00:07:04.518   03:57:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:07:04.775   03:57:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 498e530e-fad8-4e8d-b470-6d6b8c5c5f70
00:07:05.033   03:57:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:07:05.291  [2024-12-09 03:57:33.843982] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:05.291   03:57:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:07:05.856   03:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=136335
00:07:05.856   03:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:07:05.856   03:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:07:05.856   03:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 136335 /var/tmp/bdevperf.sock
00:07:05.856   03:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 136335 ']'
00:07:05.856   03:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:07:05.856   03:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:05.856   03:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:07:05.856  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:07:05.856   03:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:05.856   03:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:07:05.856  [2024-12-09 03:57:34.169937] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:07:05.856  [2024-12-09 03:57:34.170011] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136335 ]
00:07:05.856  [2024-12-09 03:57:34.239243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:05.856  [2024-12-09 03:57:34.298076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:05.856   03:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:05.856   03:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0
00:07:05.856   03:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:07:06.421  Nvme0n1
00:07:06.421   03:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:07:06.683  [
00:07:06.683    {
00:07:06.683      "name": "Nvme0n1",
00:07:06.683      "aliases": [
00:07:06.683        "498e530e-fad8-4e8d-b470-6d6b8c5c5f70"
00:07:06.683      ],
00:07:06.683      "product_name": "NVMe disk",
00:07:06.683      "block_size": 4096,
00:07:06.683      "num_blocks": 38912,
00:07:06.683      "uuid": "498e530e-fad8-4e8d-b470-6d6b8c5c5f70",
00:07:06.683      "numa_id": 0,
00:07:06.683      "assigned_rate_limits": {
00:07:06.683        "rw_ios_per_sec": 0,
00:07:06.683        "rw_mbytes_per_sec": 0,
00:07:06.683        "r_mbytes_per_sec": 0,
00:07:06.683        "w_mbytes_per_sec": 0
00:07:06.683      },
00:07:06.683      "claimed": false,
00:07:06.683      "zoned": false,
00:07:06.683      "supported_io_types": {
00:07:06.683        "read": true,
00:07:06.683        "write": true,
00:07:06.683        "unmap": true,
00:07:06.683        "flush": true,
00:07:06.683        "reset": true,
00:07:06.683        "nvme_admin": true,
00:07:06.683        "nvme_io": true,
00:07:06.683        "nvme_io_md": false,
00:07:06.683        "write_zeroes": true,
00:07:06.683        "zcopy": false,
00:07:06.683        "get_zone_info": false,
00:07:06.683        "zone_management": false,
00:07:06.683        "zone_append": false,
00:07:06.683        "compare": true,
00:07:06.683        "compare_and_write": true,
00:07:06.683        "abort": true,
00:07:06.683        "seek_hole": false,
00:07:06.683        "seek_data": false,
00:07:06.683        "copy": true,
00:07:06.683        "nvme_iov_md": false
00:07:06.683      },
00:07:06.683      "memory_domains": [
00:07:06.683        {
00:07:06.683          "dma_device_id": "system",
00:07:06.683          "dma_device_type": 1
00:07:06.683        }
00:07:06.683      ],
00:07:06.683      "driver_specific": {
00:07:06.683        "nvme": [
00:07:06.683          {
00:07:06.683            "trid": {
00:07:06.683              "trtype": "TCP",
00:07:06.683              "adrfam": "IPv4",
00:07:06.683              "traddr": "10.0.0.2",
00:07:06.683              "trsvcid": "4420",
00:07:06.683              "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:07:06.683            },
00:07:06.683            "ctrlr_data": {
00:07:06.683              "cntlid": 1,
00:07:06.683              "vendor_id": "0x8086",
00:07:06.683              "model_number": "SPDK bdev Controller",
00:07:06.683              "serial_number": "SPDK0",
00:07:06.683              "firmware_revision": "25.01",
00:07:06.683              "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:07:06.683              "oacs": {
00:07:06.683                "security": 0,
00:07:06.683                "format": 0,
00:07:06.683                "firmware": 0,
00:07:06.683                "ns_manage": 0
00:07:06.683              },
00:07:06.683              "multi_ctrlr": true,
00:07:06.683              "ana_reporting": false
00:07:06.683            },
00:07:06.683            "vs": {
00:07:06.683              "nvme_version": "1.3"
00:07:06.683            },
00:07:06.683            "ns_data": {
00:07:06.683              "id": 1,
00:07:06.683              "can_share": true
00:07:06.683            }
00:07:06.683          }
00:07:06.683        ],
00:07:06.683        "mp_policy": "active_passive"
00:07:06.683      }
00:07:06.683    }
00:07:06.683  ]
00:07:06.683   03:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=136473
00:07:06.683   03:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:07:06.683   03:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:07:06.942  Running I/O for 10 seconds...
00:07:07.874                                                                                                  Latency(us)
00:07:07.874  
[2024-12-09T02:57:36.450Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:07:07.874  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:07.874  	 Nvme0n1             :       1.00   13686.00      53.46       0.00     0.00       0.00       0.00       0.00
00:07:07.874  
[2024-12-09T02:57:36.450Z]  ===================================================================================================================
00:07:07.874  
[2024-12-09T02:57:36.450Z]  Total                       :              13686.00      53.46       0.00     0.00       0.00       0.00       0.00
00:07:07.874  
00:07:08.806   03:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 34ecfbcc-a21c-49f9-918d-7098a215138a
00:07:08.806  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:08.806  	 Nvme0n1             :       2.00   13739.00      53.67       0.00     0.00       0.00       0.00       0.00
00:07:08.806  
[2024-12-09T02:57:37.382Z]  ===================================================================================================================
00:07:08.806  
[2024-12-09T02:57:37.382Z]  Total                       :              13739.00      53.67       0.00     0.00       0.00       0.00       0.00
00:07:08.806  
00:07:09.066  true
00:07:09.066    03:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34ecfbcc-a21c-49f9-918d-7098a215138a
00:07:09.066    03:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:07:09.325   03:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:07:09.325   03:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:07:09.325   03:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 136473
00:07:09.894  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:09.894  	 Nvme0n1             :       3.00   13802.00      53.91       0.00     0.00       0.00       0.00       0.00
00:07:09.894  
[2024-12-09T02:57:38.470Z]  ===================================================================================================================
00:07:09.894  
[2024-12-09T02:57:38.470Z]  Total                       :              13802.00      53.91       0.00     0.00       0.00       0.00       0.00
00:07:09.894  
00:07:10.829  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:10.829  	 Nvme0n1             :       4.00   13891.50      54.26       0.00     0.00       0.00       0.00       0.00
00:07:10.829  
[2024-12-09T02:57:39.405Z]  ===================================================================================================================
00:07:10.829  
[2024-12-09T02:57:39.405Z]  Total                       :              13891.50      54.26       0.00     0.00       0.00       0.00       0.00
00:07:10.829  
00:07:11.762  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:11.763  	 Nvme0n1             :       5.00   13935.60      54.44       0.00     0.00       0.00       0.00       0.00
00:07:11.763  
[2024-12-09T02:57:40.339Z]  ===================================================================================================================
00:07:11.763  
[2024-12-09T02:57:40.339Z]  Total                       :              13935.60      54.44       0.00     0.00       0.00       0.00       0.00
00:07:11.763  
00:07:13.139  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:13.139  	 Nvme0n1             :       6.00   13951.67      54.50       0.00     0.00       0.00       0.00       0.00
00:07:13.139  
[2024-12-09T02:57:41.715Z]  ===================================================================================================================
00:07:13.139  
[2024-12-09T02:57:41.715Z]  Total                       :              13951.67      54.50       0.00     0.00       0.00       0.00       0.00
00:07:13.139  
00:07:14.073  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:14.073  	 Nvme0n1             :       7.00   13984.86      54.63       0.00     0.00       0.00       0.00       0.00
00:07:14.073  
[2024-12-09T02:57:42.649Z]  ===================================================================================================================
00:07:14.073  
[2024-12-09T02:57:42.649Z]  Total                       :              13984.86      54.63       0.00     0.00       0.00       0.00       0.00
00:07:14.073  
00:07:15.010  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:15.010  	 Nvme0n1             :       8.00   14020.75      54.77       0.00     0.00       0.00       0.00       0.00
00:07:15.010  
[2024-12-09T02:57:43.586Z]  ===================================================================================================================
00:07:15.010  
[2024-12-09T02:57:43.586Z]  Total                       :              14020.75      54.77       0.00     0.00       0.00       0.00       0.00
00:07:15.010  
00:07:15.945  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:15.945  	 Nvme0n1             :       9.00   14043.33      54.86       0.00     0.00       0.00       0.00       0.00
00:07:15.945  
[2024-12-09T02:57:44.521Z]  ===================================================================================================================
00:07:15.945  
[2024-12-09T02:57:44.521Z]  Total                       :              14043.33      54.86       0.00     0.00       0.00       0.00       0.00
00:07:15.945  
00:07:16.897  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:16.897  	 Nvme0n1             :      10.00   14051.80      54.89       0.00     0.00       0.00       0.00       0.00
00:07:16.897  
[2024-12-09T02:57:45.473Z]  ===================================================================================================================
00:07:16.897  
[2024-12-09T02:57:45.473Z]  Total                       :              14051.80      54.89       0.00     0.00       0.00       0.00       0.00
00:07:16.897  
00:07:16.897  
00:07:16.897                                                                                                  Latency(us)
00:07:16.897  
[2024-12-09T02:57:45.473Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:07:16.897  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:16.897  	 Nvme0n1             :      10.01   14051.98      54.89       0.00     0.00    9100.82    6359.42   15825.73
00:07:16.897  
[2024-12-09T02:57:45.473Z]  ===================================================================================================================
00:07:16.897  
[2024-12-09T02:57:45.473Z]  Total                       :              14051.98      54.89       0.00     0.00    9100.82    6359.42   15825.73
00:07:16.897  {
00:07:16.897    "results": [
00:07:16.897      {
00:07:16.897        "job": "Nvme0n1",
00:07:16.897        "core_mask": "0x2",
00:07:16.897        "workload": "randwrite",
00:07:16.897        "status": "finished",
00:07:16.897        "queue_depth": 128,
00:07:16.897        "io_size": 4096,
00:07:16.897        "runtime": 10.008413,
00:07:16.897        "iops": 14051.978070848994,
00:07:16.897        "mibps": 54.890539339253884,
00:07:16.897        "io_failed": 0,
00:07:16.897        "io_timeout": 0,
00:07:16.897        "avg_latency_us": 9100.820798340685,
00:07:16.897        "min_latency_us": 6359.419259259259,
00:07:16.897        "max_latency_us": 15825.730370370371
00:07:16.897      }
00:07:16.897    ],
00:07:16.897    "core_count": 1
00:07:16.897  }
00:07:16.897   03:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 136335
00:07:16.897   03:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 136335 ']'
00:07:16.897   03:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 136335
00:07:16.897    03:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname
00:07:16.897   03:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:16.897    03:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 136335
00:07:16.897   03:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:07:16.897   03:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:07:16.897   03:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 136335'
00:07:16.897  killing process with pid 136335
00:07:16.897   03:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 136335
00:07:16.897  Received shutdown signal, test time was about 10.000000 seconds
00:07:16.897  
00:07:16.897                                                                                                  Latency(us)
00:07:16.897  
[2024-12-09T02:57:45.473Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:07:16.897  
[2024-12-09T02:57:45.473Z]  ===================================================================================================================
00:07:16.897  
[2024-12-09T02:57:45.473Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:07:16.897   03:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 136335
00:07:17.155   03:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:07:17.413   03:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:07:17.671    03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34ecfbcc-a21c-49f9-918d-7098a215138a
00:07:17.671    03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:07:17.929   03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:07:17.929   03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]]
00:07:17.929   03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:07:18.187  [2024-12-09 03:57:46.655188] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:07:18.187   03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34ecfbcc-a21c-49f9-918d-7098a215138a
00:07:18.187   03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0
00:07:18.187   03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34ecfbcc-a21c-49f9-918d-7098a215138a
00:07:18.187   03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:07:18.187   03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:18.187    03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:07:18.187   03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:18.187    03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:07:18.187   03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:18.187   03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:07:18.187   03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:07:18.187   03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34ecfbcc-a21c-49f9-918d-7098a215138a
00:07:18.445  request:
00:07:18.445  {
00:07:18.445    "uuid": "34ecfbcc-a21c-49f9-918d-7098a215138a",
00:07:18.445    "method": "bdev_lvol_get_lvstores",
00:07:18.445    "req_id": 1
00:07:18.445  }
00:07:18.445  Got JSON-RPC error response
00:07:18.445  response:
00:07:18.445  {
00:07:18.445    "code": -19,
00:07:18.445    "message": "No such device"
00:07:18.445  }
00:07:18.445   03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1
00:07:18.445   03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:18.445   03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:18.445   03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:18.445   03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:07:18.703  aio_bdev
00:07:18.703   03:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 498e530e-fad8-4e8d-b470-6d6b8c5c5f70
00:07:18.703   03:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=498e530e-fad8-4e8d-b470-6d6b8c5c5f70
00:07:18.703   03:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:07:18.703   03:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i
00:07:18.703   03:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:07:18.703   03:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:07:18.703   03:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:07:18.961   03:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 498e530e-fad8-4e8d-b470-6d6b8c5c5f70 -t 2000
00:07:19.220  [
00:07:19.220    {
00:07:19.220      "name": "498e530e-fad8-4e8d-b470-6d6b8c5c5f70",
00:07:19.220      "aliases": [
00:07:19.220        "lvs/lvol"
00:07:19.220      ],
00:07:19.220      "product_name": "Logical Volume",
00:07:19.220      "block_size": 4096,
00:07:19.220      "num_blocks": 38912,
00:07:19.220      "uuid": "498e530e-fad8-4e8d-b470-6d6b8c5c5f70",
00:07:19.220      "assigned_rate_limits": {
00:07:19.220        "rw_ios_per_sec": 0,
00:07:19.220        "rw_mbytes_per_sec": 0,
00:07:19.220        "r_mbytes_per_sec": 0,
00:07:19.220        "w_mbytes_per_sec": 0
00:07:19.220      },
00:07:19.220      "claimed": false,
00:07:19.220      "zoned": false,
00:07:19.220      "supported_io_types": {
00:07:19.220        "read": true,
00:07:19.220        "write": true,
00:07:19.220        "unmap": true,
00:07:19.220        "flush": false,
00:07:19.220        "reset": true,
00:07:19.220        "nvme_admin": false,
00:07:19.220        "nvme_io": false,
00:07:19.220        "nvme_io_md": false,
00:07:19.220        "write_zeroes": true,
00:07:19.220        "zcopy": false,
00:07:19.220        "get_zone_info": false,
00:07:19.220        "zone_management": false,
00:07:19.220        "zone_append": false,
00:07:19.220        "compare": false,
00:07:19.220        "compare_and_write": false,
00:07:19.220        "abort": false,
00:07:19.220        "seek_hole": true,
00:07:19.220        "seek_data": true,
00:07:19.220        "copy": false,
00:07:19.220        "nvme_iov_md": false
00:07:19.220      },
00:07:19.220      "driver_specific": {
00:07:19.220        "lvol": {
00:07:19.220          "lvol_store_uuid": "34ecfbcc-a21c-49f9-918d-7098a215138a",
00:07:19.220          "base_bdev": "aio_bdev",
00:07:19.220          "thin_provision": false,
00:07:19.220          "num_allocated_clusters": 38,
00:07:19.220          "snapshot": false,
00:07:19.220          "clone": false,
00:07:19.220          "esnap_clone": false
00:07:19.220        }
00:07:19.220      }
00:07:19.220    }
00:07:19.220  ]
00:07:19.220   03:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0
00:07:19.220    03:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34ecfbcc-a21c-49f9-918d-7098a215138a
00:07:19.220    03:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:07:19.477   03:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 ))
00:07:19.733    03:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34ecfbcc-a21c-49f9-918d-7098a215138a
00:07:19.733    03:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters'
00:07:19.990   03:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 ))
00:07:19.990   03:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 498e530e-fad8-4e8d-b470-6d6b8c5c5f70
00:07:20.247   03:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 34ecfbcc-a21c-49f9-918d-7098a215138a
00:07:20.505   03:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:07:20.763   03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:07:20.763  
00:07:20.763  real	0m17.788s
00:07:20.763  user	0m17.255s
00:07:20.763  sys	0m1.897s
00:07:20.763   03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:20.763   03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:07:20.763  ************************************
00:07:20.763  END TEST lvs_grow_clean
00:07:20.763  ************************************
00:07:20.763   03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty
00:07:20.763   03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:20.763   03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:20.763   03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:07:20.763  ************************************
00:07:20.763  START TEST lvs_grow_dirty
00:07:20.763  ************************************
00:07:20.763   03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty
00:07:20.763   03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:07:20.763   03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:07:20.763   03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:07:20.763   03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:07:20.763   03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:07:20.763   03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:07:20.763   03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:07:20.763   03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:07:20.763    03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:07:21.021   03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:07:21.021    03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:07:21.278   03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=c7d1e89a-5807-4f63-bddd-953bfeeb50f6
00:07:21.278    03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7d1e89a-5807-4f63-bddd-953bfeeb50f6
00:07:21.278    03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:07:21.535   03:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:07:21.535   03:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
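The `data_clusters == 49` check follows from the sizes set up earlier in this test: a 200 MiB AIO file carved into 4 MiB clusters gives 50 clusters, one of which is consumed by lvstore metadata in this run (the one-cluster overhead is an assumption read off the observed value, not taken from the lvstore format spec):

```python
# Cluster math behind the data_clusters == 49 check above.
aio_init_size_mb = 200          # from truncate -s 200M
cluster_sz = 4 * 1024 * 1024    # from --cluster-sz 4194304

total_clusters = aio_init_size_mb * 1024 * 1024 // cluster_sz
metadata_clusters = 1           # assumption inferred from this log's output

print(total_clusters - metadata_clusters)  # prints 49
```

The same arithmetic explains the clean test's post-grow check: after the file is grown to 400 MiB, 100 clusters minus the metadata overhead yields the `data_clusters == 99` seen there.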
00:07:21.535    03:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c7d1e89a-5807-4f63-bddd-953bfeeb50f6 lvol 150
00:07:21.792   03:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1b740548-bc67-4906-8bb3-da9947314eed
00:07:21.793   03:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:07:21.793   03:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:07:22.051  [2024-12-09 03:57:50.613859] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:07:22.051  [2024-12-09 03:57:50.613955] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:07:22.051  true
00:07:22.309    03:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7d1e89a-5807-4f63-bddd-953bfeeb50f6
00:07:22.309    03:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:07:22.567   03:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
00:07:22.567   03:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:07:22.825   03:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1b740548-bc67-4906-8bb3-da9947314eed
00:07:23.096   03:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:07:23.355  [2024-12-09 03:57:51.701127] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:23.355   03:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:07:23.614   03:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=138531
00:07:23.614   03:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:07:23.614   03:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 138531 /var/tmp/bdevperf.sock
00:07:23.614   03:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 138531 ']'
00:07:23.614   03:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:07:23.614   03:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:07:23.614   03:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:23.614   03:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:07:23.614  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:07:23.614   03:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:23.614   03:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:07:23.614  [2024-12-09 03:57:52.075224] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:07:23.614  [2024-12-09 03:57:52.075335] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138531 ]
00:07:23.614  [2024-12-09 03:57:52.141362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:23.872  [2024-12-09 03:57:52.198353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:23.872   03:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:23.872   03:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0
00:07:23.872   03:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:07:24.130  Nvme0n1
00:07:24.130   03:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:07:24.388  [
00:07:24.388    {
00:07:24.388      "name": "Nvme0n1",
00:07:24.388      "aliases": [
00:07:24.388        "1b740548-bc67-4906-8bb3-da9947314eed"
00:07:24.388      ],
00:07:24.388      "product_name": "NVMe disk",
00:07:24.388      "block_size": 4096,
00:07:24.388      "num_blocks": 38912,
00:07:24.388      "uuid": "1b740548-bc67-4906-8bb3-da9947314eed",
00:07:24.388      "numa_id": 0,
00:07:24.388      "assigned_rate_limits": {
00:07:24.388        "rw_ios_per_sec": 0,
00:07:24.388        "rw_mbytes_per_sec": 0,
00:07:24.388        "r_mbytes_per_sec": 0,
00:07:24.388        "w_mbytes_per_sec": 0
00:07:24.388      },
00:07:24.388      "claimed": false,
00:07:24.388      "zoned": false,
00:07:24.388      "supported_io_types": {
00:07:24.388        "read": true,
00:07:24.388        "write": true,
00:07:24.388        "unmap": true,
00:07:24.388        "flush": true,
00:07:24.388        "reset": true,
00:07:24.388        "nvme_admin": true,
00:07:24.388        "nvme_io": true,
00:07:24.388        "nvme_io_md": false,
00:07:24.388        "write_zeroes": true,
00:07:24.388        "zcopy": false,
00:07:24.388        "get_zone_info": false,
00:07:24.388        "zone_management": false,
00:07:24.388        "zone_append": false,
00:07:24.388        "compare": true,
00:07:24.388        "compare_and_write": true,
00:07:24.388        "abort": true,
00:07:24.388        "seek_hole": false,
00:07:24.388        "seek_data": false,
00:07:24.388        "copy": true,
00:07:24.388        "nvme_iov_md": false
00:07:24.388      },
00:07:24.388      "memory_domains": [
00:07:24.388        {
00:07:24.388          "dma_device_id": "system",
00:07:24.388          "dma_device_type": 1
00:07:24.388        }
00:07:24.388      ],
00:07:24.388      "driver_specific": {
00:07:24.388        "nvme": [
00:07:24.388          {
00:07:24.388            "trid": {
00:07:24.388              "trtype": "TCP",
00:07:24.388              "adrfam": "IPv4",
00:07:24.388              "traddr": "10.0.0.2",
00:07:24.388              "trsvcid": "4420",
00:07:24.388              "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:07:24.388            },
00:07:24.388            "ctrlr_data": {
00:07:24.388              "cntlid": 1,
00:07:24.388              "vendor_id": "0x8086",
00:07:24.388              "model_number": "SPDK bdev Controller",
00:07:24.388              "serial_number": "SPDK0",
00:07:24.388              "firmware_revision": "25.01",
00:07:24.388              "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:07:24.388              "oacs": {
00:07:24.388                "security": 0,
00:07:24.388                "format": 0,
00:07:24.388                "firmware": 0,
00:07:24.388                "ns_manage": 0
00:07:24.388              },
00:07:24.388              "multi_ctrlr": true,
00:07:24.388              "ana_reporting": false
00:07:24.388            },
00:07:24.388            "vs": {
00:07:24.388              "nvme_version": "1.3"
00:07:24.388            },
00:07:24.389            "ns_data": {
00:07:24.389              "id": 1,
00:07:24.389              "can_share": true
00:07:24.389            }
00:07:24.389          }
00:07:24.389        ],
00:07:24.389        "mp_policy": "active_passive"
00:07:24.389      }
00:07:24.389    }
00:07:24.389  ]
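Annotation: the test only consumes a few fields of the `bdev_get_bdevs` response above. A minimal sketch of that extraction (the JSON below is a trimmed copy of the response, not a live RPC call):

```python
import json

# Trimmed copy of the bdev_get_bdevs output logged above; only the
# fields the test actually uses are kept.
bdev_json = '''
[
  {
    "name": "Nvme0n1",
    "block_size": 4096,
    "num_blocks": 38912,
    "uuid": "1b740548-bc67-4906-8bb3-da9947314eed"
  }
]
'''

bdev = json.loads(bdev_json)[0]

# Capacity in MiB: 38912 blocks * 4096 B/block = 152 MiB.
size_mib = bdev["num_blocks"] * bdev["block_size"] // (1024 * 1024)
print(bdev["name"], bdev["uuid"], size_mib)
```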
00:07:24.389   03:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=138550
00:07:24.389   03:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:07:24.389   03:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:07:24.646  Running I/O for 10 seconds...
00:07:25.580                                                                                                  Latency(us)
00:07:25.580  
[2024-12-09T02:57:54.156Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:07:25.580  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:25.580  	 Nvme0n1             :       1.00   14987.00      58.54       0.00     0.00       0.00       0.00       0.00
00:07:25.580  
[2024-12-09T02:57:54.156Z]  ===================================================================================================================
00:07:25.580  
[2024-12-09T02:57:54.156Z]  Total                       :              14987.00      58.54       0.00     0.00       0.00       0.00       0.00
00:07:25.580  
00:07:26.513   03:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c7d1e89a-5807-4f63-bddd-953bfeeb50f6
00:07:26.513  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:26.513  	 Nvme0n1             :       2.00   15177.00      59.29       0.00     0.00       0.00       0.00       0.00
00:07:26.513  
[2024-12-09T02:57:55.089Z]  ===================================================================================================================
00:07:26.513  
[2024-12-09T02:57:55.089Z]  Total                       :              15177.00      59.29       0.00     0.00       0.00       0.00       0.00
00:07:26.513  
00:07:26.776  true
00:07:26.776    03:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7d1e89a-5807-4f63-bddd-953bfeeb50f6
00:07:26.776    03:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:07:27.034   03:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:07:27.034   03:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:07:27.034   03:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 138550
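Annotation: the grow check at lines 61-62 of nvmf_lvs_grow.sh is a jq pipeline over `bdev_lvol_get_lvstores` followed by an arithmetic comparison. A stand-alone Python equivalent (the surrounding JSON fields are assumed; the log only shows the extracted value, 99):

```python
import json

# Hypothetical bdev_lvol_get_lvstores response; the log only shows the
# jq-extracted total_data_clusters value, so other fields are assumed.
lvstores_json = '''
[
  {
    "uuid": "c7d1e89a-5807-4f63-bddd-953bfeeb50f6",
    "total_data_clusters": 99
  }
]
'''

# Equivalent of: jq -r '.[0].total_data_clusters'
data_clusters = json.loads(lvstores_json)[0]["total_data_clusters"]

# Equivalent of the shell check: (( data_clusters == 99 ))
assert data_clusters == 99
```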
00:07:27.599  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:27.599  	 Nvme0n1             :       3.00   15261.67      59.62       0.00     0.00       0.00       0.00       0.00
00:07:27.599  
[2024-12-09T02:57:56.175Z]  ===================================================================================================================
00:07:27.599  
[2024-12-09T02:57:56.175Z]  Total                       :              15261.67      59.62       0.00     0.00       0.00       0.00       0.00
00:07:27.599  
00:07:28.531  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:28.531  	 Nvme0n1             :       4.00   15367.50      60.03       0.00     0.00       0.00       0.00       0.00
00:07:28.531  
[2024-12-09T02:57:57.107Z]  ===================================================================================================================
00:07:28.531  
[2024-12-09T02:57:57.107Z]  Total                       :              15367.50      60.03       0.00     0.00       0.00       0.00       0.00
00:07:28.531  
00:07:29.906  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:29.906  	 Nvme0n1             :       5.00   15431.20      60.28       0.00     0.00       0.00       0.00       0.00
00:07:29.906  
[2024-12-09T02:57:58.482Z]  ===================================================================================================================
00:07:29.906  
[2024-12-09T02:57:58.482Z]  Total                       :              15431.20      60.28       0.00     0.00       0.00       0.00       0.00
00:07:29.906  
00:07:30.842  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:30.842  	 Nvme0n1             :       6.00   15494.83      60.53       0.00     0.00       0.00       0.00       0.00
00:07:30.842  
[2024-12-09T02:57:59.418Z]  ===================================================================================================================
00:07:30.842  
[2024-12-09T02:57:59.418Z]  Total                       :              15494.83      60.53       0.00     0.00       0.00       0.00       0.00
00:07:30.842  
00:07:31.776  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:31.776  	 Nvme0n1             :       7.00   15535.86      60.69       0.00     0.00       0.00       0.00       0.00
00:07:31.776  
[2024-12-09T02:58:00.352Z]  ===================================================================================================================
00:07:31.776  
[2024-12-09T02:58:00.352Z]  Total                       :              15535.86      60.69       0.00     0.00       0.00       0.00       0.00
00:07:31.776  
00:07:32.710  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:32.710  	 Nvme0n1             :       8.00   15562.38      60.79       0.00     0.00       0.00       0.00       0.00
00:07:32.710  
[2024-12-09T02:58:01.286Z]  ===================================================================================================================
00:07:32.710  
[2024-12-09T02:58:01.286Z]  Total                       :              15562.38      60.79       0.00     0.00       0.00       0.00       0.00
00:07:32.710  
00:07:33.647  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:33.647  	 Nvme0n1             :       9.00   15597.11      60.93       0.00     0.00       0.00       0.00       0.00
00:07:33.647  
[2024-12-09T02:58:02.223Z]  ===================================================================================================================
00:07:33.647  
[2024-12-09T02:58:02.223Z]  Total                       :              15597.11      60.93       0.00     0.00       0.00       0.00       0.00
00:07:33.647  
00:07:34.583  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:34.583  	 Nvme0n1             :      10.00   15631.30      61.06       0.00     0.00       0.00       0.00       0.00
00:07:34.583  
[2024-12-09T02:58:03.159Z]  ===================================================================================================================
00:07:34.583  
[2024-12-09T02:58:03.159Z]  Total                       :              15631.30      61.06       0.00     0.00       0.00       0.00       0.00
00:07:34.583  
00:07:34.583  
00:07:34.583                                                                                                  Latency(us)
00:07:34.583  
[2024-12-09T02:58:03.160Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:07:34.584  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:34.584  	 Nvme0n1             :      10.00   15630.48      61.06       0.00     0.00    8183.93    4247.70   17573.36
00:07:34.584  
[2024-12-09T02:58:03.160Z]  ===================================================================================================================
00:07:34.584  
[2024-12-09T02:58:03.160Z]  Total                       :              15630.48      61.06       0.00     0.00    8183.93    4247.70   17573.36
00:07:34.584  {
00:07:34.584    "results": [
00:07:34.584      {
00:07:34.584        "job": "Nvme0n1",
00:07:34.584        "core_mask": "0x2",
00:07:34.584        "workload": "randwrite",
00:07:34.584        "status": "finished",
00:07:34.584        "queue_depth": 128,
00:07:34.584        "io_size": 4096,
00:07:34.584        "runtime": 10.004619,
00:07:34.584        "iops": 15630.480281158134,
00:07:34.584        "mibps": 61.05656359827396,
00:07:34.584        "io_failed": 0,
00:07:34.584        "io_timeout": 0,
00:07:34.584        "avg_latency_us": 8183.929029631383,
00:07:34.584        "min_latency_us": 4247.7037037037035,
00:07:34.584        "max_latency_us": 17573.357037037036
00:07:34.584      }
00:07:34.584    ],
00:07:34.584    "core_count": 1
00:07:34.584  }
00:07:34.584   03:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 138531
00:07:34.584   03:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 138531 ']'
00:07:34.584   03:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 138531
00:07:34.584    03:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname
00:07:34.584   03:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:34.584    03:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 138531
00:07:34.584   03:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:07:34.584   03:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:07:34.584   03:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 138531'
00:07:34.584  killing process with pid 138531
00:07:34.584   03:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 138531
00:07:34.584  Received shutdown signal, test time was about 10.000000 seconds
00:07:34.584  
00:07:34.584                                                                                                  Latency(us)
00:07:34.584  
[2024-12-09T02:58:03.160Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:07:34.584  
[2024-12-09T02:58:03.160Z]  ===================================================================================================================
00:07:34.584  
[2024-12-09T02:58:03.160Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:07:34.584   03:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 138531
00:07:34.843   03:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:07:35.102   03:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:07:35.361    03:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7d1e89a-5807-4f63-bddd-953bfeeb50f6
00:07:35.361    03:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:07:35.621   03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:07:35.621   03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]]
00:07:35.621   03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 135901
00:07:35.621   03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 135901
00:07:35.621  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 135901 Killed                  "${NVMF_APP[@]}" "$@"
00:07:35.621   03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true
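Annotation: lines 74-75 of the test script are the dirty-shutdown idiom: SIGKILL the nvmf target, then reap it, masking the non-zero wait status with `true`. A minimal sketch of that pattern (the `sleep` child is a placeholder for the target process):

```shell
# SIGKILL a child and reap it; wait returns 128+9=137 for a SIGKILLed
# child, so the caller masks the failure (the script uses "|| true").
sleep 60 &
pid=$!
kill -9 "$pid"
status=0
wait "$pid" || status=$?
echo "$status"
```

The non-zero status is expected here: the whole point of the `lvs_grow_dirty` case is that the target dies without closing the blobstore, so the follow-up `bdev_aio_create` has to perform recovery (visible in the `bs_recover` notices below).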
00:07:35.621   03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1
00:07:35.621   03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:07:35.621   03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:35.621   03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:07:35.621   03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=139889
00:07:35.621   03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:07:35.621   03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 139889
00:07:35.621   03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 139889 ']'
00:07:35.621   03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:35.621   03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:35.621   03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:35.621  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:35.621   03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:35.621   03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:07:35.880  [2024-12-09 03:58:04.246303] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:07:35.880  [2024-12-09 03:58:04.246376] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:35.880  [2024-12-09 03:58:04.314165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:35.880  [2024-12-09 03:58:04.368815] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:35.880  [2024-12-09 03:58:04.368877] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:35.880  [2024-12-09 03:58:04.368900] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:35.880  [2024-12-09 03:58:04.368910] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:35.880  [2024-12-09 03:58:04.368919] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:35.880  [2024-12-09 03:58:04.369516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:36.139   03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:36.139   03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0
00:07:36.139   03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:07:36.139   03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:36.139   03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:07:36.139   03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:36.139    03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:07:36.398  [2024-12-09 03:58:04.753558] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore
00:07:36.398  [2024-12-09 03:58:04.753682] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
00:07:36.398  [2024-12-09 03:58:04.753727] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
00:07:36.398   03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev
00:07:36.398   03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1b740548-bc67-4906-8bb3-da9947314eed
00:07:36.398   03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=1b740548-bc67-4906-8bb3-da9947314eed
00:07:36.398   03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:07:36.398   03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i
00:07:36.398   03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:07:36.398   03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:07:36.398   03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:07:36.657   03:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1b740548-bc67-4906-8bb3-da9947314eed -t 2000
00:07:36.914  [
00:07:36.914    {
00:07:36.914      "name": "1b740548-bc67-4906-8bb3-da9947314eed",
00:07:36.914      "aliases": [
00:07:36.914        "lvs/lvol"
00:07:36.914      ],
00:07:36.914      "product_name": "Logical Volume",
00:07:36.914      "block_size": 4096,
00:07:36.914      "num_blocks": 38912,
00:07:36.914      "uuid": "1b740548-bc67-4906-8bb3-da9947314eed",
00:07:36.914      "assigned_rate_limits": {
00:07:36.914        "rw_ios_per_sec": 0,
00:07:36.914        "rw_mbytes_per_sec": 0,
00:07:36.914        "r_mbytes_per_sec": 0,
00:07:36.914        "w_mbytes_per_sec": 0
00:07:36.914      },
00:07:36.914      "claimed": false,
00:07:36.914      "zoned": false,
00:07:36.914      "supported_io_types": {
00:07:36.914        "read": true,
00:07:36.914        "write": true,
00:07:36.914        "unmap": true,
00:07:36.914        "flush": false,
00:07:36.914        "reset": true,
00:07:36.914        "nvme_admin": false,
00:07:36.914        "nvme_io": false,
00:07:36.914        "nvme_io_md": false,
00:07:36.914        "write_zeroes": true,
00:07:36.914        "zcopy": false,
00:07:36.914        "get_zone_info": false,
00:07:36.914        "zone_management": false,
00:07:36.914        "zone_append": false,
00:07:36.914        "compare": false,
00:07:36.914        "compare_and_write": false,
00:07:36.914        "abort": false,
00:07:36.914        "seek_hole": true,
00:07:36.914        "seek_data": true,
00:07:36.914        "copy": false,
00:07:36.914        "nvme_iov_md": false
00:07:36.914      },
00:07:36.914      "driver_specific": {
00:07:36.914        "lvol": {
00:07:36.914          "lvol_store_uuid": "c7d1e89a-5807-4f63-bddd-953bfeeb50f6",
00:07:36.914          "base_bdev": "aio_bdev",
00:07:36.914          "thin_provision": false,
00:07:36.914          "num_allocated_clusters": 38,
00:07:36.914          "snapshot": false,
00:07:36.914          "clone": false,
00:07:36.914          "esnap_clone": false
00:07:36.914        }
00:07:36.914      }
00:07:36.914    }
00:07:36.914  ]
00:07:36.914   03:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0
00:07:36.914    03:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7d1e89a-5807-4f63-bddd-953bfeeb50f6
00:07:36.914    03:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters'
00:07:37.172   03:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 ))
00:07:37.172    03:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7d1e89a-5807-4f63-bddd-953bfeeb50f6
00:07:37.172    03:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters'
00:07:37.429   03:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 ))
00:07:37.430   03:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:07:37.687  [2024-12-09 03:58:06.123418] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:07:37.687   03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7d1e89a-5807-4f63-bddd-953bfeeb50f6
00:07:37.687   03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0
00:07:37.687   03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7d1e89a-5807-4f63-bddd-953bfeeb50f6
00:07:37.687   03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:07:37.687   03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:37.687    03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:07:37.687   03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:37.687    03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:07:37.687   03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:37.687   03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:07:37.687   03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:07:37.687   03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7d1e89a-5807-4f63-bddd-953bfeeb50f6
00:07:37.946  request:
00:07:37.947  {
00:07:37.947    "uuid": "c7d1e89a-5807-4f63-bddd-953bfeeb50f6",
00:07:37.947    "method": "bdev_lvol_get_lvstores",
00:07:37.947    "req_id": 1
00:07:37.947  }
00:07:37.947  Got JSON-RPC error response
00:07:37.947  response:
00:07:37.947  {
00:07:37.947    "code": -19,
00:07:37.947    "message": "No such device"
00:07:37.947  }
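Annotation: the negative `code` in this JSON-RPC error response is a negated errno value: -19 corresponds to ENODEV, which is exactly the "No such device" message shown (Linux errno numbering assumed):

```python
import errno
import os

# SPDK JSON-RPC errors carry negated errno values; the response above
# reports code -19 for a deleted lvstore.
code = -19
assert -code == errno.ENODEV

# On Linux this renders as "No such device", matching the log.
print(os.strerror(-code))
```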
00:07:37.947   03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1
00:07:37.947   03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:37.947   03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:37.947   03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:37.947   03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:07:38.206  aio_bdev
00:07:38.206   03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1b740548-bc67-4906-8bb3-da9947314eed
00:07:38.206   03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=1b740548-bc67-4906-8bb3-da9947314eed
00:07:38.206   03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:07:38.206   03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i
00:07:38.206   03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:07:38.206   03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:07:38.206   03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:07:38.465   03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1b740548-bc67-4906-8bb3-da9947314eed -t 2000
00:07:38.724  [
00:07:38.724    {
00:07:38.724      "name": "1b740548-bc67-4906-8bb3-da9947314eed",
00:07:38.724      "aliases": [
00:07:38.724        "lvs/lvol"
00:07:38.724      ],
00:07:38.724      "product_name": "Logical Volume",
00:07:38.724      "block_size": 4096,
00:07:38.724      "num_blocks": 38912,
00:07:38.724      "uuid": "1b740548-bc67-4906-8bb3-da9947314eed",
00:07:38.724      "assigned_rate_limits": {
00:07:38.724        "rw_ios_per_sec": 0,
00:07:38.724        "rw_mbytes_per_sec": 0,
00:07:38.724        "r_mbytes_per_sec": 0,
00:07:38.724        "w_mbytes_per_sec": 0
00:07:38.724      },
00:07:38.724      "claimed": false,
00:07:38.724      "zoned": false,
00:07:38.724      "supported_io_types": {
00:07:38.724        "read": true,
00:07:38.724        "write": true,
00:07:38.724        "unmap": true,
00:07:38.724        "flush": false,
00:07:38.724        "reset": true,
00:07:38.724        "nvme_admin": false,
00:07:38.724        "nvme_io": false,
00:07:38.724        "nvme_io_md": false,
00:07:38.724        "write_zeroes": true,
00:07:38.724        "zcopy": false,
00:07:38.724        "get_zone_info": false,
00:07:38.724        "zone_management": false,
00:07:38.724        "zone_append": false,
00:07:38.724        "compare": false,
00:07:38.724        "compare_and_write": false,
00:07:38.724        "abort": false,
00:07:38.724        "seek_hole": true,
00:07:38.724        "seek_data": true,
00:07:38.724        "copy": false,
00:07:38.724        "nvme_iov_md": false
00:07:38.724      },
00:07:38.724      "driver_specific": {
00:07:38.724        "lvol": {
00:07:38.724          "lvol_store_uuid": "c7d1e89a-5807-4f63-bddd-953bfeeb50f6",
00:07:38.724          "base_bdev": "aio_bdev",
00:07:38.724          "thin_provision": false,
00:07:38.724          "num_allocated_clusters": 38,
00:07:38.724          "snapshot": false,
00:07:38.724          "clone": false,
00:07:38.724          "esnap_clone": false
00:07:38.724        }
00:07:38.724      }
00:07:38.724    }
00:07:38.724  ]
00:07:38.724   03:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0
00:07:38.724    03:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7d1e89a-5807-4f63-bddd-953bfeeb50f6
00:07:38.724    03:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:07:38.982   03:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 ))
00:07:38.982    03:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7d1e89a-5807-4f63-bddd-953bfeeb50f6
00:07:38.982    03:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters'
00:07:39.240   03:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 ))
00:07:39.240   03:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1b740548-bc67-4906-8bb3-da9947314eed
00:07:39.498   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c7d1e89a-5807-4f63-bddd-953bfeeb50f6
00:07:40.064   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:07:40.064   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:07:40.064  
00:07:40.064  real	0m19.398s
00:07:40.064  user	0m49.064s
00:07:40.064  sys	0m4.770s
00:07:40.064   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:40.064   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:07:40.064  ************************************
00:07:40.064  END TEST lvs_grow_dirty
00:07:40.064  ************************************
00:07:40.322   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0
00:07:40.322   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id
00:07:40.322   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0
00:07:40.322   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']'
00:07:40.322    03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:07:40.322   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0
00:07:40.322   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]]
00:07:40.322   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files
00:07:40.322   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:07:40.322  nvmf_trace.0
00:07:40.322   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0
00:07:40.322   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini
00:07:40.322   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:40.322   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync
00:07:40.322   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:40.322   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e
00:07:40.322   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:40.322   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:40.322  rmmod nvme_tcp
00:07:40.322  rmmod nvme_fabrics
00:07:40.322  rmmod nvme_keyring
00:07:40.322   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:40.322   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e
00:07:40.322   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0
00:07:40.322   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 139889 ']'
00:07:40.322   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 139889
00:07:40.322   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 139889 ']'
00:07:40.322   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 139889
00:07:40.322    03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname
00:07:40.323   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:40.323    03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 139889
00:07:40.323   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:40.323   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:40.323   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 139889'
00:07:40.323  killing process with pid 139889
00:07:40.323   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 139889
00:07:40.323   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 139889
00:07:40.581   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:07:40.581   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:07:40.581   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:07:40.581   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr
00:07:40.581   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save
00:07:40.581   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:07:40.581   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore
00:07:40.581   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:07:40.581   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns
00:07:40.581   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:40.581   03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:40.581    03:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:42.494   03:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:07:42.494  
00:07:42.494  real	0m42.706s
00:07:42.494  user	1m12.364s
00:07:42.494  sys	0m8.659s
00:07:42.494   03:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:42.494   03:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:07:42.494  ************************************
00:07:42.494  END TEST nvmf_lvs_grow
00:07:42.494  ************************************
00:07:42.494   03:58:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp
00:07:42.494   03:58:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:42.494   03:58:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:42.494   03:58:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:42.754  ************************************
00:07:42.754  START TEST nvmf_bdev_io_wait
00:07:42.754  ************************************
00:07:42.754   03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp
00:07:42.754  * Looking for test storage...
00:07:42.754  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:07:42.754    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:42.754     03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version
00:07:42.754     03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:42.754    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:42.754    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:42.754    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:42.754    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:42.754    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-:
00:07:42.754    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1
00:07:42.754    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-:
00:07:42.754    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2
00:07:42.754    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<'
00:07:42.754    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2
00:07:42.754    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1
00:07:42.754    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:42.754    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in
00:07:42.754    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1
00:07:42.754    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:42.754    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:42.754     03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1
00:07:42.754     03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1
00:07:42.754     03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:42.754     03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1
00:07:42.754    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1
00:07:42.754     03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2
00:07:42.754     03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2
00:07:42.754     03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:42.754     03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2
00:07:42.754    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2
00:07:42.754    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:42.754    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:42.754    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0
00:07:42.754    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:42.755    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:42.755  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:42.755  		--rc genhtml_branch_coverage=1
00:07:42.755  		--rc genhtml_function_coverage=1
00:07:42.755  		--rc genhtml_legend=1
00:07:42.755  		--rc geninfo_all_blocks=1
00:07:42.755  		--rc geninfo_unexecuted_blocks=1
00:07:42.755  		
00:07:42.755  		'
00:07:42.755    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:42.755  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:42.755  		--rc genhtml_branch_coverage=1
00:07:42.755  		--rc genhtml_function_coverage=1
00:07:42.755  		--rc genhtml_legend=1
00:07:42.755  		--rc geninfo_all_blocks=1
00:07:42.755  		--rc geninfo_unexecuted_blocks=1
00:07:42.755  		
00:07:42.755  		'
00:07:42.755    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:42.755  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:42.755  		--rc genhtml_branch_coverage=1
00:07:42.755  		--rc genhtml_function_coverage=1
00:07:42.755  		--rc genhtml_legend=1
00:07:42.755  		--rc geninfo_all_blocks=1
00:07:42.755  		--rc geninfo_unexecuted_blocks=1
00:07:42.755  		
00:07:42.755  		'
00:07:42.755    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:07:42.755  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:42.755  		--rc genhtml_branch_coverage=1
00:07:42.755  		--rc genhtml_function_coverage=1
00:07:42.755  		--rc genhtml_legend=1
00:07:42.755  		--rc geninfo_all_blocks=1
00:07:42.755  		--rc geninfo_unexecuted_blocks=1
00:07:42.755  		
00:07:42.755  		'
00:07:42.755   03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:07:42.755     03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s
00:07:42.755    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:42.755    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:42.755    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:42.755    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:42.755    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:42.755    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:42.755    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:42.755    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:42.755    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:42.755     03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:42.755    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:07:42.755    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:07:42.755    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:42.755    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:42.755    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:42.755    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:42.755    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:42.755     03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob
00:07:42.755     03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:42.755     03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:42.755     03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:42.755      03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:42.755      03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:42.755      03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:42.755      03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH
00:07:42.755      03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:42.755    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0
00:07:42.755    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:42.755    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:42.755    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:42.755    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:42.755    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:42.755    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:07:42.755  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:07:42.755    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:07:42.755    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:07:42.755    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0
00:07:42.755   03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64
00:07:42.755   03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:07:42.755   03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit
00:07:42.755   03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:07:42.755   03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:07:42.755   03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs
00:07:42.755   03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no
00:07:42.755   03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns
00:07:42.755   03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:42.755   03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:42.755    03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:42.755   03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:07:42.755   03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:07:42.755   03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable
00:07:42.755   03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=()
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=()
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=()
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=()
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=()
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=()
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=()
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:07:45.296  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:07:45.296  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:07:45.296   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]]
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:07:45.297  Found net devices under 0000:0a:00.0: cvl_0_0
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]]
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:07:45.297  Found net devices under 0000:0a:00.1: cvl_0_1
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:07:45.297  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:45.297  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms
00:07:45.297  
00:07:45.297  --- 10.0.0.2 ping statistics ---
00:07:45.297  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:45.297  rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:45.297  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:45.297  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms
00:07:45.297  
00:07:45.297  --- 10.0.0.1 ping statistics ---
00:07:45.297  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:45.297  rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms
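The nvmf_tcp_init trace above moves the target-side NIC into its own network namespace and verifies connectivity in both directions with ping. A minimal sketch of that topology, with interface names and addresses taken from this run (requires root and the physical cvl_* interfaces, so the commands are wrapped in a function rather than executed here):

```shell
#!/usr/bin/env bash
# Sketch of the namespace topology built by nvmf_tcp_init in the trace above.
# Interface names (cvl_0_0/cvl_0_1) and the 10.0.0.0/24 addresses come from
# this particular run; adjust for other hosts.
setup_nvmf_netns() {
    local ns=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0                  # clear any stale addresses
    ip -4 addr flush cvl_0_1
    ip netns add "$ns"                        # target side gets its own namespace
    ip link set cvl_0_0 netns "$ns"
    ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator side (default namespace)
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec "$ns" ip link set cvl_0_0 up
    ip netns exec "$ns" ip link set lo up
    # verify both directions before starting the target, as the log does
    ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1
}
# Not invoked here: needs root plus the physical cvl_* interfaces.
```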
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=142545
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 142545
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 142545 ']'
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:45.297  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:45.297   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:07:45.297  [2024-12-09 03:58:13.715029] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:07:45.297  [2024-12-09 03:58:13.715113] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:45.297  [2024-12-09 03:58:13.787552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:45.297  [2024-12-09 03:58:13.850008] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:45.297  [2024-12-09 03:58:13.850073] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:45.297  [2024-12-09 03:58:13.850087] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:45.297  [2024-12-09 03:58:13.850098] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:45.297  [2024-12-09 03:58:13.850111] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:45.297  [2024-12-09 03:58:13.851766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:45.297  [2024-12-09 03:58:13.855291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:45.297  [2024-12-09 03:58:13.855370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:07:45.297  [2024-12-09 03:58:13.859302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:45.557   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:45.557   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0
00:07:45.557   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:07:45.557   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:45.557   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:07:45.557   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:45.557   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1
00:07:45.557   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:45.557   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:07:45.557   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:45.557   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init
00:07:45.557   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:45.557   03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:07:45.557   03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:45.557   03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:07:45.557   03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:45.557   03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:07:45.557  [2024-12-09 03:58:14.069481] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:45.557   03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:45.557   03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:07:45.557   03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:45.557   03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:07:45.557  Malloc0
00:07:45.557   03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:45.557   03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:07:45.557   03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:45.557   03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:07:45.557   03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:45.557   03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:07:45.557   03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:45.557   03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:07:45.557   03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:45.557   03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:07:45.557   03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:45.557   03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:07:45.557  [2024-12-09 03:58:14.122797] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
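The rpc_cmd calls traced above (bdev_io_wait.sh lines 18-25) configure the --wait-for-rpc target before the bdevperf jobs start. The same sequence as a standalone sketch; the scripts/rpc.py path is an assumption about a local SPDK checkout, and the commands themselves are copied from this trace:

```shell
#!/usr/bin/env bash
# RPC sequence driven against the nvmf_tgt started with --wait-for-rpc
# (commands taken from the bdev_io_wait.sh trace above).
rpc=scripts/rpc.py   # assumed path inside an SPDK checkout
nvmf_target_setup() {
    "$rpc" bdev_set_options -p 5 -c 1      # tiny bdev_io pool: forces the IO-wait path under test
    "$rpc" framework_start_init            # leave the --wait-for-rpc holding state
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" bdev_malloc_create 64 512 -b Malloc0   # 64 MiB ramdisk, 512 B blocks
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
}
# Not invoked here: requires a running nvmf_tgt listening on /var/tmp/spdk.sock.
```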
00:07:45.557   03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:45.557   03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=142577
00:07:45.557   03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=142578
00:07:45.557    03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json
00:07:45.557   03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256
00:07:45.557    03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=()
00:07:45.557    03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config
00:07:45.557   03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256
00:07:45.557    03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json
00:07:45.557   03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=142581
00:07:45.557    03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:07:45.557    03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:07:45.557  {
00:07:45.557    "params": {
00:07:45.557      "name": "Nvme$subsystem",
00:07:45.557      "trtype": "$TEST_TRANSPORT",
00:07:45.557      "traddr": "$NVMF_FIRST_TARGET_IP",
00:07:45.557      "adrfam": "ipv4",
00:07:45.557      "trsvcid": "$NVMF_PORT",
00:07:45.557      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:07:45.557      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:07:45.557      "hdgst": ${hdgst:-false},
00:07:45.557      "ddgst": ${ddgst:-false}
00:07:45.557    },
00:07:45.557    "method": "bdev_nvme_attach_controller"
00:07:45.557  }
00:07:45.557  EOF
00:07:45.557  )")
00:07:45.557    03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=()
00:07:45.557    03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config
00:07:45.557    03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:07:45.557    03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:07:45.557  {
00:07:45.557    "params": {
00:07:45.557      "name": "Nvme$subsystem",
00:07:45.557      "trtype": "$TEST_TRANSPORT",
00:07:45.557      "traddr": "$NVMF_FIRST_TARGET_IP",
00:07:45.557      "adrfam": "ipv4",
00:07:45.557      "trsvcid": "$NVMF_PORT",
00:07:45.557      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:07:45.557      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:07:45.557      "hdgst": ${hdgst:-false},
00:07:45.557      "ddgst": ${ddgst:-false}
00:07:45.557    },
00:07:45.557    "method": "bdev_nvme_attach_controller"
00:07:45.557  }
00:07:45.557  EOF
00:07:45.557  )")
00:07:45.557   03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256
00:07:45.557    03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json
00:07:45.557   03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=142583
00:07:45.557    03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=()
00:07:45.557   03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync
00:07:45.557    03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config
00:07:45.557    03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:07:45.557     03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat
00:07:45.557    03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:07:45.557  {
00:07:45.557    "params": {
00:07:45.557      "name": "Nvme$subsystem",
00:07:45.557      "trtype": "$TEST_TRANSPORT",
00:07:45.557      "traddr": "$NVMF_FIRST_TARGET_IP",
00:07:45.557      "adrfam": "ipv4",
00:07:45.557      "trsvcid": "$NVMF_PORT",
00:07:45.557      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:07:45.557      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:07:45.557      "hdgst": ${hdgst:-false},
00:07:45.557      "ddgst": ${ddgst:-false}
00:07:45.557    },
00:07:45.557    "method": "bdev_nvme_attach_controller"
00:07:45.557  }
00:07:45.557  EOF
00:07:45.557  )")
00:07:45.557   03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256
00:07:45.557    03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json
00:07:45.557     03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat
00:07:45.557    03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=()
00:07:45.557    03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config
00:07:45.558    03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:07:45.558    03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:07:45.558  {
00:07:45.558    "params": {
00:07:45.558      "name": "Nvme$subsystem",
00:07:45.558      "trtype": "$TEST_TRANSPORT",
00:07:45.558      "traddr": "$NVMF_FIRST_TARGET_IP",
00:07:45.558      "adrfam": "ipv4",
00:07:45.558      "trsvcid": "$NVMF_PORT",
00:07:45.558      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:07:45.558      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:07:45.558      "hdgst": ${hdgst:-false},
00:07:45.558      "ddgst": ${ddgst:-false}
00:07:45.558    },
00:07:45.558    "method": "bdev_nvme_attach_controller"
00:07:45.558  }
00:07:45.558  EOF
00:07:45.558  )")
00:07:45.558     03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat
00:07:45.558   03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 142577
00:07:45.817     03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat
00:07:45.817    03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq .
00:07:45.817    03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq .
00:07:45.817    03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq .
00:07:45.817     03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=,
00:07:45.817     03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:07:45.817    "params": {
00:07:45.817      "name": "Nvme1",
00:07:45.817      "trtype": "tcp",
00:07:45.817      "traddr": "10.0.0.2",
00:07:45.817      "adrfam": "ipv4",
00:07:45.817      "trsvcid": "4420",
00:07:45.817      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:07:45.817      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:07:45.817      "hdgst": false,
00:07:45.817      "ddgst": false
00:07:45.817    },
00:07:45.817    "method": "bdev_nvme_attach_controller"
00:07:45.817  }'
00:07:45.817    03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq .
00:07:45.817     03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=,
00:07:45.817     03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:07:45.817    "params": {
00:07:45.817      "name": "Nvme1",
00:07:45.817      "trtype": "tcp",
00:07:45.817      "traddr": "10.0.0.2",
00:07:45.817      "adrfam": "ipv4",
00:07:45.817      "trsvcid": "4420",
00:07:45.817      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:07:45.817      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:07:45.817      "hdgst": false,
00:07:45.817      "ddgst": false
00:07:45.817    },
00:07:45.817    "method": "bdev_nvme_attach_controller"
00:07:45.817  }'
00:07:45.817     03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=,
00:07:45.817     03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:07:45.817    "params": {
00:07:45.817      "name": "Nvme1",
00:07:45.817      "trtype": "tcp",
00:07:45.817      "traddr": "10.0.0.2",
00:07:45.817      "adrfam": "ipv4",
00:07:45.817      "trsvcid": "4420",
00:07:45.817      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:07:45.817      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:07:45.817      "hdgst": false,
00:07:45.817      "ddgst": false
00:07:45.817    },
00:07:45.817    "method": "bdev_nvme_attach_controller"
00:07:45.817  }'
00:07:45.817     03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=,
00:07:45.817     03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:07:45.817    "params": {
00:07:45.817      "name": "Nvme1",
00:07:45.817      "trtype": "tcp",
00:07:45.817      "traddr": "10.0.0.2",
00:07:45.817      "adrfam": "ipv4",
00:07:45.817      "trsvcid": "4420",
00:07:45.817      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:07:45.817      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:07:45.817      "hdgst": false,
00:07:45.817      "ddgst": false
00:07:45.817    },
00:07:45.817    "method": "bdev_nvme_attach_controller"
00:07:45.817  }'
00:07:45.817  [2024-12-09 03:58:14.174513] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:07:45.817  [2024-12-09 03:58:14.174623] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:07:45.817  [2024-12-09 03:58:14.174696] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:07:45.817  [2024-12-09 03:58:14.174696] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:07:45.817  [2024-12-09 03:58:14.174696] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:07:45.817  [2024-12-09 03:58:14.174771] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:07:45.817  [2024-12-09 03:58:14.174771] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:07:45.817  [2024-12-09 03:58:14.174772] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:07:45.817  [2024-12-09 03:58:14.358909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:46.077  [2024-12-09 03:58:14.412423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:07:46.077  [2024-12-09 03:58:14.457389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:46.077  [2024-12-09 03:58:14.513597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:07:46.077  [2024-12-09 03:58:14.561645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:46.077  [2024-12-09 03:58:14.618185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:07:46.077  [2024-12-09 03:58:14.633890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:46.337  [2024-12-09 03:58:14.684597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:07:46.337  Running I/O for 1 seconds...
00:07:46.337  Running I/O for 1 seconds...
00:07:46.337  Running I/O for 1 seconds...
00:07:46.337  Running I/O for 1 seconds...
00:07:47.274       5883.00 IOPS,    22.98 MiB/s
00:07:47.274                                                                                                  Latency(us)
00:07:47.274  
[2024-12-09T02:58:15.850Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:07:47.274  Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:07:47.274  	 Nvme1n1             :       1.02    5894.95      23.03       0.00     0.00   21569.96    7281.78   31457.28
00:07:47.274  
[2024-12-09T02:58:15.850Z]  ===================================================================================================================
00:07:47.274  
[2024-12-09T02:58:15.850Z]  Total                       :               5894.95      23.03       0.00     0.00   21569.96    7281.78   31457.28
00:07:47.274     185784.00 IOPS,   725.72 MiB/s
00:07:47.274                                                                                                  Latency(us)
00:07:47.274  
[2024-12-09T02:58:15.850Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:07:47.274  Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:07:47.274  	 Nvme1n1             :       1.00  185437.44     724.37       0.00     0.00     686.51     288.24    1844.72
00:07:47.274  
[2024-12-09T02:58:15.850Z]  ===================================================================================================================
00:07:47.274  
[2024-12-09T02:58:15.850Z]  Total                       :             185437.44     724.37       0.00     0.00     686.51     288.24    1844.72
00:07:47.533       5864.00 IOPS,    22.91 MiB/s
00:07:47.533                                                                                                  Latency(us)
00:07:47.533  
[2024-12-09T02:58:16.109Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:07:47.533  Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:07:47.533  	 Nvme1n1             :       1.01    5971.26      23.33       0.00     0.00   21370.33    4466.16   44467.39
00:07:47.533  
[2024-12-09T02:58:16.109Z]  ===================================================================================================================
00:07:47.533  
[2024-12-09T02:58:16.109Z]  Total                       :               5971.26      23.33       0.00     0.00   21370.33    4466.16   44467.39
00:07:47.533       8137.00 IOPS,    31.79 MiB/s
00:07:47.533                                                                                                  Latency(us)
00:07:47.533  
[2024-12-09T02:58:16.109Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:07:47.533  Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:07:47.533  	 Nvme1n1             :       1.01    8187.47      31.98       0.00     0.00   15552.06    8155.59   25437.68
00:07:47.533  
[2024-12-09T02:58:16.109Z]  ===================================================================================================================
00:07:47.533  
[2024-12-09T02:58:16.109Z]  Total                       :               8187.47      31.98       0.00     0.00   15552.06    8155.59   25437.68
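The MiB/s column in the result tables above follows directly from the IOPS column and the 4 KiB IO size every job uses (-o 4096). Reproducing the write job's figure:

```shell
#!/usr/bin/env bash
# Derive MiB/s from IOPS for a 4096-byte IO size, as in the tables above.
iops=5894.95   # write job's reported IOPS
mibs=$(awk -v i="$iops" 'BEGIN { printf "%.2f", i * 4096 / (1024 * 1024) }')
echo "$mibs"   # → 23.03, matching the write job's MiB/s column
```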
00:07:47.533   03:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 142578
00:07:47.533   03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 142581
00:07:47.533   03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 142583
00:07:47.533   03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:07:47.533   03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:47.533   03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:07:47.533   03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:47.533   03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:07:47.533   03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:07:47.533   03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:47.533   03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync
00:07:47.533   03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:47.533   03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e
00:07:47.533   03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:47.533   03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:47.533  rmmod nvme_tcp
00:07:47.533  rmmod nvme_fabrics
00:07:47.792  rmmod nvme_keyring
00:07:47.792   03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:47.792   03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e
00:07:47.792   03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0
00:07:47.792   03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 142545 ']'
00:07:47.792   03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 142545
00:07:47.792   03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 142545 ']'
00:07:47.792   03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 142545
00:07:47.792    03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname
00:07:47.792   03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:47.792    03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 142545
00:07:47.792   03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:47.792   03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:47.792   03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 142545'
00:07:47.792  killing process with pid 142545
00:07:47.792   03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 142545
00:07:47.792   03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 142545
00:07:48.052   03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:07:48.052   03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:07:48.052   03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:07:48.052   03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr
00:07:48.052   03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save
00:07:48.052   03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:07:48.052   03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore
00:07:48.052   03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:07:48.052   03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns
00:07:48.052   03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:48.052   03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:48.052    03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:49.961   03:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:07:49.961  
00:07:49.961  real	0m7.341s
00:07:49.961  user	0m15.989s
00:07:49.961  sys	0m3.518s
00:07:49.961   03:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:49.961   03:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:07:49.961  ************************************
00:07:49.961  END TEST nvmf_bdev_io_wait
00:07:49.961  ************************************
00:07:49.961   03:58:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp
00:07:49.961   03:58:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:49.961   03:58:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:49.961   03:58:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:49.961  ************************************
00:07:49.961  START TEST nvmf_queue_depth
00:07:49.961  ************************************
00:07:49.961   03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp
00:07:49.961  * Looking for test storage...
00:07:49.961  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:07:49.961    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:49.961     03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version
00:07:49.961     03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:50.220    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:50.220    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:50.220    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:50.220    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:50.220    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-:
00:07:50.220    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1
00:07:50.220    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-:
00:07:50.220    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2
00:07:50.220    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<'
00:07:50.220    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2
00:07:50.220    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1
00:07:50.220    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:50.220    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in
00:07:50.220    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1
00:07:50.220    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:50.220    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:50.220     03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1
00:07:50.220     03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1
00:07:50.220     03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:50.220     03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1
00:07:50.220    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1
00:07:50.220     03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2
00:07:50.220     03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2
00:07:50.220     03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:50.220     03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2
00:07:50.221    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2
00:07:50.221    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:50.221    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:50.221    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0
00:07:50.221    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:50.221    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:50.221  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:50.221  		--rc genhtml_branch_coverage=1
00:07:50.221  		--rc genhtml_function_coverage=1
00:07:50.221  		--rc genhtml_legend=1
00:07:50.221  		--rc geninfo_all_blocks=1
00:07:50.221  		--rc geninfo_unexecuted_blocks=1
00:07:50.221  		
00:07:50.221  		'
00:07:50.221    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:50.221  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:50.221  		--rc genhtml_branch_coverage=1
00:07:50.221  		--rc genhtml_function_coverage=1
00:07:50.221  		--rc genhtml_legend=1
00:07:50.221  		--rc geninfo_all_blocks=1
00:07:50.221  		--rc geninfo_unexecuted_blocks=1
00:07:50.221  		
00:07:50.221  		'
00:07:50.221    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:50.221  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:50.221  		--rc genhtml_branch_coverage=1
00:07:50.221  		--rc genhtml_function_coverage=1
00:07:50.221  		--rc genhtml_legend=1
00:07:50.221  		--rc geninfo_all_blocks=1
00:07:50.221  		--rc geninfo_unexecuted_blocks=1
00:07:50.221  		
00:07:50.221  		'
00:07:50.221    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:07:50.221  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:50.221  		--rc genhtml_branch_coverage=1
00:07:50.221  		--rc genhtml_function_coverage=1
00:07:50.221  		--rc genhtml_legend=1
00:07:50.221  		--rc geninfo_all_blocks=1
00:07:50.221  		--rc geninfo_unexecuted_blocks=1
00:07:50.221  		
00:07:50.221  		'
00:07:50.221   03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:07:50.221     03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s
00:07:50.221    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:50.221    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:50.221    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:50.221    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:50.221    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:50.221    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:50.221    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:50.221    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:50.221    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:50.221     03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:50.221    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:07:50.221    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:07:50.221    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:50.221    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:50.221    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:50.221    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:50.221    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:50.221     03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob
00:07:50.221     03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:50.221     03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:50.221     03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:50.221      03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:50.221      03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:50.221      03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:50.221      03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH
00:07:50.221      03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:50.221    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0
00:07:50.221    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:50.221    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:50.221    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:50.221    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:50.221    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:50.221    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:07:50.221  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:07:50.221    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:07:50.221    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:07:50.221    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0
00:07:50.221   03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64
00:07:50.221   03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512
00:07:50.221   03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:07:50.221   03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit
00:07:50.221   03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:07:50.221   03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:07:50.221   03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs
00:07:50.221   03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no
00:07:50.221   03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns
00:07:50.221   03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:50.221   03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:50.221    03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:50.221   03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:07:50.221   03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:07:50.221   03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable
00:07:50.221   03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=()
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=()
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=()
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=()
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=()
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=()
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=()
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:07:52.753  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:07:52.753  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]]
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:07:52.753   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:07:52.754  Found net devices under 0000:0a:00.0: cvl_0_0
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]]
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:07:52.754  Found net devices under 0000:0a:00.1: cvl_0_1
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:07:52.754  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:52.754  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms
00:07:52.754  
00:07:52.754  --- 10.0.0.2 ping statistics ---
00:07:52.754  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:52.754  rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:52.754  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:52.754  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms
00:07:52.754  
00:07:52.754  --- 10.0.0.1 ping statistics ---
00:07:52.754  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:52.754  rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms
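The trace above shows nvmf/common.sh building the test network: one interface (cvl_0_0) is moved into a private namespace, 10.0.0.1/24 and 10.0.0.2/24 are assigned on either side, links are brought up, and reachability is verified with ping in both directions. A minimal dry-run sketch of that sequence, assuming the interface and namespace names from this log (the helper name `setup_test_ns` is hypothetical; running the commands for real requires root, so `DRY_RUN=1` only prints them):

```shell
: "${DRY_RUN:=1}"

# Print each command; execute it only when DRY_RUN is unset/0.
run() {
    echo "+ $*"
    [ "$DRY_RUN" = "1" ] || "$@"
}

# Hypothetical helper mirroring the nvmf/common.sh steps traced above.
setup_test_ns() {
    ns=$1 ns_if=$2 host_if=$3
    run ip netns add "$ns"
    run ip link set "$ns_if" netns "$ns"
    run ip addr add 10.0.0.1/24 dev "$host_if"
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$ns_if"
    run ip link set "$host_if" up
    run ip netns exec "$ns" ip link set "$ns_if" up
    run ip netns exec "$ns" ip link set lo up
}

setup_test_ns cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

After this, the host side pings 10.0.0.2 and the namespace side pings 10.0.0.1, exactly as the log shows.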
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=144806
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 144806
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 144806 ']'
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:52.754  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:52.754   03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:07:52.754  [2024-12-09 03:58:21.019736] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:07:52.754  [2024-12-09 03:58:21.019820] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:52.754  [2024-12-09 03:58:21.095053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:52.754  [2024-12-09 03:58:21.149944] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:52.754  [2024-12-09 03:58:21.150000] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:52.754  [2024-12-09 03:58:21.150028] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:52.754  [2024-12-09 03:58:21.150039] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:52.754  [2024-12-09 03:58:21.150049] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:52.754  [2024-12-09 03:58:21.150703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:52.754   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:52.754   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0
00:07:52.754   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:07:52.754   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:52.754   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:07:52.754   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:52.754   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:07:52.754   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:52.754   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:07:52.754  [2024-12-09 03:58:21.293036] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:52.754   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:52.754   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:07:52.754   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:52.754   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:07:52.754  Malloc0
00:07:52.754   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:52.754   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:07:52.754   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:52.754   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:07:52.754   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:52.754   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:07:52.754   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:52.754   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:07:53.014   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:53.014   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:07:53.014   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:53.014   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:07:53.014  [2024-12-09 03:58:21.339937] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:53.014   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:53.014   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=144897
00:07:53.014   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10
00:07:53.014   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:07:53.014   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 144897 /var/tmp/bdevperf.sock
00:07:53.014   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 144897 ']'
00:07:53.014   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:07:53.014   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:53.014   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:07:53.014  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:07:53.014   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:53.014   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:07:53.014  [2024-12-09 03:58:21.386121] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:07:53.014  [2024-12-09 03:58:21.386199] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144897 ]
00:07:53.014  [2024-12-09 03:58:21.452395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:53.014  [2024-12-09 03:58:21.508981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:53.272   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:53.272   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0
00:07:53.272   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:07:53.272   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:53.272   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:07:53.531  NVMe0n1
00:07:53.531   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:53.531   03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:07:53.531  Running I/O for 10 seconds...
00:07:55.841       8192.00 IOPS,    32.00 MiB/s
[2024-12-09T02:58:25.361Z]      8496.50 IOPS,    33.19 MiB/s
[2024-12-09T02:58:26.299Z]      8533.33 IOPS,    33.33 MiB/s
[2024-12-09T02:58:27.233Z]      8687.75 IOPS,    33.94 MiB/s
[2024-12-09T02:58:28.165Z]      8667.80 IOPS,    33.86 MiB/s
[2024-12-09T02:58:29.100Z]      8701.67 IOPS,    33.99 MiB/s
[2024-12-09T02:58:30.478Z]      8759.43 IOPS,    34.22 MiB/s
[2024-12-09T02:58:31.415Z]      8792.00 IOPS,    34.34 MiB/s
[2024-12-09T02:58:32.352Z]      8788.89 IOPS,    34.33 MiB/s
[2024-12-09T02:58:32.352Z]      8800.70 IOPS,    34.38 MiB/s
00:08:03.776                                                                                                  Latency(us)
00:08:03.776  
[2024-12-09T02:58:32.352Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:08:03.776  Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:08:03.776  	 Verification LBA range: start 0x0 length 0x4000
00:08:03.776  	 NVMe0n1             :      10.07    8845.46      34.55       0.00     0.00  115299.80   10534.31   71846.87
00:08:03.776  
[2024-12-09T02:58:32.352Z]  ===================================================================================================================
00:08:03.776  
[2024-12-09T02:58:32.352Z]  Total                       :               8845.46      34.55       0.00     0.00  115299.80   10534.31   71846.87
00:08:03.776  {
00:08:03.776    "results": [
00:08:03.776      {
00:08:03.776        "job": "NVMe0n1",
00:08:03.776        "core_mask": "0x1",
00:08:03.776        "workload": "verify",
00:08:03.776        "status": "finished",
00:08:03.776        "verify_range": {
00:08:03.776          "start": 0,
00:08:03.776          "length": 16384
00:08:03.776        },
00:08:03.776        "queue_depth": 1024,
00:08:03.776        "io_size": 4096,
00:08:03.776        "runtime": 10.065162,
00:08:03.776        "iops": 8845.461205691474,
00:08:03.776        "mibps": 34.55258283473232,
00:08:03.776        "io_failed": 0,
00:08:03.776        "io_timeout": 0,
00:08:03.776        "avg_latency_us": 115299.79550686674,
00:08:03.776        "min_latency_us": 10534.305185185185,
00:08:03.776        "max_latency_us": 71846.87407407408
00:08:03.776      }
00:08:03.776    ],
00:08:03.776    "core_count": 1
00:08:03.776  }
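The JSON summary above is bdevperf's machine-readable result; its `mibps` field is derived from `iops` and `io_size`. A small awk check of that relationship, using the values from this run (MiB/s = IOPS x I/O size / 2^20):

```shell
# Recompute the throughput figure from the JSON fields printed above.
iops=8845.461205691474
io_size=4096

awk -v iops="$iops" -v sz="$io_size" \
    'BEGIN { printf "%.2f MiB/s\n", iops * sz / (1024 * 1024) }'
```

This reproduces the 34.55 MiB/s reported in both the table and the `mibps` JSON field.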
00:08:03.776   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 144897
00:08:03.776   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 144897 ']'
00:08:03.776   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 144897
00:08:03.777    03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname
00:08:03.777   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:03.777    03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 144897
00:08:03.777   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:03.777   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:03.777   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 144897'
00:08:03.777  killing process with pid 144897
00:08:03.777   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 144897
00:08:03.777  Received shutdown signal, test time was about 10.000000 seconds
00:08:03.777  
00:08:03.777                                                                                                  Latency(us)
00:08:03.777  
[2024-12-09T02:58:32.353Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:08:03.777  
[2024-12-09T02:58:32.353Z]  ===================================================================================================================
00:08:03.777  
[2024-12-09T02:58:32.353Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:08:03.777   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 144897
00:08:04.035   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:08:04.035   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:08:04.035   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:04.035   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync
00:08:04.035   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:04.035   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e
00:08:04.035   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:04.035   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:04.035  rmmod nvme_tcp
00:08:04.035  rmmod nvme_fabrics
00:08:04.035  rmmod nvme_keyring
00:08:04.035   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:04.035   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e
00:08:04.035   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0
00:08:04.035   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 144806 ']'
00:08:04.035   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 144806
00:08:04.035   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 144806 ']'
00:08:04.035   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 144806
00:08:04.035    03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname
00:08:04.035   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:04.036    03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 144806
00:08:04.036   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:08:04.036   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:08:04.036   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 144806'
00:08:04.036  killing process with pid 144806
00:08:04.036   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 144806
00:08:04.036   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 144806
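The `killprocess` sequence traced above (for both the bdevperf pid and the nvmf target pid) follows one pattern: probe the pid with `kill -0`, refuse to signal privileged helpers, then `kill` and reap with `wait`. A simplified sketch of that pattern, assuming the details of the real autotest_common.sh helper (this version drops the `ps`/sudo check and uses a background `sleep` as a stand-in for the SPDK process):

```shell
# Simplified killprocess: verify the pid is alive, terminate it, reap it.
killprocess() {
    pid=$1
    kill -0 "$pid" 2>/dev/null || return 1   # not running
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                  # reap so the pid is not left as a zombie
    return 0
}

sleep 30 &               # stand-in for the SPDK target process
bg_pid=$!
killprocess "$bg_pid" && echo "reaped $bg_pid"
```

The `wait` matters: without it the test harness could observe a zombie and `kill -0` would still succeed, which is why the real helper waits on the pid after signaling it.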
00:08:04.295   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:08:04.295   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:08:04.295   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:08:04.295   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr
00:08:04.295   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save
00:08:04.295   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:08:04.295   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore
00:08:04.295   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:08:04.295   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns
00:08:04.295   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:04.296   03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:04.296    03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:06.841   03:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:08:06.841  
00:08:06.841  real	0m16.310s
00:08:06.841  user	0m22.930s
00:08:06.841  sys	0m3.166s
00:08:06.841   03:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:06.841   03:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:06.841  ************************************
00:08:06.841  END TEST nvmf_queue_depth
00:08:06.841  ************************************
00:08:06.841   03:58:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp
00:08:06.841   03:58:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:06.841   03:58:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:06.841   03:58:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:06.841  ************************************
00:08:06.841  START TEST nvmf_target_multipath
00:08:06.841  ************************************
00:08:06.841   03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp
00:08:06.841  * Looking for test storage...
00:08:06.841  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:06.841     03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version
00:08:06.841     03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-:
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-:
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<'
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:06.841     03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1
00:08:06.841     03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1
00:08:06.841     03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:06.841     03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1
00:08:06.841     03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2
00:08:06.841     03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2
00:08:06.841     03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:06.841     03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0
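The `lt 1.15 2` trace above walks scripts/common.sh's `cmp_versions`: both version strings are split on `.`, `-` and `:` into arrays, then compared component by component, with missing components treated as zero. A condensed sketch of that comparison, assuming purely numeric components (the real helper's `decimal` function also tolerates non-numeric parts):

```shell
# Component-wise "less than" for dotted version strings, as traced above.
version_lt() {
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        a=${v1[i]:-0} b=${v2[i]:-0}      # pad the shorter version with zeros
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
```

This reproduces the log's result: 1 < 2 on the first component, so `lt 1.15 2` succeeds and the lcov branch-coverage options are enabled.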
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:06.841  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:06.841  		--rc genhtml_branch_coverage=1
00:08:06.841  		--rc genhtml_function_coverage=1
00:08:06.841  		--rc genhtml_legend=1
00:08:06.841  		--rc geninfo_all_blocks=1
00:08:06.841  		--rc geninfo_unexecuted_blocks=1
00:08:06.841  		
00:08:06.841  		'
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:06.841  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:06.841  		--rc genhtml_branch_coverage=1
00:08:06.841  		--rc genhtml_function_coverage=1
00:08:06.841  		--rc genhtml_legend=1
00:08:06.841  		--rc geninfo_all_blocks=1
00:08:06.841  		--rc geninfo_unexecuted_blocks=1
00:08:06.841  		
00:08:06.841  		'
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:08:06.841  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:06.841  		--rc genhtml_branch_coverage=1
00:08:06.841  		--rc genhtml_function_coverage=1
00:08:06.841  		--rc genhtml_legend=1
00:08:06.841  		--rc geninfo_all_blocks=1
00:08:06.841  		--rc geninfo_unexecuted_blocks=1
00:08:06.841  		
00:08:06.841  		'
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:08:06.841  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:06.841  		--rc genhtml_branch_coverage=1
00:08:06.841  		--rc genhtml_function_coverage=1
00:08:06.841  		--rc genhtml_legend=1
00:08:06.841  		--rc geninfo_all_blocks=1
00:08:06.841  		--rc geninfo_unexecuted_blocks=1
00:08:06.841  		
00:08:06.841  		'
00:08:06.841   03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:08:06.841     03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:08:06.841     03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:08:06.841    03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:08:06.841     03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob
00:08:06.841     03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:06.841     03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:06.841     03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:06.841      03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:06.841      03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:06.842      03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:06.842      03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH
00:08:06.842      03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:06.842    03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0
00:08:06.842    03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:08:06.842    03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:08:06.842    03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:08:06.842    03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:08:06.842    03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:08:06.842    03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:08:06.842  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:08:06.842    03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:08:06.842    03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:08:06.842    03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0
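The "integer expression expected" error above comes from common.sh line 33 testing an empty string with a numeric `[` comparison (`'[' '' -eq 1 ']'`). The variable's real name is not visible in the trace, so the sketch below uses a hypothetical `FLAG` to reproduce the failure mode and show the usual guard:

```shell
# FLAG stands in for whatever variable common.sh line 33 expands; its real
# name is not recoverable from the trace above.
FLAG=""

# Mirrors the failing check: with FLAG empty, '[' sees '' where it expects
# an integer and reports "integer expression expected" (status 2, i.e. false).
if [ "$FLAG" -eq 1 ] 2>/dev/null; then
  echo "flag set"
fi

# Defaulting the expansion keeps the operand numeric even when unset/empty.
if [ "${FLAG:-0}" -eq 1 ]; then
  echo "flag set"
else
  echo "flag not set"
fi
```

As the trace shows, the script tolerates the error because the test simply evaluates false and execution continues.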
00:08:06.842   03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64
00:08:06.842   03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:08:06.842   03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1
00:08:06.842   03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:08:06.842   03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit
00:08:06.842   03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:08:06.842   03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:08:06.842   03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs
00:08:06.842   03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no
00:08:06.842   03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns
00:08:06.842   03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:06.842   03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:06.842    03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:06.842   03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:08:06.842   03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:08:06.842   03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable
00:08:06.842   03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:08:08.749   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:08:08.749   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=()
00:08:08.749   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs
00:08:08.749   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=()
00:08:08.749   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=()
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=()
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=()
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=()
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=()
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:08:08.750  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:08:08.750  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]]
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:08:08.750  Found net devices under 0000:0a:00.0: cvl_0_0
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]]
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:08:08.750  Found net devices under 0000:0a:00.1: cvl_0_1
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:08.750   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:08.751   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
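Note the `-m comment --comment 'SPDK_NVMF:...'` marker on the ACCEPT rule above: every firewall rule the harness adds is tagged so that teardown can strip only its own rules via `iptables-save | grep -v SPDK_NVMF | iptables-restore` (that pipeline appears later in the log under `iptr`). A sketch of the pattern, simulated on a saved-rules snippet rather than a live firewall:

```shell
# Hypothetical iptables-save output: one SPDK-tagged rule among pre-existing
# rules that must survive cleanup.
saved_rules='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:..."
-A INPUT -p icmp -j ACCEPT'

# Equivalent of: iptables-save | grep -v SPDK_NVMF | iptables-restore
# Only the SPDK-tagged rule is dropped; the other rules pass through intact.
printf '%s\n' "$saved_rules" | grep -v SPDK_NVMF
```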
00:08:08.751   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:08:08.751  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:08.751  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms
00:08:08.751  
00:08:08.751  --- 10.0.0.2 ping statistics ---
00:08:08.751  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:08.751  rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms
00:08:08.751   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:08.751  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:08.751  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms
00:08:08.751  
00:08:08.751  --- 10.0.0.1 ping statistics ---
00:08:08.751  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:08.751  rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms
00:08:08.751   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:08.751   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0
00:08:08.751   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:08.751   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:08.751   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:08:08.751   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:08:08.751   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:08.751   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:08:08.751   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp
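The `nvmf_tcp_init` sequence above builds the two-NIC loopback topology: the target NIC (cvl_0_0) is moved into namespace cvl_0_0_ns_spdk with 10.0.0.2, the initiator NIC (cvl_0_1) stays in the root namespace with 10.0.0.1, and both directions are verified with ping. A condensed sketch using the names from the log; `run` echoes instead of executing, since the real commands need root and the physical cvl_0_* interfaces:

```shell
# Dry-run helper: print each command rather than execute it.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"                                        # target side gets its own netns
run ip link set cvl_0_0 netns "$NS"                           # move target NIC into it
run ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator IP, root namespace
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, inside netns
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ping -c 1 10.0.0.2                                        # root ns -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1                    # target ns -> initiator
```

Isolating one NIC in a namespace forces traffic between the two ports onto the wire instead of the kernel's local loopback path, which is why the harness can exercise real TCP transport on a single host.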
00:08:09.011   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']'
00:08:09.011   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test'
00:08:09.011  only one NIC for nvmf test
00:08:09.011   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini
00:08:09.011   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:09.011   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:08:09.011   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:09.011   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
00:08:09.011   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:09.011   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:09.011  rmmod nvme_tcp
00:08:09.011  rmmod nvme_fabrics
00:08:09.011  rmmod nvme_keyring
00:08:09.011   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:09.011   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
00:08:09.011   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0
00:08:09.011   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:08:09.011   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:08:09.011   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:08:09.011   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:08:09.011   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr
00:08:09.011   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save
00:08:09.011   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:08:09.011   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:08:09.011   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:08:09.011   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns
00:08:09.011   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:09.011   03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:09.011    03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:10.920   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:08:10.920   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0
00:08:10.920   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini
00:08:10.920   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:10.920   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:08:10.920   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:10.920   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
00:08:10.920   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:10.920   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:10.920   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:10.920   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
00:08:10.920   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0
00:08:10.920   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:08:10.920   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:08:10.920   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:08:10.920   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:08:10.920   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr
00:08:10.920   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save
00:08:10.920   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:08:10.920   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:08:10.920   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:08:10.921   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns
00:08:10.921   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:10.921   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:10.921    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:10.921   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:08:10.921  
00:08:10.921  real	0m4.633s
00:08:10.921  user	0m0.979s
00:08:10.921  sys	0m1.656s
00:08:10.921   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:10.921   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:08:10.921  ************************************
00:08:10.921  END TEST nvmf_target_multipath
00:08:10.921  ************************************
00:08:11.180   03:58:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:08:11.180   03:58:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:11.180   03:58:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:11.180   03:58:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:11.180  ************************************
00:08:11.180  START TEST nvmf_zcopy
00:08:11.180  ************************************
00:08:11.180   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:08:11.180  * Looking for test storage...
00:08:11.180  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:08:11.180    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:11.180     03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version
00:08:11.180     03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:11.180    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:11.180    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:11.180    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:11.180    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:11.180    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-:
00:08:11.180    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1
00:08:11.180    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-:
00:08:11.180    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2
00:08:11.180    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<'
00:08:11.180    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2
00:08:11.180    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1
00:08:11.180    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:11.180    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in
00:08:11.180    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1
00:08:11.180    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:11.180    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:11.180     03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1
00:08:11.180     03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1
00:08:11.180     03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:11.180     03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1
00:08:11.180    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1
00:08:11.180     03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2
00:08:11.180     03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2
00:08:11.180     03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:11.180     03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2
00:08:11.180    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2
00:08:11.180    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:11.180    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:11.180    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0
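The `lt 1.15 2` trace above runs scripts/common.sh's `cmp_versions`: both version strings are split on `.`, `-`, and `:` and compared numerically field by field. A minimal re-sketch of that logic (the function name `ver_lt` is mine, and unlike the original this sketch does not re-validate that each field is numeric):

```shell
# Return 0 (true) when version $1 is strictly less than version $2,
# comparing dot/dash/colon-separated fields numerically, left to right.
ver_lt() {
  local IFS='.-:' v=0 max
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  while (( v < max )); do
    local x=${a[v]:-0} y=${b[v]:-0}   # missing fields compare as 0
    (( x < y )) && return 0
    (( x > y )) && return 1
    (( v++ ))
  done
  return 1   # equal versions are not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

This matches the trace's outcome: 1 < 2 decides the comparison in the first field, so `lt 1.15 2` succeeds and the lcov coverage options are exported.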
00:08:11.180    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:11.180    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:11.180  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:11.180  		--rc genhtml_branch_coverage=1
00:08:11.180  		--rc genhtml_function_coverage=1
00:08:11.180  		--rc genhtml_legend=1
00:08:11.180  		--rc geninfo_all_blocks=1
00:08:11.180  		--rc geninfo_unexecuted_blocks=1
00:08:11.180  		
00:08:11.180  		'
00:08:11.180    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:11.180  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:11.180  		--rc genhtml_branch_coverage=1
00:08:11.180  		--rc genhtml_function_coverage=1
00:08:11.180  		--rc genhtml_legend=1
00:08:11.180  		--rc geninfo_all_blocks=1
00:08:11.180  		--rc geninfo_unexecuted_blocks=1
00:08:11.180  		
00:08:11.180  		'
00:08:11.180    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:08:11.180  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:11.180  		--rc genhtml_branch_coverage=1
00:08:11.180  		--rc genhtml_function_coverage=1
00:08:11.180  		--rc genhtml_legend=1
00:08:11.180  		--rc geninfo_all_blocks=1
00:08:11.180  		--rc geninfo_unexecuted_blocks=1
00:08:11.180  		
00:08:11.180  		'
00:08:11.180    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:08:11.180  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:11.180  		--rc genhtml_branch_coverage=1
00:08:11.180  		--rc genhtml_function_coverage=1
00:08:11.180  		--rc genhtml_legend=1
00:08:11.180  		--rc geninfo_all_blocks=1
00:08:11.180  		--rc geninfo_unexecuted_blocks=1
00:08:11.180  		
00:08:11.180  		'
00:08:11.180   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:08:11.180     03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s
00:08:11.181    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:08:11.181    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:08:11.181    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:08:11.181    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:08:11.181    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:08:11.181    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:08:11.181    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:08:11.181    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:08:11.181    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:08:11.181     03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:08:11.181    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:08:11.181    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:08:11.181    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:08:11.181    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:08:11.181    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:08:11.181    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:08:11.181    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:08:11.181     03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob
00:08:11.181     03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:11.181     03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:11.181     03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:11.181      03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:11.181      03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:11.181      03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:11.181      03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH
00:08:11.181      03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:11.181    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0
00:08:11.181    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:08:11.181    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:08:11.181    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:08:11.181    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:08:11.181    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:08:11.181    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:08:11.181  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:08:11.181    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:08:11.181    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:08:11.181    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0
00:08:11.181   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit
00:08:11.181   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:08:11.181   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:08:11.181   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs
00:08:11.181   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no
00:08:11.181   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns
00:08:11.181   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:11.181   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:11.181    03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:11.181   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:08:11.181   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:08:11.181   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable
00:08:11.181   03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=()
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=()
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=()
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=()
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=()
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=()
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=()
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:08:13.721  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:08:13.721  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]]
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:08:13.721  Found net devices under 0000:0a:00.0: cvl_0_0
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]]
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:08:13.721  Found net devices under 0000:0a:00.1: cvl_0_1
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:08:13.721   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:08:13.722   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:08:13.722   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:08:13.722   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:08:13.722   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:08:13.722   03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:13.722   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:13.722   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:13.722   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
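The `ipts` helper expanded above wraps iptables so every rule the test inserts carries an `SPDK_NVMF:` comment recording the original arguments, which lets teardown later find and delete exactly the rules this run added. A sketch of that tagging pattern, with iptables swapped for echo so it runs unprivileged (the substitution is ours; the real helper calls iptables directly):

```shell
#!/usr/bin/env bash
# Tag-and-insert pattern from nvmf/common.sh's ipts(): append an
# "-m comment" carrying the original rule text so cleanup can match the
# SPDK_NVMF: marker and remove only rules this test created. iptables is
# replaced by echo here so the sketch needs no root.
iptables() { echo "iptables $*"; }

ipts() {
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```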
00:08:13.722   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:08:13.722  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:13.722  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms
00:08:13.722  
00:08:13.722  --- 10.0.0.2 ping statistics ---
00:08:13.722  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:13.722  rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms
00:08:13.722   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:13.722  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:13.722  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms
00:08:13.722  
00:08:13.722  --- 10.0.0.1 ping statistics ---
00:08:13.722  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:13.722  rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms
00:08:13.722   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:13.722   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0
00:08:13.722   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:13.722   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:13.722   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:08:13.722   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:08:13.722   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:13.722   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:08:13.722   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp
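The `nvmf_tcp_init` sequence above moves one port of the E810 pair (cvl_0_0) into a fresh network namespace for the target and leaves its sibling (cvl_0_1) in the root namespace for the initiator, putting 10.0.0.2 and 10.0.0.1 on the same /24 before the ping checks. A dry-run sketch of that plumbing — commands are echoed rather than executed, since the real ones need root and the cvl_* interfaces:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace plumbing from nvmf_tcp_init. Interface
# and namespace names match the log; "run" only echoes each command so
# the sequence is inspectable without root.
run() { echo "$*"; }

target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk

run ip -4 addr flush "$target_if"
run ip -4 addr flush "$initiator_if"
run ip netns add "$ns"
run ip link set "$target_if" netns "$ns"          # target side leaves the root ns
run ip addr add 10.0.0.1/24 dev "$initiator_if"   # initiator stays in the root ns
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
run ip link set "$initiator_if" up
run ip netns exec "$ns" ip link set "$target_if" up
run ip netns exec "$ns" ip link set lo up
```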
00:08:13.722   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:08:13.722   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:08:13.722   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:13.722   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:13.722   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=150163
00:08:13.722   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:08:13.722   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 150163
00:08:13.722   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 150163 ']'
00:08:13.722   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:13.722   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:13.722   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:13.722  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:13.722   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:13.722   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:13.722  [2024-12-09 03:58:42.215209] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:08:13.722  [2024-12-09 03:58:42.215341] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:13.722  [2024-12-09 03:58:42.287864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:13.981  [2024-12-09 03:58:42.348269] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:08:13.981  [2024-12-09 03:58:42.348355] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:13.981  [2024-12-09 03:58:42.348385] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:13.981  [2024-12-09 03:58:42.348397] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:13.981  [2024-12-09 03:58:42.348407] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:13.981  [2024-12-09 03:58:42.349017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:13.981   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:13.981   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0
00:08:13.981   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:08:13.981   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:13.981   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:13.981   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:13.981   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:08:13.981   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:08:13.981   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.981   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:13.981  [2024-12-09 03:58:42.498483] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:13.981   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.981   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:08:13.981   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.981   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:13.981   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.981   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:08:13.981   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.981   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:13.981  [2024-12-09 03:58:42.514693] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:08:13.981   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.981   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:08:13.981   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.981   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:13.981   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.981   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:08:13.981   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.981   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:13.981  malloc0
00:08:13.981   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.981   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:08:13.981   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.981   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:13.981   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
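The zcopy.sh provisioning above boils down to a handful of RPCs against the freshly started target: create a TCP transport with zero-copy enabled, create subsystem cnode1, add its data and discovery listeners on 10.0.0.2:4420, then back it with a 32 MiB malloc bdev. A sketch of that sequence with `rpc_cmd` stubbed to echo so it runs without a live target (the real helper dispatches to scripts/rpc.py over /var/tmp/spdk.sock):

```shell
#!/usr/bin/env bash
# RPC sequence from target/zcopy.sh, with rpc_cmd stubbed to echo so the
# sketch runs standalone. The real rpc_cmd forwards each call to
# scripts/rpc.py on the target's /var/tmp/spdk.sock.
rpc_cmd() { echo "rpc.py $*"; }

rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_malloc_create 32 4096 -b malloc0    # 32 MiB, 4 KiB blocks
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
```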
00:08:13.981   03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:08:13.981    03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:08:13.981    03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:08:13.981    03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:08:13.981    03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:08:13.981    03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:08:13.981  {
00:08:13.981    "params": {
00:08:13.981      "name": "Nvme$subsystem",
00:08:13.981      "trtype": "$TEST_TRANSPORT",
00:08:13.981      "traddr": "$NVMF_FIRST_TARGET_IP",
00:08:13.981      "adrfam": "ipv4",
00:08:13.981      "trsvcid": "$NVMF_PORT",
00:08:13.981      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:08:13.981      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:08:13.981      "hdgst": ${hdgst:-false},
00:08:13.981      "ddgst": ${ddgst:-false}
00:08:13.981    },
00:08:13.981    "method": "bdev_nvme_attach_controller"
00:08:13.981  }
00:08:13.981  EOF
00:08:13.981  )")
00:08:13.981     03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:08:13.982    03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:08:13.982     03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:08:14.241     03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:08:14.241    "params": {
00:08:14.241      "name": "Nvme1",
00:08:14.241      "trtype": "tcp",
00:08:14.241      "traddr": "10.0.0.2",
00:08:14.241      "adrfam": "ipv4",
00:08:14.241      "trsvcid": "4420",
00:08:14.241      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:08:14.241      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:08:14.241      "hdgst": false,
00:08:14.241      "ddgst": false
00:08:14.241    },
00:08:14.241    "method": "bdev_nvme_attach_controller"
00:08:14.241  }'
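`gen_nvmf_target_json` above builds bdevperf's attach configuration on the fly: one here-doc per subsystem with the shell variables expanded, the fragments joined with `IFS=,`, and the result normalized through `jq` before being fed to bdevperf via `/dev/fd/62`. A condensed sketch of the same pattern — values are hard-coded to the ones visible in this run, and the `jq` normalization step is omitted:

```shell
#!/usr/bin/env bash
# Condensed gen_nvmf_target_json: expand a here-doc per subsystem, then
# join the fragments into bdev_nvme_attach_controller entries. Variable
# values match the log output for this run.
TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420

gen_target_json() {
    local subsystem
    local -a config
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"
}

gen_target_json 1
```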
00:08:14.241  [2024-12-09 03:58:42.602489] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:08:14.241  [2024-12-09 03:58:42.602582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150189 ]
00:08:14.241  [2024-12-09 03:58:42.674887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:14.242  [2024-12-09 03:58:42.733317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:14.500  Running I/O for 10 seconds...
00:08:16.370       5944.00 IOPS,    46.44 MiB/s
[2024-12-09T02:58:46.331Z]      5934.00 IOPS,    46.36 MiB/s
[2024-12-09T02:58:47.264Z]      5930.67 IOPS,    46.33 MiB/s
[2024-12-09T02:58:48.199Z]      5937.00 IOPS,    46.38 MiB/s
[2024-12-09T02:58:49.184Z]      5934.00 IOPS,    46.36 MiB/s
[2024-12-09T02:58:50.132Z]      5930.33 IOPS,    46.33 MiB/s
[2024-12-09T02:58:51.064Z]      5939.14 IOPS,    46.40 MiB/s
[2024-12-09T02:58:52.000Z]      5937.38 IOPS,    46.39 MiB/s
[2024-12-09T02:58:53.377Z]      5943.22 IOPS,    46.43 MiB/s
[2024-12-09T02:58:53.377Z]      5948.30 IOPS,    46.47 MiB/s
00:08:24.801                                                                                                  Latency(us)
00:08:24.801  
[2024-12-09T02:58:53.377Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:08:24.801  Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:08:24.801  	 Verification LBA range: start 0x0 length 0x1000
00:08:24.801  	 Nvme1n1             :      10.02    5950.28      46.49       0.00     0.00   21453.21    3058.35   29515.47
00:08:24.801  
[2024-12-09T02:58:53.377Z]  ===================================================================================================================
00:08:24.801  
[2024-12-09T02:58:53.377Z]  Total                       :               5950.28      46.49       0.00     0.00   21453.21    3058.35   29515.47
00:08:24.801   03:58:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=151466
00:08:24.801   03:58:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:08:24.801   03:58:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:24.801   03:58:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:08:24.801    03:58:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:08:24.801    03:58:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:08:24.801    03:58:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:08:24.801    03:58:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:08:24.801    03:58:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:08:24.801  {
00:08:24.801    "params": {
00:08:24.801      "name": "Nvme$subsystem",
00:08:24.801      "trtype": "$TEST_TRANSPORT",
00:08:24.801      "traddr": "$NVMF_FIRST_TARGET_IP",
00:08:24.801      "adrfam": "ipv4",
00:08:24.801      "trsvcid": "$NVMF_PORT",
00:08:24.801      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:08:24.801      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:08:24.801      "hdgst": ${hdgst:-false},
00:08:24.801      "ddgst": ${ddgst:-false}
00:08:24.801    },
00:08:24.801    "method": "bdev_nvme_attach_controller"
00:08:24.801  }
00:08:24.801  EOF
00:08:24.801  )")
00:08:24.801     03:58:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:08:24.801  [2024-12-09 03:58:53.200569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:24.801  [2024-12-09 03:58:53.200611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:24.801    03:58:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:08:24.801     03:58:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:08:24.801     03:58:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:08:24.801    "params": {
00:08:24.801      "name": "Nvme1",
00:08:24.801      "trtype": "tcp",
00:08:24.801      "traddr": "10.0.0.2",
00:08:24.801      "adrfam": "ipv4",
00:08:24.801      "trsvcid": "4420",
00:08:24.801      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:08:24.801      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:08:24.801      "hdgst": false,
00:08:24.801      "ddgst": false
00:08:24.801    },
00:08:24.801    "method": "bdev_nvme_attach_controller"
00:08:24.801  }'
00:08:24.801  [2024-12-09 03:58:53.208512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:24.801  [2024-12-09 03:58:53.208537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:24.802  [2024-12-09 03:58:53.216528] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:24.802  [2024-12-09 03:58:53.216550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:24.802  [2024-12-09 03:58:53.224549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:24.802  [2024-12-09 03:58:53.224594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:24.802  [2024-12-09 03:58:53.232589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:24.802  [2024-12-09 03:58:53.232610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:24.802  [2024-12-09 03:58:53.238572] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:08:24.802  [2024-12-09 03:58:53.238644] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151466 ]
00:08:24.802  [2024-12-09 03:58:53.240608] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:24.802  [2024-12-09 03:58:53.240642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:24.802  [2024-12-09 03:58:53.248644] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:24.802  [2024-12-09 03:58:53.248664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:24.802  [2024-12-09 03:58:53.256644] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:24.802  [2024-12-09 03:58:53.256663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:24.802  [2024-12-09 03:58:53.264667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:24.802  [2024-12-09 03:58:53.264686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:24.802  [2024-12-09 03:58:53.272692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:24.802  [2024-12-09 03:58:53.272713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:24.802  [2024-12-09 03:58:53.280708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:24.802  [2024-12-09 03:58:53.280728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:24.802  [2024-12-09 03:58:53.288719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:24.802  [2024-12-09 03:58:53.288739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:24.802  [2024-12-09 03:58:53.296742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:24.802  [2024-12-09 03:58:53.296761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:24.802  [2024-12-09 03:58:53.304765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:24.802  [2024-12-09 03:58:53.304784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:24.802  [2024-12-09 03:58:53.308602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:24.802  [2024-12-09 03:58:53.312785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:24.802  [2024-12-09 03:58:53.312804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:24.802  [2024-12-09 03:58:53.320844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:24.802  [2024-12-09 03:58:53.320882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:24.802  [2024-12-09 03:58:53.328840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:24.802  [2024-12-09 03:58:53.328865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:24.802  [2024-12-09 03:58:53.336849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:24.802  [2024-12-09 03:58:53.336868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:24.802  [2024-12-09 03:58:53.344870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:24.802  [2024-12-09 03:58:53.344889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:24.802  [2024-12-09 03:58:53.352892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:24.802  [2024-12-09 03:58:53.352912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:24.802  [2024-12-09 03:58:53.360911] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:24.802  [2024-12-09 03:58:53.360939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:24.802  [2024-12-09 03:58:53.368936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:24.802  [2024-12-09 03:58:53.368956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:24.802  [2024-12-09 03:58:53.369212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:24.802  [2024-12-09 03:58:53.376966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:24.802  [2024-12-09 03:58:53.376988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.061  [2024-12-09 03:58:53.385012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.061  [2024-12-09 03:58:53.385043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.061  [2024-12-09 03:58:53.393033] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.061  [2024-12-09 03:58:53.393069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.062  [2024-12-09 03:58:53.401055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.062  [2024-12-09 03:58:53.401091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.062  [2024-12-09 03:58:53.409073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.062  [2024-12-09 03:58:53.409111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.062  [2024-12-09 03:58:53.417100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.062  [2024-12-09 03:58:53.417139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.062  [2024-12-09 03:58:53.425120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.062  [2024-12-09 03:58:53.425156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.062  [2024-12-09 03:58:53.433137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.062  [2024-12-09 03:58:53.433174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.062  [2024-12-09 03:58:53.441134] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.062  [2024-12-09 03:58:53.441155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.062  [2024-12-09 03:58:53.449191] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.062  [2024-12-09 03:58:53.449229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.062  [2024-12-09 03:58:53.457207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.062  [2024-12-09 03:58:53.457244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.062  [2024-12-09 03:58:53.465202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.062  [2024-12-09 03:58:53.465225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.062  [2024-12-09 03:58:53.473219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.062  [2024-12-09 03:58:53.473238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.062  [2024-12-09 03:58:53.481243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.062  [2024-12-09 03:58:53.481288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.062  [2024-12-09 03:58:53.489296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.062  [2024-12-09 03:58:53.489347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.062  [2024-12-09 03:58:53.497333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.062  [2024-12-09 03:58:53.497356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.062  [2024-12-09 03:58:53.505355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.062  [2024-12-09 03:58:53.505386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.062  [2024-12-09 03:58:53.513369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.062  [2024-12-09 03:58:53.513392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.062  [2024-12-09 03:58:53.521372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.062  [2024-12-09 03:58:53.521407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.062  [2024-12-09 03:58:53.529392] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.062  [2024-12-09 03:58:53.529413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.062  [2024-12-09 03:58:53.537423] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.062  [2024-12-09 03:58:53.537443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.062  [2024-12-09 03:58:53.545444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.062  [2024-12-09 03:58:53.545464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.062  [2024-12-09 03:58:53.553468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.062  [2024-12-09 03:58:53.553491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.062  [2024-12-09 03:58:53.561495] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.062  [2024-12-09 03:58:53.561518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.062  [2024-12-09 03:58:53.569515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.062  [2024-12-09 03:58:53.569537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.062  [2024-12-09 03:58:53.577539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.062  [2024-12-09 03:58:53.577573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.062  [2024-12-09 03:58:53.585574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.062  [2024-12-09 03:58:53.585594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.062  [2024-12-09 03:58:53.593595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.062  [2024-12-09 03:58:53.593628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.062  [2024-12-09 03:58:53.601611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.062  [2024-12-09 03:58:53.601634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.062  [2024-12-09 03:58:53.609639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.062  [2024-12-09 03:58:53.609659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.062  [2024-12-09 03:58:53.617663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.062  [2024-12-09 03:58:53.617682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.062  [2024-12-09 03:58:53.625670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.062  [2024-12-09 03:58:53.625689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.062  [2024-12-09 03:58:53.633726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.062  [2024-12-09 03:58:53.633747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.321  [2024-12-09 03:58:53.641728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.321  [2024-12-09 03:58:53.641747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.321  [2024-12-09 03:58:53.649754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.321  [2024-12-09 03:58:53.649775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.321  [2024-12-09 03:58:53.657774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.321  [2024-12-09 03:58:53.657799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.321  [2024-12-09 03:58:53.665796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.321  [2024-12-09 03:58:53.665816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.321  [2024-12-09 03:58:53.673820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.321  [2024-12-09 03:58:53.673840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.321  [2024-12-09 03:58:53.681842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.321  [2024-12-09 03:58:53.681861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.321  [2024-12-09 03:58:53.689870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.321  [2024-12-09 03:58:53.689891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.321  [2024-12-09 03:58:53.697892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.321  [2024-12-09 03:58:53.697912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.321  [2024-12-09 03:58:53.705920] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.321  [2024-12-09 03:58:53.705944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.321  [2024-12-09 03:58:53.713937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.321  [2024-12-09 03:58:53.713958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.321  Running I/O for 5 seconds...
00:08:25.321  [2024-12-09 03:58:53.721958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.321  [2024-12-09 03:58:53.721979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.321  [2024-12-09 03:58:53.736039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.321  [2024-12-09 03:58:53.736068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.321  [2024-12-09 03:58:53.749105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.321  [2024-12-09 03:58:53.749133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.321  [2024-12-09 03:58:53.759331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.321  [2024-12-09 03:58:53.759359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.321  [2024-12-09 03:58:53.769795] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.321  [2024-12-09 03:58:53.769823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.321  [2024-12-09 03:58:53.780761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.321  [2024-12-09 03:58:53.780788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.321  [2024-12-09 03:58:53.791889] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.321  [2024-12-09 03:58:53.791916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.321  [2024-12-09 03:58:53.802239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.321  [2024-12-09 03:58:53.802266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.321  [2024-12-09 03:58:53.812576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.321  [2024-12-09 03:58:53.812603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.321  [2024-12-09 03:58:53.823709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.321  [2024-12-09 03:58:53.823736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.321  [2024-12-09 03:58:53.836139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.321  [2024-12-09 03:58:53.836167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.321  [2024-12-09 03:58:53.846167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.321  [2024-12-09 03:58:53.846206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.321  [2024-12-09 03:58:53.856703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.321  [2024-12-09 03:58:53.856730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.321  [2024-12-09 03:58:53.867220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.321  [2024-12-09 03:58:53.867247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.321  [2024-12-09 03:58:53.877986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.321  [2024-12-09 03:58:53.878013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.321  [2024-12-09 03:58:53.890030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.321  [2024-12-09 03:58:53.890057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.580  [2024-12-09 03:58:53.899460] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.580  [2024-12-09 03:58:53.899487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.580  [2024-12-09 03:58:53.910492] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.580  [2024-12-09 03:58:53.910519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.580  [2024-12-09 03:58:53.922987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.580  [2024-12-09 03:58:53.923014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.580  [2024-12-09 03:58:53.933104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.580  [2024-12-09 03:58:53.933132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.580  [2024-12-09 03:58:53.943554] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.580  [2024-12-09 03:58:53.943581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.580  [2024-12-09 03:58:53.954074] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.580  [2024-12-09 03:58:53.954100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.580  [2024-12-09 03:58:53.964521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.580  [2024-12-09 03:58:53.964547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.580  [2024-12-09 03:58:53.975248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.580  [2024-12-09 03:58:53.975283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.580  [2024-12-09 03:58:53.986202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.580  [2024-12-09 03:58:53.986229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.580  [2024-12-09 03:58:53.998547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.580  [2024-12-09 03:58:53.998574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.580  [2024-12-09 03:58:54.008080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.580  [2024-12-09 03:58:54.008107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.580  [2024-12-09 03:58:54.020708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.580  [2024-12-09 03:58:54.020735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.580  [2024-12-09 03:58:54.030657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.580  [2024-12-09 03:58:54.030684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.581  [2024-12-09 03:58:54.040900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.581  [2024-12-09 03:58:54.040927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.581  [2024-12-09 03:58:54.051387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.581  [2024-12-09 03:58:54.051423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.581  [2024-12-09 03:58:54.061859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.581  [2024-12-09 03:58:54.061886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.581  [2024-12-09 03:58:54.072073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.581  [2024-12-09 03:58:54.072100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.581  [2024-12-09 03:58:54.082391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.581  [2024-12-09 03:58:54.082418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.581  [2024-12-09 03:58:54.092870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.581  [2024-12-09 03:58:54.092897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.581  [2024-12-09 03:58:54.103124] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.581  [2024-12-09 03:58:54.103151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.581  [2024-12-09 03:58:54.113524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.581  [2024-12-09 03:58:54.113552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.581  [2024-12-09 03:58:54.125943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.581  [2024-12-09 03:58:54.125971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.581  [2024-12-09 03:58:54.136471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.581  [2024-12-09 03:58:54.136498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.581  [2024-12-09 03:58:54.147153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.581  [2024-12-09 03:58:54.147179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.840  [2024-12-09 03:58:54.160638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.840  [2024-12-09 03:58:54.160666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.840  [2024-12-09 03:58:54.172148] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.840  [2024-12-09 03:58:54.172175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.840  [2024-12-09 03:58:54.180857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.840  [2024-12-09 03:58:54.180884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.840  [2024-12-09 03:58:54.192405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.840  [2024-12-09 03:58:54.192431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.840  [2024-12-09 03:58:54.202832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.840  [2024-12-09 03:58:54.202859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.840  [2024-12-09 03:58:54.213150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.840  [2024-12-09 03:58:54.213177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.840  [2024-12-09 03:58:54.223950] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.840  [2024-12-09 03:58:54.223976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.840  [2024-12-09 03:58:54.234904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.840  [2024-12-09 03:58:54.234931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.840  [2024-12-09 03:58:54.247761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.840  [2024-12-09 03:58:54.247789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.840  [2024-12-09 03:58:54.258085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.840  [2024-12-09 03:58:54.258112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.840  [2024-12-09 03:58:54.268284] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.840  [2024-12-09 03:58:54.268311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.840  [2024-12-09 03:58:54.278987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.840  [2024-12-09 03:58:54.279015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.840  [2024-12-09 03:58:54.291388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.840  [2024-12-09 03:58:54.291416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.840  [2024-12-09 03:58:54.301222] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.840  [2024-12-09 03:58:54.301250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.840  [2024-12-09 03:58:54.311683] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.840  [2024-12-09 03:58:54.311710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.840  [2024-12-09 03:58:54.322039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.840  [2024-12-09 03:58:54.322066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.840  [2024-12-09 03:58:54.332511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.840  [2024-12-09 03:58:54.332538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.840  [2024-12-09 03:58:54.343370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.840  [2024-12-09 03:58:54.343397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.840  [2024-12-09 03:58:54.355928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.840  [2024-12-09 03:58:54.355956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.840  [2024-12-09 03:58:54.365696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.840  [2024-12-09 03:58:54.365723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.840  [2024-12-09 03:58:54.375831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.840  [2024-12-09 03:58:54.375857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.840  [2024-12-09 03:58:54.386061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.840  [2024-12-09 03:58:54.386087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.840  [2024-12-09 03:58:54.396456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.840  [2024-12-09 03:58:54.396484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.840  [2024-12-09 03:58:54.406488] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.840  [2024-12-09 03:58:54.406515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.099  [2024-12-09 03:58:54.417129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.099  [2024-12-09 03:58:54.417156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.099  [2024-12-09 03:58:54.427820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.099  [2024-12-09 03:58:54.427847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.099  [2024-12-09 03:58:54.438415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.099  [2024-12-09 03:58:54.438442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.099  [2024-12-09 03:58:54.452050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.099  [2024-12-09 03:58:54.452077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.099  [2024-12-09 03:58:54.462200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.099  [2024-12-09 03:58:54.462228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.099  [2024-12-09 03:58:54.472697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.099  [2024-12-09 03:58:54.472725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.099  [2024-12-09 03:58:54.482959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.099  [2024-12-09 03:58:54.482985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.099  [2024-12-09 03:58:54.493229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.099  [2024-12-09 03:58:54.493256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.099  [2024-12-09 03:58:54.503772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.099  [2024-12-09 03:58:54.503800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.099  [2024-12-09 03:58:54.516467] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.099  [2024-12-09 03:58:54.516494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.099  [2024-12-09 03:58:54.525524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.099  [2024-12-09 03:58:54.525551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.099  [2024-12-09 03:58:54.538291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.099  [2024-12-09 03:58:54.538317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.099  [2024-12-09 03:58:54.548598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.099  [2024-12-09 03:58:54.548625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.099  [2024-12-09 03:58:54.559197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.099  [2024-12-09 03:58:54.559224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.099  [2024-12-09 03:58:54.569595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.099  [2024-12-09 03:58:54.569622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.099  [2024-12-09 03:58:54.579931] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.099  [2024-12-09 03:58:54.579959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.099  [2024-12-09 03:58:54.590330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.099  [2024-12-09 03:58:54.590358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.099  [2024-12-09 03:58:54.600559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.099  [2024-12-09 03:58:54.600586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.099  [2024-12-09 03:58:54.610729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.099  [2024-12-09 03:58:54.610770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.099  [2024-12-09 03:58:54.621104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.099  [2024-12-09 03:58:54.621145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.099  [2024-12-09 03:58:54.631643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.099  [2024-12-09 03:58:54.631669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.099  [2024-12-09 03:58:54.642505] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.100  [2024-12-09 03:58:54.642532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.100  [2024-12-09 03:58:54.653187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.100  [2024-12-09 03:58:54.653215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.100  [2024-12-09 03:58:54.664363] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.100  [2024-12-09 03:58:54.664392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.358  [2024-12-09 03:58:54.676862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.358  [2024-12-09 03:58:54.676889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.358  [2024-12-09 03:58:54.686937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.358  [2024-12-09 03:58:54.686966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.358  [2024-12-09 03:58:54.697636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.358  [2024-12-09 03:58:54.697663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.358  [2024-12-09 03:58:54.708165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.358  [2024-12-09 03:58:54.708191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.358  [2024-12-09 03:58:54.718547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.358  [2024-12-09 03:58:54.718574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.358      11957.00 IOPS,    93.41 MiB/s
00:08:26.358  [2024-12-09 03:58:54.729056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.358  [2024-12-09 03:58:54.729083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.358  [2024-12-09 03:58:54.739922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.358  [2024-12-09 03:58:54.739949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.358  [2024-12-09 03:58:54.750385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.358  [2024-12-09 03:58:54.750412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.359  [2024-12-09 03:58:54.761074] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.359  [2024-12-09 03:58:54.761101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.359  [2024-12-09 03:58:54.771854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.359  [2024-12-09 03:58:54.771881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.359  [2024-12-09 03:58:54.782655] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.359  [2024-12-09 03:58:54.782682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.359  [2024-12-09 03:58:54.792883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.359  [2024-12-09 03:58:54.792910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.359  [2024-12-09 03:58:54.803164] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.359  [2024-12-09 03:58:54.803190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.359  [2024-12-09 03:58:54.814035] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.359  [2024-12-09 03:58:54.814061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.359  [2024-12-09 03:58:54.824616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.359  [2024-12-09 03:58:54.824642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.359  [2024-12-09 03:58:54.836720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.359  [2024-12-09 03:58:54.836747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.359  [2024-12-09 03:58:54.846396] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.359  [2024-12-09 03:58:54.846422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.359  [2024-12-09 03:58:54.857339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.359  [2024-12-09 03:58:54.857375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.359  [2024-12-09 03:58:54.867804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.359  [2024-12-09 03:58:54.867831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.359  [2024-12-09 03:58:54.878053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.359  [2024-12-09 03:58:54.878080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.359  [2024-12-09 03:58:54.888769] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.359  [2024-12-09 03:58:54.888798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.359  [2024-12-09 03:58:54.901013] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.359  [2024-12-09 03:58:54.901040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.359  [2024-12-09 03:58:54.912476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.359  [2024-12-09 03:58:54.912502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.359  [2024-12-09 03:58:54.921697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.359  [2024-12-09 03:58:54.921724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.359  [2024-12-09 03:58:54.933070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.359  [2024-12-09 03:58:54.933098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.617  [2024-12-09 03:58:54.945450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.617  [2024-12-09 03:58:54.945477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.617  [2024-12-09 03:58:54.955420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.617  [2024-12-09 03:58:54.955447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.617  [2024-12-09 03:58:54.965918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.617  [2024-12-09 03:58:54.965944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.617  [2024-12-09 03:58:54.976378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.617  [2024-12-09 03:58:54.976406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.617  [2024-12-09 03:58:54.986960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.617  [2024-12-09 03:58:54.986987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.617  [2024-12-09 03:58:54.997313] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.617  [2024-12-09 03:58:54.997340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.617  [2024-12-09 03:58:55.007938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.617  [2024-12-09 03:58:55.007965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.617  [2024-12-09 03:58:55.018440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.617  [2024-12-09 03:58:55.018467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.617  [2024-12-09 03:58:55.029157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.617  [2024-12-09 03:58:55.029183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.617  [2024-12-09 03:58:55.039451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.617  [2024-12-09 03:58:55.039478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.617  [2024-12-09 03:58:55.050036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.617  [2024-12-09 03:58:55.050063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.617  [2024-12-09 03:58:55.060503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.617  [2024-12-09 03:58:55.060541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.617  [2024-12-09 03:58:55.070956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.617  [2024-12-09 03:58:55.070984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.617  [2024-12-09 03:58:55.081278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.617  [2024-12-09 03:58:55.081305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.617  [2024-12-09 03:58:55.091678] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.617  [2024-12-09 03:58:55.091705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.617  [2024-12-09 03:58:55.102112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.617  [2024-12-09 03:58:55.102139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.617  [2024-12-09 03:58:55.112841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.617  [2024-12-09 03:58:55.112868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.617  [2024-12-09 03:58:55.125254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.617  [2024-12-09 03:58:55.125291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.617  [2024-12-09 03:58:55.136800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.617  [2024-12-09 03:58:55.136827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.617  [2024-12-09 03:58:55.145863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.617  [2024-12-09 03:58:55.145890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.617  [2024-12-09 03:58:55.157356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.617  [2024-12-09 03:58:55.157382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.617  [2024-12-09 03:58:55.169384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.617  [2024-12-09 03:58:55.169411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.617  [2024-12-09 03:58:55.179032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.617  [2024-12-09 03:58:55.179058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.617  [2024-12-09 03:58:55.189474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.617  [2024-12-09 03:58:55.189501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.880  [2024-12-09 03:58:55.201613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.880  [2024-12-09 03:58:55.201640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.880  [2024-12-09 03:58:55.211709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.880  [2024-12-09 03:58:55.211736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.880  [2024-12-09 03:58:55.222196] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.880  [2024-12-09 03:58:55.222222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.880  [2024-12-09 03:58:55.232668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.880  [2024-12-09 03:58:55.232695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.880  [2024-12-09 03:58:55.243493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.880  [2024-12-09 03:58:55.243520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.880  [2024-12-09 03:58:55.255785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.880  [2024-12-09 03:58:55.255811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.880  [2024-12-09 03:58:55.265455] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.880  [2024-12-09 03:58:55.265492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.880  [2024-12-09 03:58:55.275882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.880  [2024-12-09 03:58:55.275909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.880  [2024-12-09 03:58:55.286305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.880  [2024-12-09 03:58:55.286332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.880  [2024-12-09 03:58:55.296638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.880  [2024-12-09 03:58:55.296665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.880  [2024-12-09 03:58:55.306896] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.880  [2024-12-09 03:58:55.306925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.880  [2024-12-09 03:58:55.317335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.880  [2024-12-09 03:58:55.317362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.880  [2024-12-09 03:58:55.327880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.880  [2024-12-09 03:58:55.327907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.880  [2024-12-09 03:58:55.338507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.880  [2024-12-09 03:58:55.338534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.880  [2024-12-09 03:58:55.349268] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.880  [2024-12-09 03:58:55.349306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.880  [2024-12-09 03:58:55.359605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.880  [2024-12-09 03:58:55.359640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.880  [2024-12-09 03:58:55.371874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.880  [2024-12-09 03:58:55.371903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.880  [2024-12-09 03:58:55.381479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.880  [2024-12-09 03:58:55.381507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.880  [2024-12-09 03:58:55.391884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.880  [2024-12-09 03:58:55.391912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.880  [2024-12-09 03:58:55.402803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.880  [2024-12-09 03:58:55.402830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.880  [2024-12-09 03:58:55.415051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.880  [2024-12-09 03:58:55.415078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.880  [2024-12-09 03:58:55.425356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.880  [2024-12-09 03:58:55.425383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.880  [2024-12-09 03:58:55.435808] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.880  [2024-12-09 03:58:55.435836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.880  [2024-12-09 03:58:55.446024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.880  [2024-12-09 03:58:55.446051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:26.880  [2024-12-09 03:58:55.456201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:26.880  [2024-12-09 03:58:55.456228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.138  [2024-12-09 03:58:55.466413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.138  [2024-12-09 03:58:55.466449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.138  [2024-12-09 03:58:55.477141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.138  [2024-12-09 03:58:55.477169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.138  [2024-12-09 03:58:55.489529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.138  [2024-12-09 03:58:55.489556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.138  [2024-12-09 03:58:55.498961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.138  [2024-12-09 03:58:55.498987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.138  [2024-12-09 03:58:55.509343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.138  [2024-12-09 03:58:55.509370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.138  [2024-12-09 03:58:55.519519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.138  [2024-12-09 03:58:55.519547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.138  [2024-12-09 03:58:55.529993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.138  [2024-12-09 03:58:55.530021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.138  [2024-12-09 03:58:55.540303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.138  [2024-12-09 03:58:55.540330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.138  [2024-12-09 03:58:55.550331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.138  [2024-12-09 03:58:55.550357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.138  [2024-12-09 03:58:55.560890] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.138  [2024-12-09 03:58:55.560916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.138  [2024-12-09 03:58:55.571359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.138  [2024-12-09 03:58:55.571386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.138  [2024-12-09 03:58:55.582146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.138  [2024-12-09 03:58:55.582172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.138  [2024-12-09 03:58:55.594509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.138  [2024-12-09 03:58:55.594536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.138  [2024-12-09 03:58:55.606457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.138  [2024-12-09 03:58:55.606484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.138  [2024-12-09 03:58:55.615595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.138  [2024-12-09 03:58:55.615622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.138  [2024-12-09 03:58:55.626797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.138  [2024-12-09 03:58:55.626824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.138  [2024-12-09 03:58:55.639201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.138  [2024-12-09 03:58:55.639228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.138  [2024-12-09 03:58:55.649363] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.138  [2024-12-09 03:58:55.649390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.138  [2024-12-09 03:58:55.659712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.138  [2024-12-09 03:58:55.659739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.138  [2024-12-09 03:58:55.669942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.138  [2024-12-09 03:58:55.669970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.138  [2024-12-09 03:58:55.680002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.138  [2024-12-09 03:58:55.680030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.138  [2024-12-09 03:58:55.690223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.138  [2024-12-09 03:58:55.690252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.138  [2024-12-09 03:58:55.700240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.138  [2024-12-09 03:58:55.700267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.138  [2024-12-09 03:58:55.710611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.138  [2024-12-09 03:58:55.710639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.396  [2024-12-09 03:58:55.721146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.396  [2024-12-09 03:58:55.721173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.396      12070.50 IOPS,    94.30 MiB/s
[2024-12-09T02:58:55.972Z] [2024-12-09 03:58:55.731691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.396  [2024-12-09 03:58:55.731718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.396  [2024-12-09 03:58:55.742861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.396  [2024-12-09 03:58:55.742888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.396  [2024-12-09 03:58:55.755057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.396  [2024-12-09 03:58:55.755084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.396  [2024-12-09 03:58:55.764992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.396  [2024-12-09 03:58:55.765019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.396  [2024-12-09 03:58:55.775770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.396  [2024-12-09 03:58:55.775797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.396  [2024-12-09 03:58:55.786347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.396  [2024-12-09 03:58:55.786374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.396  [2024-12-09 03:58:55.796802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.396  [2024-12-09 03:58:55.796829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.396  [2024-12-09 03:58:55.807129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.396  [2024-12-09 03:58:55.807156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.396  [2024-12-09 03:58:55.818055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.396  [2024-12-09 03:58:55.818082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.396  [2024-12-09 03:58:55.830743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.396  [2024-12-09 03:58:55.830771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.396  [2024-12-09 03:58:55.840782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.396  [2024-12-09 03:58:55.840808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.396  [2024-12-09 03:58:55.851045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.396  [2024-12-09 03:58:55.851072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.396  [2024-12-09 03:58:55.861438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.396  [2024-12-09 03:58:55.861466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.396  [2024-12-09 03:58:55.871980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.396  [2024-12-09 03:58:55.872007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.396  [2024-12-09 03:58:55.882595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.396  [2024-12-09 03:58:55.882622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.396  [2024-12-09 03:58:55.893235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.396  [2024-12-09 03:58:55.893261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.396  [2024-12-09 03:58:55.903952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.396  [2024-12-09 03:58:55.903978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.396  [2024-12-09 03:58:55.914359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.396  [2024-12-09 03:58:55.914386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.396  [2024-12-09 03:58:55.926738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.396  [2024-12-09 03:58:55.926765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.396  [2024-12-09 03:58:55.936710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.396  [2024-12-09 03:58:55.936737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.396  [2024-12-09 03:58:55.946901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.396  [2024-12-09 03:58:55.946928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.396  [2024-12-09 03:58:55.957755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.396  [2024-12-09 03:58:55.957781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.396  [2024-12-09 03:58:55.970248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.396  [2024-12-09 03:58:55.970282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.654  [2024-12-09 03:58:55.979905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.654  [2024-12-09 03:58:55.979932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.654  [2024-12-09 03:58:55.990174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.654  [2024-12-09 03:58:55.990201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.654  [2024-12-09 03:58:56.000535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.654  [2024-12-09 03:58:56.000562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.654  [2024-12-09 03:58:56.011065] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.654  [2024-12-09 03:58:56.011093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.654  [2024-12-09 03:58:56.023566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.654  [2024-12-09 03:58:56.023594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.654  [2024-12-09 03:58:56.032984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.654  [2024-12-09 03:58:56.033011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.654  [2024-12-09 03:58:56.044051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.654  [2024-12-09 03:58:56.044078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.654  [2024-12-09 03:58:56.054620] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.654  [2024-12-09 03:58:56.054647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.654  [2024-12-09 03:58:56.065001] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.654  [2024-12-09 03:58:56.065037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.654  [2024-12-09 03:58:56.075107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.655  [2024-12-09 03:58:56.075133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.655  [2024-12-09 03:58:56.085459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.655  [2024-12-09 03:58:56.085486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.655  [2024-12-09 03:58:56.096097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.655  [2024-12-09 03:58:56.096124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.655  [2024-12-09 03:58:56.106399] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.655  [2024-12-09 03:58:56.106427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.655  [2024-12-09 03:58:56.116926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.655  [2024-12-09 03:58:56.116953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.655  [2024-12-09 03:58:56.129523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.655  [2024-12-09 03:58:56.129550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.655  [2024-12-09 03:58:56.140643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.655  [2024-12-09 03:58:56.140670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.655  [2024-12-09 03:58:56.149442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.655  [2024-12-09 03:58:56.149469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.655  [2024-12-09 03:58:56.160982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.655  [2024-12-09 03:58:56.161009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.655  [2024-12-09 03:58:56.173696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.655  [2024-12-09 03:58:56.173723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.655  [2024-12-09 03:58:56.183822] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.655  [2024-12-09 03:58:56.183849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.655  [2024-12-09 03:58:56.194108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.655  [2024-12-09 03:58:56.194135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.655  [2024-12-09 03:58:56.204602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.655  [2024-12-09 03:58:56.204628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.655  [2024-12-09 03:58:56.215040] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.655  [2024-12-09 03:58:56.215067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.655  [2024-12-09 03:58:56.225432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.655  [2024-12-09 03:58:56.225458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.913  [2024-12-09 03:58:56.235732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.913  [2024-12-09 03:58:56.235774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.913  [2024-12-09 03:58:56.246820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.913  [2024-12-09 03:58:56.246847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.913  [2024-12-09 03:58:56.257170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.913  [2024-12-09 03:58:56.257198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.913  [2024-12-09 03:58:56.267571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.913  [2024-12-09 03:58:56.267606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.913  [2024-12-09 03:58:56.277842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.913  [2024-12-09 03:58:56.277868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.913  [2024-12-09 03:58:56.288005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.913  [2024-12-09 03:58:56.288031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.913  [2024-12-09 03:58:56.298571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.913  [2024-12-09 03:58:56.298597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.913  [2024-12-09 03:58:56.310833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.913  [2024-12-09 03:58:56.310860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.913  [2024-12-09 03:58:56.320050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.913  [2024-12-09 03:58:56.320078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.913  [2024-12-09 03:58:56.330351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.913  [2024-12-09 03:58:56.330377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.913  [2024-12-09 03:58:56.341017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.913  [2024-12-09 03:58:56.341044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.913  [2024-12-09 03:58:56.353584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.913  [2024-12-09 03:58:56.353610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.913  [2024-12-09 03:58:56.363800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.913  [2024-12-09 03:58:56.363827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.913  [2024-12-09 03:58:56.373907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.913  [2024-12-09 03:58:56.373935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.913  [2024-12-09 03:58:56.384499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.913  [2024-12-09 03:58:56.384526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.913  [2024-12-09 03:58:56.395021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.913  [2024-12-09 03:58:56.395048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.913  [2024-12-09 03:58:56.405404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.913  [2024-12-09 03:58:56.405431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.913  [2024-12-09 03:58:56.415709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.913  [2024-12-09 03:58:56.415736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.913  [2024-12-09 03:58:56.426404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.913  [2024-12-09 03:58:56.426431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.913  [2024-12-09 03:58:56.436529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.913  [2024-12-09 03:58:56.436556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.913  [2024-12-09 03:58:56.446805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.913  [2024-12-09 03:58:56.446832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.913  [2024-12-09 03:58:56.457317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.913  [2024-12-09 03:58:56.457343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.913  [2024-12-09 03:58:56.467682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.913  [2024-12-09 03:58:56.467717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.913  [2024-12-09 03:58:56.478465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.913  [2024-12-09 03:58:56.478492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.913  [2024-12-09 03:58:56.488954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.913  [2024-12-09 03:58:56.488983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.172  [2024-12-09 03:58:56.499610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.172  [2024-12-09 03:58:56.499637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.172  [2024-12-09 03:58:56.510253] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.172  [2024-12-09 03:58:56.510290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.172  [2024-12-09 03:58:56.520636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.172  [2024-12-09 03:58:56.520664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.172  [2024-12-09 03:58:56.530893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.172  [2024-12-09 03:58:56.530930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.172  [2024-12-09 03:58:56.541669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.172  [2024-12-09 03:58:56.541696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.172  [2024-12-09 03:58:56.552479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.172  [2024-12-09 03:58:56.552506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.172  [2024-12-09 03:58:56.563715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.172  [2024-12-09 03:58:56.563742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.172  [2024-12-09 03:58:56.577094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.172  [2024-12-09 03:58:56.577122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.172  [2024-12-09 03:58:56.587627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.172  [2024-12-09 03:58:56.587654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.172  [2024-12-09 03:58:56.597975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.172  [2024-12-09 03:58:56.598002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.172  [2024-12-09 03:58:56.608610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.172  [2024-12-09 03:58:56.608638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.172  [2024-12-09 03:58:56.619119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.172  [2024-12-09 03:58:56.619146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.172  [2024-12-09 03:58:56.629969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.172  [2024-12-09 03:58:56.629997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.172  [2024-12-09 03:58:56.643782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.172  [2024-12-09 03:58:56.643809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.172  [2024-12-09 03:58:56.654403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.172  [2024-12-09 03:58:56.654432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.172  [2024-12-09 03:58:56.664836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.172  [2024-12-09 03:58:56.664863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.172  [2024-12-09 03:58:56.675418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.172  [2024-12-09 03:58:56.675452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.172  [2024-12-09 03:58:56.685832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.172  [2024-12-09 03:58:56.685859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.172  [2024-12-09 03:58:56.696332] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.172  [2024-12-09 03:58:56.696359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.172  [2024-12-09 03:58:56.708541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.172  [2024-12-09 03:58:56.708569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.172  [2024-12-09 03:58:56.718593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.172  [2024-12-09 03:58:56.718620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.172      12087.33 IOPS,    94.43 MiB/s
[2024-12-09T02:58:56.748Z] [2024-12-09 03:58:56.729101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.172  [2024-12-09 03:58:56.729127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.172  [2024-12-09 03:58:56.739785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.172  [2024-12-09 03:58:56.739811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.431  [2024-12-09 03:58:56.750015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.431  [2024-12-09 03:58:56.750042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.431  [2024-12-09 03:58:56.760668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.431  [2024-12-09 03:58:56.760695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.431  [2024-12-09 03:58:56.771360] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.431  [2024-12-09 03:58:56.771387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.431  [2024-12-09 03:58:56.785599] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.431  [2024-12-09 03:58:56.785626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.431  [2024-12-09 03:58:56.796323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.431  [2024-12-09 03:58:56.796350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.431  [2024-12-09 03:58:56.806789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.431  [2024-12-09 03:58:56.806816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.431  [2024-12-09 03:58:56.817328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.431  [2024-12-09 03:58:56.817355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.431  [2024-12-09 03:58:56.827940] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.431  [2024-12-09 03:58:56.827968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.431  [2024-12-09 03:58:56.840508] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.431  [2024-12-09 03:58:56.840536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.431  [2024-12-09 03:58:56.851012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.431  [2024-12-09 03:58:56.851048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.431  [2024-12-09 03:58:56.861208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.431  [2024-12-09 03:58:56.861235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.431  [2024-12-09 03:58:56.872014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.431  [2024-12-09 03:58:56.872041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.431  [2024-12-09 03:58:56.884174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.431  [2024-12-09 03:58:56.884201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.431  [2024-12-09 03:58:56.893808] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.431  [2024-12-09 03:58:56.893835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.431  [2024-12-09 03:58:56.904460] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.431  [2024-12-09 03:58:56.904488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.431  [2024-12-09 03:58:56.916655] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.431  [2024-12-09 03:58:56.916682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.431  [2024-12-09 03:58:56.926257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.431  [2024-12-09 03:58:56.926294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.431  [2024-12-09 03:58:56.937235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.431  [2024-12-09 03:58:56.937262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.431  [2024-12-09 03:58:56.949803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.431  [2024-12-09 03:58:56.949831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.431  [2024-12-09 03:58:56.959762] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.431  [2024-12-09 03:58:56.959789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.431  [2024-12-09 03:58:56.970386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.431  [2024-12-09 03:58:56.970413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.431  [2024-12-09 03:58:56.981158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.431  [2024-12-09 03:58:56.981185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.431  [2024-12-09 03:58:56.993586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.431  [2024-12-09 03:58:56.993612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.431  [2024-12-09 03:58:57.002788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.431  [2024-12-09 03:58:57.002815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.690  [2024-12-09 03:58:57.015621] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.690  [2024-12-09 03:58:57.015648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.690  [2024-12-09 03:58:57.027577] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.690  [2024-12-09 03:58:57.027603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.690  [2024-12-09 03:58:57.037421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.690  [2024-12-09 03:58:57.037448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.690  [2024-12-09 03:58:57.047597] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.690  [2024-12-09 03:58:57.047624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.690  [2024-12-09 03:58:57.058229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.690  [2024-12-09 03:58:57.058256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.690  [2024-12-09 03:58:57.068677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.690  [2024-12-09 03:58:57.068703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.690  [2024-12-09 03:58:57.079061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.690  [2024-12-09 03:58:57.079088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.690  [2024-12-09 03:58:57.089499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.690  [2024-12-09 03:58:57.089526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.690  [2024-12-09 03:58:57.100126] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.690  [2024-12-09 03:58:57.100153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.690  [2024-12-09 03:58:57.110409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.690  [2024-12-09 03:58:57.110435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.690  [2024-12-09 03:58:57.120862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.690  [2024-12-09 03:58:57.120889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.690  [2024-12-09 03:58:57.131560] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.690  [2024-12-09 03:58:57.131587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.690  [2024-12-09 03:58:57.144071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.690  [2024-12-09 03:58:57.144113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.690  [2024-12-09 03:58:57.154218] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.690  [2024-12-09 03:58:57.154244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.690  [2024-12-09 03:58:57.164808] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.690  [2024-12-09 03:58:57.164835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.690  [2024-12-09 03:58:57.176892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.690  [2024-12-09 03:58:57.176919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.690  [2024-12-09 03:58:57.186441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.690  [2024-12-09 03:58:57.186468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.690  [2024-12-09 03:58:57.196454] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.690  [2024-12-09 03:58:57.196481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.690  [2024-12-09 03:58:57.207157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.690  [2024-12-09 03:58:57.207184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.690  [2024-12-09 03:58:57.217807] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.690  [2024-12-09 03:58:57.217834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.690  [2024-12-09 03:58:57.228315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.690  [2024-12-09 03:58:57.228342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.690  [2024-12-09 03:58:57.241752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.690  [2024-12-09 03:58:57.241779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.690  [2024-12-09 03:58:57.253383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.690  [2024-12-09 03:58:57.253409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.690  [2024-12-09 03:58:57.262748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.690  [2024-12-09 03:58:57.262776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.948  [2024-12-09 03:58:57.273436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.948  [2024-12-09 03:58:57.273463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.948  [2024-12-09 03:58:57.285747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.948  [2024-12-09 03:58:57.285773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.948  [2024-12-09 03:58:57.295987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.948  [2024-12-09 03:58:57.296013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.948  [2024-12-09 03:58:57.306568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.948  [2024-12-09 03:58:57.306595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.948  [2024-12-09 03:58:57.316745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.948  [2024-12-09 03:58:57.316771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.948  [2024-12-09 03:58:57.326946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.948  [2024-12-09 03:58:57.326973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.948  [2024-12-09 03:58:57.337449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.948  [2024-12-09 03:58:57.337476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.948  [2024-12-09 03:58:57.348049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.948  [2024-12-09 03:58:57.348075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.948  [2024-12-09 03:58:57.358680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.949  [2024-12-09 03:58:57.358707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.949  [2024-12-09 03:58:57.370914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.949  [2024-12-09 03:58:57.370941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.949  [2024-12-09 03:58:57.379838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.949  [2024-12-09 03:58:57.379865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.949  [2024-12-09 03:58:57.391041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.949  [2024-12-09 03:58:57.391068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.949  [2024-12-09 03:58:57.401791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.949  [2024-12-09 03:58:57.401819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.949  [2024-12-09 03:58:57.412459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.949  [2024-12-09 03:58:57.412487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.949  [2024-12-09 03:58:57.424794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.949  [2024-12-09 03:58:57.424821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.949  [2024-12-09 03:58:57.435054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.949  [2024-12-09 03:58:57.435081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.949  [2024-12-09 03:58:57.445881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.949  [2024-12-09 03:58:57.445909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.949  [2024-12-09 03:58:57.458816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.949  [2024-12-09 03:58:57.458844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.949  [2024-12-09 03:58:57.469063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.949  [2024-12-09 03:58:57.469089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.949  [2024-12-09 03:58:57.479781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.949  [2024-12-09 03:58:57.479808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.949  [2024-12-09 03:58:57.493087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.949  [2024-12-09 03:58:57.493124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.949  [2024-12-09 03:58:57.505707] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.949  [2024-12-09 03:58:57.505734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:28.949  [2024-12-09 03:58:57.515637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.949  [2024-12-09 03:58:57.515665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.208  [2024-12-09 03:58:57.526287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.208  [2024-12-09 03:58:57.526313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.208  [2024-12-09 03:58:57.537017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.208  [2024-12-09 03:58:57.537044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.208  [2024-12-09 03:58:57.549077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.208  [2024-12-09 03:58:57.549104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.208  [2024-12-09 03:58:57.558729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.208  [2024-12-09 03:58:57.558757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.208  [2024-12-09 03:58:57.569421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.208  [2024-12-09 03:58:57.569449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.208  [2024-12-09 03:58:57.579430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.208  [2024-12-09 03:58:57.579457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.208  [2024-12-09 03:58:57.592048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.208  [2024-12-09 03:58:57.592075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.208  [2024-12-09 03:58:57.602167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.208  [2024-12-09 03:58:57.602194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.208  [2024-12-09 03:58:57.612596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.208  [2024-12-09 03:58:57.612623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.208  [2024-12-09 03:58:57.622966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.208  [2024-12-09 03:58:57.622993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.208  [2024-12-09 03:58:57.633513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.208  [2024-12-09 03:58:57.633540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.208  [2024-12-09 03:58:57.644059] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.208  [2024-12-09 03:58:57.644087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.208  [2024-12-09 03:58:57.654501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.208  [2024-12-09 03:58:57.654529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.208  [2024-12-09 03:58:57.664736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.208  [2024-12-09 03:58:57.664764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.208  [2024-12-09 03:58:57.675122] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.208  [2024-12-09 03:58:57.675150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.208  [2024-12-09 03:58:57.685359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.208  [2024-12-09 03:58:57.685386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.208  [2024-12-09 03:58:57.695562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.208  [2024-12-09 03:58:57.695598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.208  [2024-12-09 03:58:57.706095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.208  [2024-12-09 03:58:57.706122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.208  [2024-12-09 03:58:57.716471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.208  [2024-12-09 03:58:57.716498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.208  [2024-12-09 03:58:57.727220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.208  [2024-12-09 03:58:57.727248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.208      12090.25 IOPS,    94.46 MiB/s
00:08:29.208  [2024-12-09 03:58:57.738106] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.208  [2024-12-09 03:58:57.738142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.208  [2024-12-09 03:58:57.748592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.208  [2024-12-09 03:58:57.748620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.208  [2024-12-09 03:58:57.760897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.208  [2024-12-09 03:58:57.760924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.208  [2024-12-09 03:58:57.770804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.208  [2024-12-09 03:58:57.770831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.208  [2024-12-09 03:58:57.781043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.208  [2024-12-09 03:58:57.781070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.467  [2024-12-09 03:58:57.791635] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.467  [2024-12-09 03:58:57.791662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.467  [2024-12-09 03:58:57.802064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.467  [2024-12-09 03:58:57.802090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.467  [2024-12-09 03:58:57.812591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.467  [2024-12-09 03:58:57.812618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.467  [2024-12-09 03:58:57.823064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.467  [2024-12-09 03:58:57.823090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.467  [2024-12-09 03:58:57.833314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.467  [2024-12-09 03:58:57.833341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.467  [2024-12-09 03:58:57.844153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.467  [2024-12-09 03:58:57.844181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.467  [2024-12-09 03:58:57.854589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.467  [2024-12-09 03:58:57.854617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.467  [2024-12-09 03:58:57.865075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.467  [2024-12-09 03:58:57.865102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.467  [2024-12-09 03:58:57.875565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.467  [2024-12-09 03:58:57.875592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.467  [2024-12-09 03:58:57.886341] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.467  [2024-12-09 03:58:57.886369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.467  [2024-12-09 03:58:57.898972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.467  [2024-12-09 03:58:57.899010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.467  [2024-12-09 03:58:57.909447] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.467  [2024-12-09 03:58:57.909474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.467  [2024-12-09 03:58:57.920004] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.467  [2024-12-09 03:58:57.920030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.467  [2024-12-09 03:58:57.931910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.467  [2024-12-09 03:58:57.931937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.467  [2024-12-09 03:58:57.941184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.467  [2024-12-09 03:58:57.941211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.467  [2024-12-09 03:58:57.951886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.467  [2024-12-09 03:58:57.951914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.467  [2024-12-09 03:58:57.962628] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.467  [2024-12-09 03:58:57.962656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.467  [2024-12-09 03:58:57.973105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.467  [2024-12-09 03:58:57.973133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.467  [2024-12-09 03:58:57.985549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.467  [2024-12-09 03:58:57.985576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.467  [2024-12-09 03:58:57.995541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.467  [2024-12-09 03:58:57.995572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.467  [2024-12-09 03:58:58.006173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.467  [2024-12-09 03:58:58.006200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.467  [2024-12-09 03:58:58.016925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.467  [2024-12-09 03:58:58.016952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.467  [2024-12-09 03:58:58.027868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.467  [2024-12-09 03:58:58.027896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.467  [2024-12-09 03:58:58.040678] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.467  [2024-12-09 03:58:58.040705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.747  [2024-12-09 03:58:58.052468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.747  [2024-12-09 03:58:58.052495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.747  [2024-12-09 03:58:58.061070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.747  [2024-12-09 03:58:58.061097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.747  [2024-12-09 03:58:58.072786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.747  [2024-12-09 03:58:58.072813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.747  [2024-12-09 03:58:58.083573] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.747  [2024-12-09 03:58:58.083600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.747  [2024-12-09 03:58:58.094385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.747  [2024-12-09 03:58:58.094412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.747  [2024-12-09 03:58:58.107941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.747  [2024-12-09 03:58:58.107968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.747  [2024-12-09 03:58:58.118221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.747  [2024-12-09 03:58:58.118248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.747  [2024-12-09 03:58:58.128789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.747  [2024-12-09 03:58:58.128816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.747  [2024-12-09 03:58:58.141180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.747  [2024-12-09 03:58:58.141207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.747  [2024-12-09 03:58:58.150712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.747  [2024-12-09 03:58:58.150738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.747  [2024-12-09 03:58:58.161331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.747  [2024-12-09 03:58:58.161357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.747  [2024-12-09 03:58:58.172414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.747  [2024-12-09 03:58:58.172441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.747  [2024-12-09 03:58:58.183105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.747  [2024-12-09 03:58:58.183132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.747  [2024-12-09 03:58:58.193374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.747  [2024-12-09 03:58:58.193401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.747  [2024-12-09 03:58:58.204062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.747  [2024-12-09 03:58:58.204089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.747  [2024-12-09 03:58:58.214734] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.747  [2024-12-09 03:58:58.214761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.747  [2024-12-09 03:58:58.225167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.747  [2024-12-09 03:58:58.225194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.747  [2024-12-09 03:58:58.235586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.747  [2024-12-09 03:58:58.235613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.747  [2024-12-09 03:58:58.245861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.747  [2024-12-09 03:58:58.245887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.747  [2024-12-09 03:58:58.256203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.747  [2024-12-09 03:58:58.256229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.747  [2024-12-09 03:58:58.266399] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.747  [2024-12-09 03:58:58.266426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.747  [2024-12-09 03:58:58.277155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.747  [2024-12-09 03:58:58.277183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.747  [2024-12-09 03:58:58.289907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.747  [2024-12-09 03:58:58.289934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.747  [2024-12-09 03:58:58.301780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.747  [2024-12-09 03:58:58.301809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.747  [2024-12-09 03:58:58.311108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.747  [2024-12-09 03:58:58.311150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.747  [2024-12-09 03:58:58.322731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.747  [2024-12-09 03:58:58.322758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:30.006  [2024-12-09 03:58:58.333180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:30.006  [2024-12-09 03:58:58.333207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:30.006  [2024-12-09 03:58:58.343823] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:30.006  [2024-12-09 03:58:58.343849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:30.266  [... the same pair of errors repeated 35 more times between 03:58:58.356968 and 03:58:58.721330 ...]
00:08:30.266      12089.60 IOPS,    94.45 MiB/s
00:08:30.266  [2024-12-09 03:58:58.731577] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:30.266  [2024-12-09 03:58:58.731616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:30.266  [2024-12-09 03:58:58.739642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:30.266  [2024-12-09 03:58:58.739668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:30.266  
00:08:30.266                                                                                                  Latency(us)
00:08:30.266  
00:08:30.266   Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:08:30.266  Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:08:30.266  	 Nvme1n1             :       5.01   12091.00      94.46       0.00     0.00   10573.47    4636.07   21068.61
00:08:30.266  
00:08:30.266   ===================================================================================================================
00:08:30.266  
00:08:30.266   Total                       :              12091.00      94.46       0.00     0.00   10573.47    4636.07   21068.61
00:08:30.266  [2024-12-09 03:58:58.746386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:30.266  [2024-12-09 03:58:58.746411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:30.525  [... the same pair of errors repeated 26 more times between 03:58:58.754401 and 03:58:58.954943 ...]
00:08:30.525  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (151466) - No such process
00:08:30.525   03:58:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 151466
00:08:30.525   03:58:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:30.525   03:58:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:30.525   03:58:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:30.525   03:58:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:30.525   03:58:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:08:30.525   03:58:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:30.525   03:58:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:30.525  delay0
00:08:30.525   03:58:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:30.525   03:58:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:08:30.525   03:58:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:30.525   03:58:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:30.525   03:58:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:30.525   03:58:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:08:30.784  [2024-12-09 03:58:59.118455] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:08:37.344  [2024-12-09 03:59:05.588973] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x120cc30 is same with the state(6) to be set
00:08:37.344  Initializing NVMe Controllers
00:08:37.344  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:37.344  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:37.344  Initialization complete. Launching workers.
00:08:37.344  NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 2363
00:08:37.344  CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 2650, failed to submit 33
00:08:37.344  	 success 2511, unsuccessful 139, failed 0
00:08:37.344   03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:08:37.344   03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:08:37.344   03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:37.344   03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:08:37.344   03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:37.344   03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:08:37.344   03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:37.344   03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:37.344  rmmod nvme_tcp
00:08:37.344  rmmod nvme_fabrics
00:08:37.344  rmmod nvme_keyring
00:08:37.344   03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:37.344   03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:08:37.344   03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:08:37.344   03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 150163 ']'
00:08:37.344   03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 150163
00:08:37.344   03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 150163 ']'
00:08:37.344   03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 150163
00:08:37.344    03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:08:37.344   03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:37.344    03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 150163
00:08:37.344   03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:08:37.344   03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:08:37.344   03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 150163'
00:08:37.344  killing process with pid 150163
00:08:37.344   03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 150163
00:08:37.344   03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 150163
00:08:37.604   03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:08:37.604   03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:08:37.604   03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:08:37.604   03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:08:37.604   03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:08:37.604   03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:08:37.604   03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:08:37.604   03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:08:37.604   03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:08:37.604   03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:37.604   03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:37.604    03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:39.515   03:59:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:08:39.515  
00:08:39.515  real	0m28.463s
00:08:39.515  user	0m42.697s
00:08:39.515  sys	0m7.595s
00:08:39.515   03:59:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:39.515   03:59:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:39.515  ************************************
00:08:39.515  END TEST nvmf_zcopy
00:08:39.515  ************************************
00:08:39.515   03:59:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:08:39.515   03:59:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:39.515   03:59:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:39.515   03:59:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:39.515  ************************************
00:08:39.515  START TEST nvmf_nmic
00:08:39.515  ************************************
00:08:39.515   03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:08:39.515  * Looking for test storage...
00:08:39.515  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:08:39.515    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:39.515     03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version
00:08:39.515     03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:39.774    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:39.774    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:39.774    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:39.774    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:39.774    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-:
00:08:39.774    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1
00:08:39.774    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-:
00:08:39.774    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2
00:08:39.774    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<'
00:08:39.774    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2
00:08:39.774    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1
00:08:39.774    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:39.774    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in
00:08:39.774    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1
00:08:39.774    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:39.774    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:39.774     03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1
00:08:39.774     03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1
00:08:39.774     03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:39.774     03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1
00:08:39.774    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1
00:08:39.774     03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2
00:08:39.774     03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2
00:08:39.774     03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:39.774     03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2
00:08:39.774    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2
00:08:39.774    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:39.774    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:39.774    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0
00:08:39.775    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:39.775    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:39.775  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:39.775  		--rc genhtml_branch_coverage=1
00:08:39.775  		--rc genhtml_function_coverage=1
00:08:39.775  		--rc genhtml_legend=1
00:08:39.775  		--rc geninfo_all_blocks=1
00:08:39.775  		--rc geninfo_unexecuted_blocks=1
00:08:39.775  		
00:08:39.775  		'
00:08:39.775    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:39.775  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:39.775  		--rc genhtml_branch_coverage=1
00:08:39.775  		--rc genhtml_function_coverage=1
00:08:39.775  		--rc genhtml_legend=1
00:08:39.775  		--rc geninfo_all_blocks=1
00:08:39.775  		--rc geninfo_unexecuted_blocks=1
00:08:39.775  		
00:08:39.775  		'
00:08:39.775    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:08:39.775  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:39.775  		--rc genhtml_branch_coverage=1
00:08:39.775  		--rc genhtml_function_coverage=1
00:08:39.775  		--rc genhtml_legend=1
00:08:39.775  		--rc geninfo_all_blocks=1
00:08:39.775  		--rc geninfo_unexecuted_blocks=1
00:08:39.775  		
00:08:39.775  		'
00:08:39.775    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:08:39.775  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:39.775  		--rc genhtml_branch_coverage=1
00:08:39.775  		--rc genhtml_function_coverage=1
00:08:39.775  		--rc genhtml_legend=1
00:08:39.775  		--rc geninfo_all_blocks=1
00:08:39.775  		--rc geninfo_unexecuted_blocks=1
00:08:39.775  		
00:08:39.775  		'
00:08:39.775   03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:08:39.775     03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:08:39.775    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:08:39.775    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:08:39.775    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:08:39.775    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:08:39.775    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:08:39.775    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:08:39.775    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:08:39.775    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:08:39.775    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:08:39.775     03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:08:39.775    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:08:39.775    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:08:39.775    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:08:39.775    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:08:39.775    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:08:39.775    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:08:39.775    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:08:39.775     03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob
00:08:39.775     03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:39.775     03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:39.775     03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:39.775      03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:39.775      03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:39.775      03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:39.775      03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:08:39.775      03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:39.775    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
00:08:39.775    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:08:39.775    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:08:39.775    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:08:39.775    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:08:39.775    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:08:39.775    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:08:39.775  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:08:39.775    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:08:39.775    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:08:39.775    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0
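The `[: : integer expression expected` message above is bash's `test` builtin rejecting an empty operand to `-eq` at nvmf/common.sh line 33 (`'[' '' -eq 1 ']'`). A minimal reproduction, with a defensive default; `flag` is a hypothetical stand-in for whatever unset variable reaches that comparison:

```shell
flag=""   # hypothetical stand-in for the unset variable at common.sh:33

# This form errors with "[: : integer expression expected" (status 2),
# because -eq requires both operands to parse as integers:
[ "$flag" -eq 1 ] 2>/dev/null
echo "status without a guard: $?"

# Defaulting the empty value to 0 before comparing avoids the error:
[ "${flag:-0}" -eq 1 ] && echo "flag set" || echo "flag unset"
```

The harness tolerates the error because the comparison simply evaluates false and execution continues, as the trace shows.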
00:08:39.775   03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:08:39.775   03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:08:39.775   03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
00:08:39.775   03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:08:39.775   03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:08:39.775   03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs
00:08:39.775   03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no
00:08:39.775   03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns
00:08:39.775   03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:39.775   03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:39.775    03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:39.775   03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:08:39.775   03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:08:39.775   03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable
00:08:39.775   03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=()
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=()
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=()
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=()
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=()
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=()
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=()
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:08:42.308  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:08:42.308  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]]
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:08:42.308  Found net devices under 0000:0a:00.0: cvl_0_0
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]]
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:08:42.308  Found net devices under 0000:0a:00.1: cvl_0_1
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
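The `ipts` call above expands (common.sh@790) into an iptables rule tagged with an `SPDK_NVMF:` comment, so teardown can later delete exactly the rules the harness added. A dry-run sketch of that wrapper, echoing instead of executing since real iptables changes need root:

```shell
# Dry-run sketch of the harness's ipts helper: append a marker comment so
# the rule can be found and removed later. Swap echo for the real command
# (run as root) to actually modify the firewall.
ipts() {
    echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

The echoed line matches the expanded command recorded in the trace.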
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:08:42.308  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:42.308  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms
00:08:42.308  
00:08:42.308  --- 10.0.0.2 ping statistics ---
00:08:42.308  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:42.308  rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms
00:08:42.308   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:42.308  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:42.308  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms
00:08:42.308  
00:08:42.308  --- 10.0.0.1 ping statistics ---
00:08:42.309  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:42.309  rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms
00:08:42.309   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:42.309   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0
00:08:42.309   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:42.309   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:42.309   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:08:42.309   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:08:42.309   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:42.309   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:08:42.309   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:08:42.309   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:08:42.309   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:08:42.309   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:42.309   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:08:42.309   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=154906
00:08:42.309   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:08:42.309   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 154906
00:08:42.309   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 154906 ']'
00:08:42.309   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:42.309   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:42.309   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:42.309  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:42.309   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:42.309   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:08:42.309  [2024-12-09 03:59:10.587953] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:08:42.309  [2024-12-09 03:59:10.588049] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:42.309  [2024-12-09 03:59:10.664322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:42.309  [2024-12-09 03:59:10.727317] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:08:42.309  [2024-12-09 03:59:10.727372] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:42.309  [2024-12-09 03:59:10.727386] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:42.309  [2024-12-09 03:59:10.727398] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:42.309  [2024-12-09 03:59:10.727408] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:42.309  [2024-12-09 03:59:10.728976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:42.309  [2024-12-09 03:59:10.729003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:08:42.309  [2024-12-09 03:59:10.729061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:08:42.309  [2024-12-09 03:59:10.729064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:42.309   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:42.309   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0
00:08:42.309   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:08:42.309   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:42.309   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:08:42.309   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:42.309   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:08:42.309   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.309   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:08:42.309  [2024-12-09 03:59:10.880537] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:42.567   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.567   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:08:42.567   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.567   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:08:42.567  Malloc0
00:08:42.567   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.567   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:08:42.567   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.567   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:08:42.567   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.567   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:08:42.567   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.567   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:08:42.567   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.567   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:08:42.567   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.567   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:08:42.567  [2024-12-09 03:59:10.953564] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:08:42.567   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.567   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:08:42.567  test case1: single bdev can't be used in multiple subsystems
00:08:42.567   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:08:42.567   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.568   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:08:42.568   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.568   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:08:42.568   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.568   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:08:42.568   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.568   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:08:42.568   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:08:42.568   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.568   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:08:42.568  [2024-12-09 03:59:10.977344] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:08:42.568  [2024-12-09 03:59:10.977375] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:08:42.568  [2024-12-09 03:59:10.977391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:42.568  request:
00:08:42.568  {
00:08:42.568  "nqn": "nqn.2016-06.io.spdk:cnode2",
00:08:42.568  "namespace": {
00:08:42.568  "bdev_name": "Malloc0",
00:08:42.568  "no_auto_visible": false,
00:08:42.568  "hide_metadata": false
00:08:42.568  },
00:08:42.568  "method": "nvmf_subsystem_add_ns",
00:08:42.568  "req_id": 1
00:08:42.568  }
00:08:42.568  Got JSON-RPC error response
00:08:42.568  response:
00:08:42.568  {
00:08:42.568  "code": -32602,
00:08:42.568  "message": "Invalid parameters"
00:08:42.568  }
00:08:42.568   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:08:42.568   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:08:42.568   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:08:42.568   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
00:08:42.568   Adding namespace failed - expected result.
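Test case1 deliberately provokes the -32602 error above: Malloc0 is already claimed `exclusive_write` by cnode1, so adding it to cnode2 must fail, and the script inverts the check so that success would be the bug. A sketch of that expected-failure pattern (mirroring nmic.sh lines 28-36 in the trace, with `rpc_cmd` faked to return the error status rather than driving the real RPC socket):

```shell
# Stand-in for the harness's rpc_cmd; the real one calls scripts/rpc.py
# and here would receive the -32602 "Invalid parameters" JSON-RPC error.
rpc_cmd() { return 1; }

nmic_status=0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || nmic_status=1

if [ "$nmic_status" -eq 0 ]; then
    echo "Adding namespace passed - failure expected."
    exit 1
else
    echo "Adding namespace failed - expected result."
fi
```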
00:08:42.568   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:08:42.568  test case2: host connect to nvmf target in multiple paths
00:08:42.568   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:08:42.568   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.568   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:08:42.568  [2024-12-09 03:59:10.985462] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:08:42.568   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.568   03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:08:43.136   03:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
00:08:43.736   03:59:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:08:43.736   03:59:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0
00:08:43.736   03:59:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:08:43.736   03:59:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:08:43.736   03:59:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2
00:08:46.263   03:59:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:08:46.263    03:59:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:08:46.263    03:59:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:08:46.263   03:59:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:08:46.263   03:59:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:08:46.263   03:59:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0
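The `waitforserial` steps traced above poll `lsblk -l -o NAME,SERIAL` until a block device with the subsystem's serial (SPDKISFASTANDAWESOME) appears, giving up after 16 tries. The same loop in isolation, with the probe injected so it runs without NVMe hardware and a shortened sleep (the harness sleeps 2s between probes):

```shell
# Polling pattern from waitforserial: retry until the probe reports at
# least one matching device, up to 16 attempts.
wait_for_serial() {
    local serial=$1 probe=$2 i=0
    while (( i++ <= 15 )); do
        (( $("$probe" "$serial") >= 1 )) && return 0
        sleep 0.1
    done
    return 1
}

# Fake probe standing in for: lsblk -l -o NAME,SERIAL | grep -c "$serial"
fake_probe() { echo 1; }

wait_for_serial SPDKISFASTANDAWESOME fake_probe && echo "serial visible"
```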
00:08:46.263   03:59:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:08:46.263  [global]
00:08:46.263  thread=1
00:08:46.263  invalidate=1
00:08:46.263  rw=write
00:08:46.263  time_based=1
00:08:46.263  runtime=1
00:08:46.263  ioengine=libaio
00:08:46.263  direct=1
00:08:46.263  bs=4096
00:08:46.263  iodepth=1
00:08:46.263  norandommap=0
00:08:46.263  numjobs=1
00:08:46.263  
00:08:46.263  verify_dump=1
00:08:46.263  verify_backlog=512
00:08:46.263  verify_state_save=0
00:08:46.263  do_verify=1
00:08:46.263  verify=crc32c-intel
00:08:46.263  [job0]
00:08:46.263  filename=/dev/nvme0n1
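The job file above describes a 1-second, queue-depth-1, 4KiB sequential-write workload with crc32c verification against the freshly connected namespace. A minimal sketch of generating such a job file from the parameters the wrapper was invoked with (`-p nvmf -i 4096 -d 1 -t write`); the real generator is scripts/fio-wrapper, whose exact logic is not shown in this log:

```shell
# Emit a one-job fio verify workload; values mirror the job file above.
make_fio_job() {
    local dev=$1 bs=$2 depth=$3 rw=$4
    printf '%s\n' \
        "[global]" "thread=1" "invalidate=1" "rw=$rw" "time_based=1" \
        "runtime=1" "ioengine=libaio" "direct=1" "bs=$bs" "iodepth=$depth" \
        "do_verify=1" "verify=crc32c-intel" "" "[job0]" "filename=$dev"
}

make_fio_job /dev/nvme0n1 4096 1 write
```

With `iodepth=1` and `direct=1`, each 4KiB write completes before the next is issued, which is why the bandwidth figures that follow are small and latency-dominated.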
00:08:46.263  Could not set queue depth (nvme0n1)
00:08:46.263  job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:08:46.263  fio-3.35
00:08:46.263  Starting 1 thread
00:08:47.635  
00:08:47.635  job0: (groupid=0, jobs=1): err= 0: pid=155428: Mon Dec  9 03:59:15 2024
00:08:47.635    read: IOPS=55, BW=222KiB/s (228kB/s)(228KiB/1026msec)
00:08:47.635      slat (nsec): min=5795, max=33405, avg=15533.07, stdev=9950.79
00:08:47.635      clat (usec): min=194, max=41049, avg=15973.19, stdev=19987.53
00:08:47.635       lat (usec): min=199, max=41065, avg=15988.73, stdev=19994.29
00:08:47.635      clat percentiles (usec):
00:08:47.635       |  1.00th=[  194],  5.00th=[  215], 10.00th=[  237], 20.00th=[  262],
00:08:47.635       | 30.00th=[  273], 40.00th=[  277], 50.00th=[  297], 60.00th=[  326],
00:08:47.635       | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:08:47.635       | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:08:47.635       | 99.99th=[41157]
00:08:47.635    write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets
00:08:47.635      slat (usec): min=7, max=27186, avg=65.49, stdev=1200.95
00:08:47.635      clat (usec): min=127, max=230, avg=154.41, stdev=14.17
00:08:47.635       lat (usec): min=137, max=27398, avg=219.90, stdev=1203.61
00:08:47.635      clat percentiles (usec):
00:08:47.635       |  1.00th=[  133],  5.00th=[  137], 10.00th=[  139], 20.00th=[  143],
00:08:47.635       | 30.00th=[  145], 40.00th=[  149], 50.00th=[  153], 60.00th=[  155],
00:08:47.635       | 70.00th=[  161], 80.00th=[  165], 90.00th=[  174], 95.00th=[  178],
00:08:47.635       | 99.00th=[  198], 99.50th=[  212], 99.90th=[  231], 99.95th=[  231],
00:08:47.635       | 99.99th=[  231]
00:08:47.635     bw (  KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1
00:08:47.635     iops        : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:08:47.635    lat (usec)   : 250=91.92%, 500=4.22%
00:08:47.635    lat (msec)   : 50=3.87%
00:08:47.635    cpu          : usr=0.68%, sys=0.68%, ctx=571, majf=0, minf=1
00:08:47.635    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:08:47.635       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:47.635       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:47.635       issued rwts: total=57,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:47.635       latency   : target=0, window=0, percentile=100.00%, depth=1
00:08:47.635  
00:08:47.635  Run status group 0 (all jobs):
00:08:47.635     READ: bw=222KiB/s (228kB/s), 222KiB/s-222KiB/s (228kB/s-228kB/s), io=228KiB (233kB), run=1026-1026msec
00:08:47.635    WRITE: bw=1996KiB/s (2044kB/s), 1996KiB/s-1996KiB/s (2044kB/s-2044kB/s), io=2048KiB (2097kB), run=1026-1026msec
00:08:47.635  
00:08:47.635  Disk stats (read/write):
00:08:47.635    nvme0n1: ios=105/512, merge=0/0, ticks=972/79, in_queue=1051, util=98.60%
00:08:47.635   03:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:08:47.635  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:08:47.635   03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:08:47.635   03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0
00:08:47.635   03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:08:47.635   03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:08:47.635   03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:08:47.635   03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:08:47.635   03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0
00:08:47.635   03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:08:47.636   03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini
00:08:47.636   03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:47.636   03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync
00:08:47.636   03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:47.636   03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e
00:08:47.636   03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:47.636   03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:47.636  rmmod nvme_tcp
00:08:47.636  rmmod nvme_fabrics
00:08:47.636  rmmod nvme_keyring
00:08:47.636   03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:47.636   03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e
00:08:47.636   03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0
00:08:47.636   03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 154906 ']'
00:08:47.636   03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 154906
00:08:47.636   03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 154906 ']'
00:08:47.636   03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 154906
00:08:47.636    03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname
00:08:47.636   03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:47.636    03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 154906
00:08:47.636   03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:47.636   03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:47.636   03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 154906'
00:08:47.636  killing process with pid 154906
00:08:47.636   03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 154906
00:08:47.636   03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 154906
00:08:47.894   03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:08:47.894   03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:08:47.894   03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:08:47.894   03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr
00:08:47.894   03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save
00:08:47.894   03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:08:47.894   03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore
00:08:47.894   03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:08:47.894   03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns
00:08:47.894   03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:47.894   03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:47.894    03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:50.436   03:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:08:50.436  
00:08:50.436  real	0m10.439s
00:08:50.436  user	0m23.688s
00:08:50.436  sys	0m2.775s
00:08:50.436   03:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:50.436   03:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:08:50.436  ************************************
00:08:50.436  END TEST nvmf_nmic
00:08:50.436  ************************************
00:08:50.436   03:59:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp
00:08:50.436   03:59:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:50.436   03:59:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:50.436   03:59:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:50.436  ************************************
00:08:50.436  START TEST nvmf_fio_target
00:08:50.436  ************************************
00:08:50.436   03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp
00:08:50.436  * Looking for test storage...
00:08:50.436  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:50.436     03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version
00:08:50.436     03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-:
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-:
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<'
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:50.436     03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1
00:08:50.436     03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1
00:08:50.436     03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:50.436     03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1
00:08:50.436     03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2
00:08:50.436     03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2
00:08:50.436     03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:50.436     03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:50.436  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:50.436  		--rc genhtml_branch_coverage=1
00:08:50.436  		--rc genhtml_function_coverage=1
00:08:50.436  		--rc genhtml_legend=1
00:08:50.436  		--rc geninfo_all_blocks=1
00:08:50.436  		--rc geninfo_unexecuted_blocks=1
00:08:50.436  		
00:08:50.436  		'
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:50.436  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:50.436  		--rc genhtml_branch_coverage=1
00:08:50.436  		--rc genhtml_function_coverage=1
00:08:50.436  		--rc genhtml_legend=1
00:08:50.436  		--rc geninfo_all_blocks=1
00:08:50.436  		--rc geninfo_unexecuted_blocks=1
00:08:50.436  		
00:08:50.436  		'
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:08:50.436  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:50.436  		--rc genhtml_branch_coverage=1
00:08:50.436  		--rc genhtml_function_coverage=1
00:08:50.436  		--rc genhtml_legend=1
00:08:50.436  		--rc geninfo_all_blocks=1
00:08:50.436  		--rc geninfo_unexecuted_blocks=1
00:08:50.436  		
00:08:50.436  		'
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:08:50.436  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:50.436  		--rc genhtml_branch_coverage=1
00:08:50.436  		--rc genhtml_function_coverage=1
00:08:50.436  		--rc genhtml_legend=1
00:08:50.436  		--rc geninfo_all_blocks=1
00:08:50.436  		--rc geninfo_unexecuted_blocks=1
00:08:50.436  		
00:08:50.436  		'
00:08:50.436   03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:08:50.436     03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:08:50.436     03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:08:50.436    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:08:50.436     03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob
00:08:50.436     03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:50.436     03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:50.436     03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:50.437      03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:50.437      03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:50.437      03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:50.437      03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH
00:08:50.437      03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:50.437    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0
00:08:50.437    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:08:50.437    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:08:50.437    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:08:50.437    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:08:50.437    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:08:50.437    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:08:50.437  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:08:50.437    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:08:50.437    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:08:50.437    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0
00:08:50.437   03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:08:50.437   03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:08:50.437   03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:08:50.437   03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit
00:08:50.437   03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:08:50.437   03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:08:50.437   03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs
00:08:50.437   03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no
00:08:50.437   03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns
00:08:50.437   03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:50.437   03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:50.437    03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:50.437   03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:08:50.437   03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:08:50.437   03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable
00:08:50.437   03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=()
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=()
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=()
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=()
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=()
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=()
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=()
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:08:52.343  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:08:52.343  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:08:52.343  Found net devices under 0000:0a:00.0: cvl_0_0
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:08:52.343  Found net devices under 0000:0a:00.1: cvl_0_1
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:08:52.343   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:08:52.344   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:08:52.344   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:08:52.344   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:08:52.344   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:08:52.344   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:08:52.344   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:08:52.344   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:08:52.344   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:08:52.344   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:08:52.344   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:08:52.344   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:08:52.344   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:08:52.344   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:08:52.603   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:08:52.603   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:08:52.603   03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:08:52.603   03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:08:52.603   03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:08:52.603   03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:08:52.603   03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:52.603   03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:52.603   03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:52.603   03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:08:52.603   03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:08:52.603  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:52.603  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms
00:08:52.603  
00:08:52.603  --- 10.0.0.2 ping statistics ---
00:08:52.603  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:52.603  rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms
00:08:52.603   03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:52.603  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:52.603  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms
00:08:52.603  
00:08:52.603  --- 10.0.0.1 ping statistics ---
00:08:52.603  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:52.603  rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms
00:08:52.603   03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:52.603   03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0
00:08:52.603   03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:52.603   03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:52.603   03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:08:52.603   03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:08:52.603   03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:52.603   03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:08:52.603   03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
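The namespace plumbing traced above (common.sh@267-284) boils down to the sketch below. Interface and namespace names are taken from the log; `echo` stands in for root execution, so this is a dry run, not a working replica of `nvmf_tcp_init`:

```shell
#!/usr/bin/env bash
# Dry-run sketch: move the target NIC into its own network namespace and
# address both ends of the link, mirroring the sequence in the log above.
ns=cvl_0_0_ns_spdk
tgt=cvl_0_0   # target-side interface (10.0.0.2, inside the namespace)
ini=cvl_0_1   # initiator-side interface (10.0.0.1, in the root namespace)
for cmd in \
  "ip netns add $ns" \
  "ip link set $tgt netns $ns" \
  "ip addr add 10.0.0.1/24 dev $ini" \
  "ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt" \
  "ip link set $ini up" \
  "ip netns exec $ns ip link set $tgt up"; do
  echo "$cmd"   # replace echo with eval (as root) to actually apply
done
```

Isolating the target NIC in a namespace is what lets one host act as both NVMe/TCP target (10.0.0.2) and initiator (10.0.0.1) over real hardware, which the subsequent `ping` checks confirm in both directions.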
00:08:52.862   03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF
00:08:52.862   03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:08:52.862   03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:52.862   03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:08:52.862   03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=157643
00:08:52.862   03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:08:52.862   03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 157643
00:08:52.862   03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 157643 ']'
00:08:52.862   03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:52.862   03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:52.862   03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:52.862  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:52.862   03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:52.862   03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:08:52.862  [2024-12-09 03:59:21.244033] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:08:52.862  [2024-12-09 03:59:21.244110] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:52.862  [2024-12-09 03:59:21.313725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:52.862  [2024-12-09 03:59:21.367385] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:08:52.862  [2024-12-09 03:59:21.367445] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:52.862  [2024-12-09 03:59:21.367458] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:52.862  [2024-12-09 03:59:21.367469] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:52.862  [2024-12-09 03:59:21.367479] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:52.862  [2024-12-09 03:59:21.369059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:52.862  [2024-12-09 03:59:21.369167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:08:52.862  [2024-12-09 03:59:21.369258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:08:52.862  [2024-12-09 03:59:21.369261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:53.120   03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:53.120   03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0
00:08:53.120   03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:08:53.120   03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:53.120   03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:08:53.120   03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:53.120   03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:08:53.376  [2024-12-09 03:59:21.811212] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:53.376    03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:08:53.634   03:59:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 '
00:08:53.634    03:59:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:08:53.895   03:59:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1
00:08:53.895    03:59:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:08:54.153   03:59:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 '
00:08:54.153    03:59:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:08:54.719   03:59:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3
00:08:54.719   03:59:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
00:08:54.719    03:59:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:08:54.977   03:59:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 '
00:08:54.977    03:59:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:08:55.543   03:59:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 '
00:08:55.543    03:59:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:08:55.543   03:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6
00:08:55.543   03:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
00:08:56.108   03:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:08:56.108   03:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:08:56.108   03:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:08:56.367   03:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:08:56.367   03:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:08:56.625   03:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:08:56.881  [2024-12-09 03:59:25.427402] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:08:56.881   03:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
00:08:57.447   03:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
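The target-side RPC sequence from fio.sh@17-44 condenses to the sketch below. The `rpc` wrapper here is a stand-in that only prints the calls (the real script is `spdk/scripts/rpc.py`); bdev names and the subsystem NQN are taken from the log:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target build: transport, malloc bdevs, two RAID
# bdevs, one subsystem with four namespaces, and a TCP listener.
rpc() { echo "rpc.py $*"; }   # stand-in; swap echo for the real rpc.py
rpc nvmf_create_transport -t tcp -o -u 8192
# seven malloc bdevs (Malloc0..Malloc6) are created with identical args:
for i in 0 1 2 3 4 5 6; do rpc bdev_malloc_create 64 512; done
rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
done
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The four namespaces (two plain mallocs, one raid0, one concat) are what surface on the initiator as nvme0n1..nvme0n4 after the `nvme connect` that follows.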
00:08:57.447   03:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:08:58.381   03:59:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4
00:08:58.381   03:59:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0
00:08:58.381   03:59:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:08:58.381   03:59:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]]
00:08:58.381   03:59:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4
00:08:58.381   03:59:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2
00:09:00.280   03:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:09:00.280    03:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:09:00.280    03:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:09:00.280   03:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4
00:09:00.280   03:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:09:00.280   03:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0
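The `waitforserial` check above (autotest_common.sh@1202-1212) counts block devices whose SERIAL matches and retries until the expected count appears. A self-contained sketch, with `lsblk` mocked so it runs without hardware:

```shell
#!/usr/bin/env bash
# Sketch of the waitforserial pattern: poll `lsblk -l -o NAME,SERIAL`,
# grep-count the serial, succeed once the expected namespace count shows up.
lsblk_mock() {   # stands in for: lsblk -l -o NAME,SERIAL
  printf 'nvme0n%d SPDKISFASTANDAWESOME\n' 1 2 3 4
}
serial=SPDKISFASTANDAWESOME
expected=4
nvme_devices=$(lsblk_mock | grep -c "$serial")
if (( nvme_devices == expected )); then
  echo "found $nvme_devices namespaces for $serial"
fi
```

In the real helper this sits in a bounded loop (up to 15 iterations with a 2-second sleep), which is why the log shows a single `sleep 2` before the count of 4 is reached and `return 0` fires.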
00:09:00.280   03:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:09:00.280  [global]
00:09:00.280  thread=1
00:09:00.280  invalidate=1
00:09:00.280  rw=write
00:09:00.280  time_based=1
00:09:00.280  runtime=1
00:09:00.280  ioengine=libaio
00:09:00.280  direct=1
00:09:00.280  bs=4096
00:09:00.280  iodepth=1
00:09:00.280  norandommap=0
00:09:00.280  numjobs=1
00:09:00.280  
00:09:00.280  verify_dump=1
00:09:00.280  verify_backlog=512
00:09:00.280  verify_state_save=0
00:09:00.280  do_verify=1
00:09:00.280  verify=crc32c-intel
00:09:00.280  [job0]
00:09:00.280  filename=/dev/nvme0n1
00:09:00.280  [job1]
00:09:00.280  filename=/dev/nvme0n2
00:09:00.280  [job2]
00:09:00.280  filename=/dev/nvme0n3
00:09:00.280  [job3]
00:09:00.280  filename=/dev/nvme0n4
00:09:00.280  Could not set queue depth (nvme0n1)
00:09:00.280  Could not set queue depth (nvme0n2)
00:09:00.280  Could not set queue depth (nvme0n3)
00:09:00.280  Could not set queue depth (nvme0n4)
00:09:00.538  job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:00.538  job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:00.538  job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:00.538  job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:00.538  fio-3.35
00:09:00.538  Starting 4 threads
00:09:01.911  
00:09:01.911  job0: (groupid=0, jobs=1): err= 0: pid=158711: Mon Dec  9 03:59:30 2024
00:09:01.911    read: IOPS=995, BW=3981KiB/s (4076kB/s)(4084KiB/1026msec)
00:09:01.911      slat (nsec): min=5688, max=66165, avg=11050.41, stdev=6681.32
00:09:01.911      clat (usec): min=175, max=42218, avg=778.36, stdev=4750.18
00:09:01.911       lat (usec): min=181, max=42236, avg=789.41, stdev=4751.90
00:09:01.911      clat percentiles (usec):
00:09:01.911       |  1.00th=[  182],  5.00th=[  186], 10.00th=[  190], 20.00th=[  196],
00:09:01.911       | 30.00th=[  202], 40.00th=[  208], 50.00th=[  215], 60.00th=[  223],
00:09:01.911       | 70.00th=[  229], 80.00th=[  237], 90.00th=[  249], 95.00th=[  273],
00:09:01.911       | 99.00th=[40633], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206],
00:09:01.911       | 99.99th=[42206]
00:09:01.911    write: IOPS=998, BW=3992KiB/s (4088kB/s)(4096KiB/1026msec); 0 zone resets
00:09:01.911      slat (nsec): min=7428, max=54759, avg=12148.91, stdev=6946.10
00:09:01.911      clat (usec): min=134, max=314, avg=194.89, stdev=31.91
00:09:01.911       lat (usec): min=142, max=325, avg=207.04, stdev=34.23
00:09:01.911      clat percentiles (usec):
00:09:01.911       |  1.00th=[  139],  5.00th=[  145], 10.00th=[  149], 20.00th=[  163],
00:09:01.911       | 30.00th=[  182], 40.00th=[  186], 50.00th=[  194], 60.00th=[  202],
00:09:01.911       | 70.00th=[  212], 80.00th=[  225], 90.00th=[  239], 95.00th=[  247],
00:09:01.911       | 99.00th=[  262], 99.50th=[  265], 99.90th=[  273], 99.95th=[  314],
00:09:01.911       | 99.99th=[  314]
00:09:01.911     bw (  KiB/s): min= 8192, max= 8192, per=59.20%, avg=8192.00, stdev= 0.00, samples=1
00:09:01.911     iops        : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1
00:09:01.911    lat (usec)   : 250=93.15%, 500=6.06%, 750=0.10%
00:09:01.911    lat (msec)   : 50=0.68%
00:09:01.911    cpu          : usr=1.66%, sys=3.12%, ctx=2047, majf=0, minf=1
00:09:01.911    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:01.911       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:01.911       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:01.911       issued rwts: total=1021,1024,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:01.911       latency   : target=0, window=0, percentile=100.00%, depth=1
00:09:01.911  job1: (groupid=0, jobs=1): err= 0: pid=158713: Mon Dec  9 03:59:30 2024
00:09:01.911    read: IOPS=276, BW=1107KiB/s (1134kB/s)(1124KiB/1015msec)
00:09:01.911      slat (nsec): min=6635, max=34368, avg=11571.07, stdev=5764.34
00:09:01.911      clat (usec): min=206, max=42959, avg=3265.30, stdev=10547.38
00:09:01.911       lat (usec): min=216, max=42979, avg=3276.87, stdev=10551.01
00:09:01.911      clat percentiles (usec):
00:09:01.911       |  1.00th=[  208],  5.00th=[  219], 10.00th=[  235], 20.00th=[  247],
00:09:01.911       | 30.00th=[  253], 40.00th=[  273], 50.00th=[  289], 60.00th=[  424],
00:09:01.911       | 70.00th=[  465], 80.00th=[  506], 90.00th=[  635], 95.00th=[41157],
00:09:01.911       | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730],
00:09:01.911       | 99.99th=[42730]
00:09:01.911    write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets
00:09:01.911      slat (nsec): min=5727, max=45693, avg=11611.37, stdev=5538.97
00:09:01.911      clat (usec): min=137, max=304, avg=165.91, stdev=14.23
00:09:01.911       lat (usec): min=144, max=311, avg=177.52, stdev=15.77
00:09:01.911      clat percentiles (usec):
00:09:01.911       |  1.00th=[  141],  5.00th=[  149], 10.00th=[  151], 20.00th=[  155],
00:09:01.911       | 30.00th=[  159], 40.00th=[  163], 50.00th=[  165], 60.00th=[  167],
00:09:01.911       | 70.00th=[  172], 80.00th=[  176], 90.00th=[  182], 95.00th=[  188],
00:09:01.911       | 99.00th=[  200], 99.50th=[  221], 99.90th=[  306], 99.95th=[  306],
00:09:01.911       | 99.99th=[  306]
00:09:01.911     bw (  KiB/s): min= 4096, max= 4096, per=29.60%, avg=4096.00, stdev= 0.00, samples=1
00:09:01.911     iops        : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:09:01.911    lat (usec)   : 250=73.77%, 500=18.28%, 750=5.30%, 1000=0.13%
00:09:01.911    lat (msec)   : 50=2.52%
00:09:01.911    cpu          : usr=0.79%, sys=0.69%, ctx=793, majf=0, minf=1
00:09:01.911    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:01.911       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:01.911       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:01.911       issued rwts: total=281,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:01.911       latency   : target=0, window=0, percentile=100.00%, depth=1
00:09:01.911  job2: (groupid=0, jobs=1): err= 0: pid=158716: Mon Dec  9 03:59:30 2024
00:09:01.911    read: IOPS=265, BW=1064KiB/s (1089kB/s)(1084KiB/1019msec)
00:09:01.911      slat (nsec): min=7641, max=43305, avg=12400.84, stdev=6770.97
00:09:01.911      clat (usec): min=223, max=42115, avg=3294.18, stdev=10480.84
00:09:01.911       lat (usec): min=233, max=42135, avg=3306.58, stdev=10482.62
00:09:01.911      clat percentiles (usec):
00:09:01.911       |  1.00th=[  227],  5.00th=[  233], 10.00th=[  241], 20.00th=[  245],
00:09:01.911       | 30.00th=[  262], 40.00th=[  334], 50.00th=[  429], 60.00th=[  457],
00:09:01.911       | 70.00th=[  478], 80.00th=[  502], 90.00th=[  562], 95.00th=[41157],
00:09:01.911       | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:09:01.911       | 99.99th=[42206]
00:09:01.911    write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets
00:09:01.911      slat (nsec): min=6763, max=61035, avg=13019.91, stdev=6172.62
00:09:01.911      clat (usec): min=172, max=326, avg=220.02, stdev=22.12
00:09:01.911       lat (usec): min=181, max=373, avg=233.04, stdev=22.27
00:09:01.911      clat percentiles (usec):
00:09:01.911       |  1.00th=[  180],  5.00th=[  188], 10.00th=[  194], 20.00th=[  200],
00:09:01.911       | 30.00th=[  206], 40.00th=[  212], 50.00th=[  219], 60.00th=[  225],
00:09:01.911       | 70.00th=[  233], 80.00th=[  239], 90.00th=[  249], 95.00th=[  258],
00:09:01.911       | 99.00th=[  269], 99.50th=[  277], 99.90th=[  326], 99.95th=[  326],
00:09:01.911       | 99.99th=[  326]
00:09:01.911     bw (  KiB/s): min= 4096, max= 4096, per=29.60%, avg=4096.00, stdev= 0.00, samples=1
00:09:01.911     iops        : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:09:01.911    lat (usec)   : 250=67.43%, 500=25.16%, 750=4.73%, 1000=0.13%
00:09:01.911    lat (msec)   : 20=0.13%, 50=2.43%
00:09:01.911    cpu          : usr=0.79%, sys=0.88%, ctx=786, majf=0, minf=1
00:09:01.911    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:01.911       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:01.911       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:01.911       issued rwts: total=271,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:01.911       latency   : target=0, window=0, percentile=100.00%, depth=1
00:09:01.911  job3: (groupid=0, jobs=1): err= 0: pid=158717: Mon Dec  9 03:59:30 2024
00:09:01.911    read: IOPS=1000, BW=4000KiB/s (4096kB/s)(4144KiB/1036msec)
00:09:01.911      slat (nsec): min=5750, max=38965, avg=10236.67, stdev=5467.23
00:09:01.911      clat (usec): min=194, max=41395, avg=661.58, stdev=4179.87
00:09:01.911       lat (usec): min=200, max=41413, avg=671.82, stdev=4182.04
00:09:01.911      clat percentiles (usec):
00:09:01.911       |  1.00th=[  198],  5.00th=[  202], 10.00th=[  206], 20.00th=[  210],
00:09:01.911       | 30.00th=[  215], 40.00th=[  221], 50.00th=[  225], 60.00th=[  231],
00:09:01.911       | 70.00th=[  239], 80.00th=[  247], 90.00th=[  255], 95.00th=[  265],
00:09:01.911       | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:09:01.911       | 99.99th=[41157]
00:09:01.911    write: IOPS=1482, BW=5931KiB/s (6073kB/s)(6144KiB/1036msec); 0 zone resets
00:09:01.911      slat (nsec): min=7145, max=58707, avg=13138.10, stdev=7056.42
00:09:01.912      clat (usec): min=150, max=788, avg=202.06, stdev=49.63
00:09:01.912       lat (usec): min=158, max=808, avg=215.20, stdev=53.20
00:09:01.912      clat percentiles (usec):
00:09:01.912       |  1.00th=[  157],  5.00th=[  163], 10.00th=[  165], 20.00th=[  172],
00:09:01.912       | 30.00th=[  176], 40.00th=[  182], 50.00th=[  188], 60.00th=[  196],
00:09:01.912       | 70.00th=[  204], 80.00th=[  223], 90.00th=[  262], 95.00th=[  285],
00:09:01.912       | 99.00th=[  392], 99.50th=[  437], 99.90th=[  775], 99.95th=[  791],
00:09:01.912       | 99.99th=[  791]
00:09:01.912     bw (  KiB/s): min= 3176, max= 9112, per=44.40%, avg=6144.00, stdev=4197.39, samples=2
00:09:01.912     iops        : min=  794, max= 2278, avg=1536.00, stdev=1049.35, samples=2
00:09:01.912    lat (usec)   : 250=85.69%, 500=13.72%, 750=0.08%, 1000=0.08%
00:09:01.912    lat (msec)   : 50=0.43%
00:09:01.912    cpu          : usr=2.22%, sys=3.86%, ctx=2572, majf=0, minf=2
00:09:01.912    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:01.912       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:01.912       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:01.912       issued rwts: total=1036,1536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:01.912       latency   : target=0, window=0, percentile=100.00%, depth=1
00:09:01.912  
00:09:01.912  Run status group 0 (all jobs):
00:09:01.912     READ: bw=9.84MiB/s (10.3MB/s), 1064KiB/s-4000KiB/s (1089kB/s-4096kB/s), io=10.2MiB (10.7MB), run=1015-1036msec
00:09:01.912    WRITE: bw=13.5MiB/s (14.2MB/s), 2010KiB/s-5931KiB/s (2058kB/s-6073kB/s), io=14.0MiB (14.7MB), run=1015-1036msec
00:09:01.912  
00:09:01.912  Disk stats (read/write):
00:09:01.912    nvme0n1: ios=1065/1024, merge=0/0, ticks=1462/193, in_queue=1655, util=97.49%
00:09:01.912    nvme0n2: ios=296/512, merge=0/0, ticks=735/82, in_queue=817, util=86.53%
00:09:01.912    nvme0n3: ios=283/512, merge=0/0, ticks=1641/107, in_queue=1748, util=97.58%
00:09:01.912    nvme0n4: ios=1031/1536, merge=0/0, ticks=470/291, in_queue=761, util=89.50%
00:09:01.912   03:59:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v
00:09:01.912  [global]
00:09:01.912  thread=1
00:09:01.912  invalidate=1
00:09:01.912  rw=randwrite
00:09:01.912  time_based=1
00:09:01.912  runtime=1
00:09:01.912  ioengine=libaio
00:09:01.912  direct=1
00:09:01.912  bs=4096
00:09:01.912  iodepth=1
00:09:01.912  norandommap=0
00:09:01.912  numjobs=1
00:09:01.912  
00:09:01.912  verify_dump=1
00:09:01.912  verify_backlog=512
00:09:01.912  verify_state_save=0
00:09:01.912  do_verify=1
00:09:01.912  verify=crc32c-intel
00:09:01.912  [job0]
00:09:01.912  filename=/dev/nvme0n1
00:09:01.912  [job1]
00:09:01.912  filename=/dev/nvme0n2
00:09:01.912  [job2]
00:09:01.912  filename=/dev/nvme0n3
00:09:01.912  [job3]
00:09:01.912  filename=/dev/nvme0n4
00:09:01.912  Could not set queue depth (nvme0n1)
00:09:01.912  Could not set queue depth (nvme0n2)
00:09:01.912  Could not set queue depth (nvme0n3)
00:09:01.912  Could not set queue depth (nvme0n4)
00:09:01.912  job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:01.912  job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:01.912  job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:01.912  job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:01.912  fio-3.35
00:09:01.912  Starting 4 threads
00:09:03.283  
00:09:03.283  job0: (groupid=0, jobs=1): err= 0: pid=158949: Mon Dec  9 03:59:31 2024
00:09:03.283    read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec)
00:09:03.283      slat (nsec): min=4600, max=51148, avg=9434.67, stdev=3719.86
00:09:03.283      clat (usec): min=174, max=42052, avg=421.31, stdev=2969.52
00:09:03.283       lat (usec): min=180, max=42065, avg=430.75, stdev=2970.62
00:09:03.283      clat percentiles (usec):
00:09:03.283       |  1.00th=[  182],  5.00th=[  186], 10.00th=[  190], 20.00th=[  194],
00:09:03.283       | 30.00th=[  198], 40.00th=[  202], 50.00th=[  206], 60.00th=[  208],
00:09:03.283       | 70.00th=[  212], 80.00th=[  217], 90.00th=[  225], 95.00th=[  233],
00:09:03.283       | 99.00th=[  297], 99.50th=[40633], 99.90th=[42206], 99.95th=[42206],
00:09:03.283       | 99.99th=[42206]
00:09:03.283    write: IOPS=1865, BW=7461KiB/s (7640kB/s)(7468KiB/1001msec); 0 zone resets
00:09:03.283      slat (nsec): min=5953, max=50753, avg=11334.37, stdev=5044.97
00:09:03.283      clat (usec): min=135, max=312, avg=164.69, stdev=21.17
00:09:03.283       lat (usec): min=141, max=323, avg=176.02, stdev=21.75
00:09:03.283      clat percentiles (usec):
00:09:03.283       |  1.00th=[  141],  5.00th=[  145], 10.00th=[  149], 20.00th=[  153],
00:09:03.283       | 30.00th=[  155], 40.00th=[  159], 50.00th=[  161], 60.00th=[  165],
00:09:03.283       | 70.00th=[  167], 80.00th=[  174], 90.00th=[  182], 95.00th=[  190],
00:09:03.283       | 99.00th=[  285], 99.50th=[  289], 99.90th=[  306], 99.95th=[  314],
00:09:03.283       | 99.99th=[  314]
00:09:03.283     bw (  KiB/s): min= 8192, max= 8192, per=42.75%, avg=8192.00, stdev= 0.00, samples=1
00:09:03.283     iops        : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1
00:09:03.283    lat (usec)   : 250=97.97%, 500=1.76%, 750=0.03%
00:09:03.283    lat (msec)   : 50=0.24%
00:09:03.283    cpu          : usr=1.80%, sys=3.80%, ctx=3404, majf=0, minf=1
00:09:03.283    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:03.283       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:03.283       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:03.283       issued rwts: total=1536,1867,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:03.283       latency   : target=0, window=0, percentile=100.00%, depth=1
00:09:03.283  job1: (groupid=0, jobs=1): err= 0: pid=158953: Mon Dec  9 03:59:31 2024
00:09:03.283    read: IOPS=1202, BW=4811KiB/s (4927kB/s)(4816KiB/1001msec)
00:09:03.283      slat (nsec): min=4733, max=63137, avg=11742.82, stdev=5613.68
00:09:03.283      clat (usec): min=196, max=41181, avg=517.88, stdev=3313.99
00:09:03.283       lat (usec): min=203, max=41197, avg=529.63, stdev=3313.98
00:09:03.283      clat percentiles (usec):
00:09:03.283       |  1.00th=[  202],  5.00th=[  212], 10.00th=[  217], 20.00th=[  223],
00:09:03.283       | 30.00th=[  227], 40.00th=[  233], 50.00th=[  239], 60.00th=[  251],
00:09:03.283       | 70.00th=[  262], 80.00th=[  269], 90.00th=[  277], 95.00th=[  289],
00:09:03.283       | 99.00th=[  523], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:09:03.283       | 99.99th=[41157]
00:09:03.283    write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets
00:09:03.283      slat (nsec): min=6110, max=69517, avg=16686.39, stdev=8551.94
00:09:03.283      clat (usec): min=150, max=415, avg=211.48, stdev=57.80
00:09:03.283       lat (usec): min=159, max=469, avg=228.16, stdev=62.52
00:09:03.283      clat percentiles (usec):
00:09:03.283       |  1.00th=[  155],  5.00th=[  161], 10.00th=[  165], 20.00th=[  169],
00:09:03.283       | 30.00th=[  178], 40.00th=[  184], 50.00th=[  190], 60.00th=[  198],
00:09:03.283       | 70.00th=[  210], 80.00th=[  239], 90.00th=[  310], 95.00th=[  355],
00:09:03.283       | 99.00th=[  388], 99.50th=[  396], 99.90th=[  412], 99.95th=[  416],
00:09:03.283       | 99.99th=[  416]
00:09:03.283     bw (  KiB/s): min= 4440, max= 4440, per=23.17%, avg=4440.00, stdev= 0.00, samples=1
00:09:03.283     iops        : min= 1110, max= 1110, avg=1110.00, stdev= 0.00, samples=1
00:09:03.283    lat (usec)   : 250=72.04%, 500=27.48%, 750=0.15%
00:09:03.283    lat (msec)   : 2=0.04%, 50=0.29%
00:09:03.283    cpu          : usr=2.50%, sys=5.20%, ctx=2740, majf=0, minf=1
00:09:03.283    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:03.283       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:03.283       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:03.283       issued rwts: total=1204,1536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:03.283       latency   : target=0, window=0, percentile=100.00%, depth=1
00:09:03.283  job2: (groupid=0, jobs=1): err= 0: pid=158954: Mon Dec  9 03:59:31 2024
00:09:03.283    read: IOPS=33, BW=134KiB/s (137kB/s)(136KiB/1013msec)
00:09:03.283      slat (nsec): min=10237, max=35794, avg=21417.71, stdev=6894.57
00:09:03.283      clat (usec): min=249, max=42025, avg=26731.12, stdev=19780.35
00:09:03.283       lat (usec): min=266, max=42043, avg=26752.54, stdev=19778.38
00:09:03.283      clat percentiles (usec):
00:09:03.283       |  1.00th=[  249],  5.00th=[  265], 10.00th=[  289], 20.00th=[  371],
00:09:03.283       | 30.00th=[  424], 40.00th=[40633], 50.00th=[40633], 60.00th=[41157],
00:09:03.283       | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681],
00:09:03.283       | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:09:03.283       | 99.99th=[42206]
00:09:03.283    write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets
00:09:03.283      slat (nsec): min=7921, max=28790, avg=9896.13, stdev=2829.22
00:09:03.283      clat (usec): min=153, max=392, avg=187.09, stdev=16.75
00:09:03.283       lat (usec): min=162, max=401, avg=196.99, stdev=17.38
00:09:03.283      clat percentiles (usec):
00:09:03.283       |  1.00th=[  161],  5.00th=[  167], 10.00th=[  172], 20.00th=[  176],
00:09:03.283       | 30.00th=[  180], 40.00th=[  184], 50.00th=[  186], 60.00th=[  188],
00:09:03.283       | 70.00th=[  192], 80.00th=[  198], 90.00th=[  206], 95.00th=[  215],
00:09:03.283       | 99.00th=[  227], 99.50th=[  235], 99.90th=[  392], 99.95th=[  392],
00:09:03.283       | 99.99th=[  392]
00:09:03.283     bw (  KiB/s): min= 4096, max= 4096, per=21.38%, avg=4096.00, stdev= 0.00, samples=1
00:09:03.283     iops        : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:09:03.283    lat (usec)   : 250=93.59%, 500=2.20%, 750=0.18%
00:09:03.283    lat (msec)   : 50=4.03%
00:09:03.283    cpu          : usr=0.59%, sys=0.40%, ctx=548, majf=0, minf=1
00:09:03.283    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:03.283       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:03.283       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:03.283       issued rwts: total=34,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:03.283       latency   : target=0, window=0, percentile=100.00%, depth=1
00:09:03.283  job3: (groupid=0, jobs=1): err= 0: pid=158955: Mon Dec  9 03:59:31 2024
00:09:03.283    read: IOPS=838, BW=3352KiB/s (3433kB/s)(3456KiB/1031msec)
00:09:03.283      slat (nsec): min=7236, max=62924, avg=13670.91, stdev=6250.63
00:09:03.283      clat (usec): min=211, max=41335, avg=925.93, stdev=5144.31
00:09:03.283       lat (usec): min=219, max=41355, avg=939.60, stdev=5144.52
00:09:03.283      clat percentiles (usec):
00:09:03.283       |  1.00th=[  225],  5.00th=[  233], 10.00th=[  237], 20.00th=[  243],
00:09:03.283       | 30.00th=[  247], 40.00th=[  253], 50.00th=[  258], 60.00th=[  265],
00:09:03.283       | 70.00th=[  269], 80.00th=[  277], 90.00th=[  293], 95.00th=[  388],
00:09:03.283       | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:09:03.283       | 99.99th=[41157]
00:09:03.283    write: IOPS=993, BW=3973KiB/s (4068kB/s)(4096KiB/1031msec); 0 zone resets
00:09:03.283      slat (nsec): min=7976, max=66119, avg=15249.46, stdev=7260.53
00:09:03.283      clat (usec): min=144, max=414, avg=190.15, stdev=25.35
00:09:03.283       lat (usec): min=155, max=440, avg=205.40, stdev=25.78
00:09:03.283      clat percentiles (usec):
00:09:03.283       |  1.00th=[  153],  5.00th=[  165], 10.00th=[  169], 20.00th=[  176],
00:09:03.283       | 30.00th=[  180], 40.00th=[  182], 50.00th=[  186], 60.00th=[  190],
00:09:03.283       | 70.00th=[  194], 80.00th=[  202], 90.00th=[  212], 95.00th=[  229],
00:09:03.283       | 99.00th=[  297], 99.50th=[  318], 99.90th=[  404], 99.95th=[  416],
00:09:03.283       | 99.99th=[  416]
00:09:03.283     bw (  KiB/s): min= 8192, max= 8192, per=42.75%, avg=8192.00, stdev= 0.00, samples=1
00:09:03.283     iops        : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1
00:09:03.283    lat (usec)   : 250=69.44%, 500=29.18%, 750=0.64%
00:09:03.283    lat (msec)   : 50=0.74%
00:09:03.283    cpu          : usr=1.84%, sys=3.50%, ctx=1889, majf=0, minf=1
00:09:03.283    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:03.283       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:03.283       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:03.283       issued rwts: total=864,1024,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:03.283       latency   : target=0, window=0, percentile=100.00%, depth=1
00:09:03.283  
00:09:03.283  Run status group 0 (all jobs):
00:09:03.283     READ: bw=13.8MiB/s (14.5MB/s), 134KiB/s-6138KiB/s (137kB/s-6285kB/s), io=14.2MiB (14.9MB), run=1001-1031msec
00:09:03.283    WRITE: bw=18.7MiB/s (19.6MB/s), 2022KiB/s-7461KiB/s (2070kB/s-7640kB/s), io=19.3MiB (20.2MB), run=1001-1031msec
00:09:03.283  
00:09:03.283  Disk stats (read/write):
00:09:03.283    nvme0n1: ios=1160/1536, merge=0/0, ticks=1108/257, in_queue=1365, util=97.39%
00:09:03.283    nvme0n2: ios=1024/1234, merge=0/0, ticks=521/261, in_queue=782, util=86.48%
00:09:03.283    nvme0n3: ios=53/512, merge=0/0, ticks=1735/90, in_queue=1825, util=97.91%
00:09:03.283    nvme0n4: ios=737/1024, merge=0/0, ticks=1536/188, in_queue=1724, util=97.68%
00:09:03.283   03:59:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v
00:09:03.283  [global]
00:09:03.283  thread=1
00:09:03.283  invalidate=1
00:09:03.283  rw=write
00:09:03.283  time_based=1
00:09:03.283  runtime=1
00:09:03.283  ioengine=libaio
00:09:03.283  direct=1
00:09:03.283  bs=4096
00:09:03.283  iodepth=128
00:09:03.283  norandommap=0
00:09:03.283  numjobs=1
00:09:03.283  
00:09:03.283  verify_dump=1
00:09:03.283  verify_backlog=512
00:09:03.283  verify_state_save=0
00:09:03.283  do_verify=1
00:09:03.283  verify=crc32c-intel
00:09:03.283  [job0]
00:09:03.283  filename=/dev/nvme0n1
00:09:03.284  [job1]
00:09:03.284  filename=/dev/nvme0n2
00:09:03.284  [job2]
00:09:03.284  filename=/dev/nvme0n3
00:09:03.284  [job3]
00:09:03.284  filename=/dev/nvme0n4
00:09:03.284  Could not set queue depth (nvme0n1)
00:09:03.284  Could not set queue depth (nvme0n2)
00:09:03.284  Could not set queue depth (nvme0n3)
00:09:03.284  Could not set queue depth (nvme0n4)
00:09:03.540  job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:09:03.540  job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:09:03.540  job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:09:03.540  job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:09:03.540  fio-3.35
00:09:03.540  Starting 4 threads
00:09:04.919  
00:09:04.919  job0: (groupid=0, jobs=1): err= 0: pid=159273: Mon Dec  9 03:59:33 2024
00:09:04.919    read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec)
00:09:04.919      slat (usec): min=2, max=16296, avg=177.44, stdev=1105.48
00:09:04.919      clat (usec): min=6439, max=55123, avg=23079.68, stdev=9680.74
00:09:04.919       lat (usec): min=6445, max=55162, avg=23257.13, stdev=9789.63
00:09:04.919      clat percentiles (usec):
00:09:04.919       |  1.00th=[ 9372],  5.00th=[11338], 10.00th=[13960], 20.00th=[16450],
00:09:04.919       | 30.00th=[17171], 40.00th=[17957], 50.00th=[19530], 60.00th=[20841],
00:09:04.919       | 70.00th=[23462], 80.00th=[32375], 90.00th=[40109], 95.00th=[43779],
00:09:04.919       | 99.00th=[47449], 99.50th=[47973], 99.90th=[52691], 99.95th=[54264],
00:09:04.919       | 99.99th=[55313]
00:09:04.919    write: IOPS=3082, BW=12.0MiB/s (12.6MB/s)(12.1MiB/1003msec); 0 zone resets
00:09:04.919      slat (usec): min=3, max=14458, avg=135.82, stdev=835.11
00:09:04.919      clat (usec): min=2751, max=48305, avg=18010.95, stdev=7180.76
00:09:04.919       lat (usec): min=5729, max=48342, avg=18146.77, stdev=7245.69
00:09:04.919      clat percentiles (usec):
00:09:04.919       |  1.00th=[ 6718],  5.00th=[ 8848], 10.00th=[10683], 20.00th=[13173],
00:09:04.919       | 30.00th=[15008], 40.00th=[15401], 50.00th=[16057], 60.00th=[17171],
00:09:04.919       | 70.00th=[19268], 80.00th=[22152], 90.00th=[28181], 95.00th=[33817],
00:09:04.919       | 99.00th=[41157], 99.50th=[41681], 99.90th=[43254], 99.95th=[45351],
00:09:04.919       | 99.99th=[48497]
00:09:04.919     bw (  KiB/s): min= 8200, max=16376, per=19.13%, avg=12288.00, stdev=5781.31, samples=2
00:09:04.919     iops        : min= 2050, max= 4094, avg=3072.00, stdev=1445.33, samples=2
00:09:04.919    lat (msec)   : 4=0.02%, 10=4.59%, 20=57.97%, 50=37.31%, 100=0.11%
00:09:04.919    cpu          : usr=3.39%, sys=6.18%, ctx=223, majf=0, minf=1
00:09:04.919    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0%
00:09:04.919       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:04.919       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:09:04.919       issued rwts: total=3072,3092,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:04.919       latency   : target=0, window=0, percentile=100.00%, depth=128
00:09:04.919  job1: (groupid=0, jobs=1): err= 0: pid=159293: Mon Dec  9 03:59:33 2024
00:09:04.919    read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec)
00:09:04.919      slat (usec): min=3, max=8718, avg=117.86, stdev=635.66
00:09:04.919      clat (usec): min=7958, max=35708, avg=15265.76, stdev=3696.31
00:09:04.919       lat (usec): min=8342, max=35713, avg=15383.62, stdev=3758.73
00:09:04.919      clat percentiles (usec):
00:09:04.919       |  1.00th=[ 8455],  5.00th=[10814], 10.00th=[11469], 20.00th=[11994],
00:09:04.919       | 30.00th=[13173], 40.00th=[14091], 50.00th=[15008], 60.00th=[15664],
00:09:04.919       | 70.00th=[16450], 80.00th=[17695], 90.00th=[19268], 95.00th=[21627],
00:09:04.919       | 99.00th=[29492], 99.50th=[31589], 99.90th=[35914], 99.95th=[35914],
00:09:04.919       | 99.99th=[35914]
00:09:04.919    write: IOPS=3685, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1007msec); 0 zone resets
00:09:04.919      slat (usec): min=4, max=8397, avg=143.87, stdev=639.46
00:09:04.919      clat (usec): min=4735, max=48787, avg=19614.08, stdev=8899.55
00:09:04.919       lat (usec): min=5455, max=48801, avg=19757.95, stdev=8961.75
00:09:04.919      clat percentiles (usec):
00:09:04.919       |  1.00th=[ 9372],  5.00th=[10945], 10.00th=[11207], 20.00th=[11863],
00:09:04.919       | 30.00th=[12387], 40.00th=[14353], 50.00th=[15270], 60.00th=[20841],
00:09:04.919       | 70.00th=[23987], 80.00th=[28181], 90.00th=[33424], 95.00th=[38011],
00:09:04.919       | 99.00th=[41157], 99.50th=[43779], 99.90th=[49021], 99.95th=[49021],
00:09:04.919       | 99.99th=[49021]
00:09:04.919     bw (  KiB/s): min=12344, max=16384, per=22.36%, avg=14364.00, stdev=2856.71, samples=2
00:09:04.919     iops        : min= 3086, max= 4096, avg=3591.00, stdev=714.18, samples=2
00:09:04.919    lat (msec)   : 10=2.34%, 20=73.06%, 50=24.59%
00:09:04.919    cpu          : usr=4.97%, sys=9.94%, ctx=389, majf=0, minf=2
00:09:04.919    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1%
00:09:04.919       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:04.919       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:09:04.919       issued rwts: total=3584,3711,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:04.919       latency   : target=0, window=0, percentile=100.00%, depth=128
00:09:04.919  job2: (groupid=0, jobs=1): err= 0: pid=159302: Mon Dec  9 03:59:33 2024
00:09:04.919    read: IOPS=4886, BW=19.1MiB/s (20.0MB/s)(19.1MiB/1003msec)
00:09:04.919      slat (usec): min=3, max=4070, avg=94.53, stdev=478.69
00:09:04.919      clat (usec): min=2243, max=19514, avg=12759.06, stdev=1568.99
00:09:04.919       lat (usec): min=2249, max=20945, avg=12853.59, stdev=1576.75
00:09:04.919      clat percentiles (usec):
00:09:04.919       |  1.00th=[ 6194],  5.00th=[10159], 10.00th=[10945], 20.00th=[12256],
00:09:04.919       | 30.00th=[12649], 40.00th=[12780], 50.00th=[12911], 60.00th=[13173],
00:09:04.919       | 70.00th=[13304], 80.00th=[13435], 90.00th=[13960], 95.00th=[14615],
00:09:04.919       | 99.00th=[17171], 99.50th=[17957], 99.90th=[19268], 99.95th=[19530],
00:09:04.919       | 99.99th=[19530]
00:09:04.919    write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets
00:09:04.919      slat (usec): min=4, max=6103, avg=92.87, stdev=463.88
00:09:04.919      clat (usec): min=8584, max=19401, avg=12526.41, stdev=1437.80
00:09:04.919       lat (usec): min=8610, max=19415, avg=12619.27, stdev=1442.06
00:09:04.919      clat percentiles (usec):
00:09:04.919       |  1.00th=[ 9372],  5.00th=[ 9896], 10.00th=[10159], 20.00th=[11863],
00:09:04.919       | 30.00th=[12125], 40.00th=[12256], 50.00th=[12649], 60.00th=[12780],
00:09:04.919       | 70.00th=[13173], 80.00th=[13304], 90.00th=[13698], 95.00th=[15139],
00:09:04.919       | 99.00th=[16581], 99.50th=[18482], 99.90th=[19268], 99.95th=[19268],
00:09:04.919       | 99.99th=[19530]
00:09:04.919     bw (  KiB/s): min=20480, max=20480, per=31.89%, avg=20480.00, stdev= 0.00, samples=2
00:09:04.919     iops        : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2
00:09:04.919    lat (msec)   : 4=0.23%, 10=5.41%, 20=94.36%
00:09:04.919    cpu          : usr=7.29%, sys=13.07%, ctx=442, majf=0, minf=1
00:09:04.919    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4%
00:09:04.919       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:04.919       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:09:04.919       issued rwts: total=4901,5120,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:04.919       latency   : target=0, window=0, percentile=100.00%, depth=128
00:09:04.919  job3: (groupid=0, jobs=1): err= 0: pid=159303: Mon Dec  9 03:59:33 2024
00:09:04.919    read: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec)
00:09:04.919      slat (usec): min=2, max=32298, avg=127.10, stdev=1078.68
00:09:04.919      clat (usec): min=4467, max=81539, avg=15366.06, stdev=9035.78
00:09:04.919       lat (usec): min=4474, max=81553, avg=15493.16, stdev=9133.92
00:09:04.919      clat percentiles (usec):
00:09:04.919       |  1.00th=[ 6849],  5.00th=[ 9634], 10.00th=[11731], 20.00th=[12125],
00:09:04.919       | 30.00th=[12256], 40.00th=[12387], 50.00th=[12518], 60.00th=[12649],
00:09:04.919       | 70.00th=[12911], 80.00th=[14877], 90.00th=[21627], 95.00th=[34341],
00:09:04.919       | 99.00th=[62129], 99.50th=[62129], 99.90th=[62129], 99.95th=[62129],
00:09:04.919       | 99.99th=[81265]
00:09:04.919    write: IOPS=4263, BW=16.7MiB/s (17.5MB/s)(16.8MiB/1011msec); 0 zone resets
00:09:04.919      slat (usec): min=3, max=26912, avg=103.67, stdev=820.98
00:09:04.919      clat (usec): min=2820, max=78395, avg=14693.62, stdev=8970.95
00:09:04.919       lat (usec): min=2829, max=78407, avg=14797.29, stdev=9061.49
00:09:04.919      clat percentiles (usec):
00:09:04.919       |  1.00th=[ 4178],  5.00th=[ 7767], 10.00th=[ 9634], 20.00th=[11469],
00:09:04.919       | 30.00th=[11731], 40.00th=[11994], 50.00th=[12518], 60.00th=[12649],
00:09:04.919       | 70.00th=[13304], 80.00th=[13435], 90.00th=[23200], 95.00th=[40109],
00:09:04.919       | 99.00th=[51119], 99.50th=[51119], 99.90th=[62129], 99.95th=[66847],
00:09:04.919       | 99.99th=[78119]
00:09:04.920     bw (  KiB/s): min=12880, max=20584, per=26.05%, avg=16732.00, stdev=5447.55, samples=2
00:09:04.920     iops        : min= 3220, max= 5146, avg=4183.00, stdev=1361.89, samples=2
00:09:04.920    lat (msec)   : 4=0.51%, 10=8.37%, 20=78.61%, 50=10.94%, 100=1.56%
00:09:04.920    cpu          : usr=3.56%, sys=6.63%, ctx=438, majf=0, minf=1
00:09:04.920    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
00:09:04.920       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:04.920       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:09:04.920       issued rwts: total=4096,4310,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:04.920       latency   : target=0, window=0, percentile=100.00%, depth=128
00:09:04.920  
00:09:04.920  Run status group 0 (all jobs):
00:09:04.920     READ: bw=60.5MiB/s (63.4MB/s), 12.0MiB/s-19.1MiB/s (12.5MB/s-20.0MB/s), io=61.1MiB (64.1MB), run=1003-1011msec
00:09:04.920    WRITE: bw=62.7MiB/s (65.8MB/s), 12.0MiB/s-19.9MiB/s (12.6MB/s-20.9MB/s), io=63.4MiB (66.5MB), run=1003-1011msec
00:09:04.920  
00:09:04.920  Disk stats (read/write):
00:09:04.920    nvme0n1: ios=2580/2791, merge=0/0, ticks=30724/25704, in_queue=56428, util=98.90%
00:09:04.920    nvme0n2: ios=3121/3207, merge=0/0, ticks=22274/28527, in_queue=50801, util=87.70%
00:09:04.920    nvme0n3: ios=4148/4344, merge=0/0, ticks=17500/16480, in_queue=33980, util=98.85%
00:09:04.920    nvme0n4: ios=3249/3584, merge=0/0, ticks=29848/28684, in_queue=58532, util=97.79%
00:09:04.920   03:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v
00:09:04.920  [global]
00:09:04.920  thread=1
00:09:04.920  invalidate=1
00:09:04.920  rw=randwrite
00:09:04.920  time_based=1
00:09:04.920  runtime=1
00:09:04.920  ioengine=libaio
00:09:04.920  direct=1
00:09:04.920  bs=4096
00:09:04.920  iodepth=128
00:09:04.920  norandommap=0
00:09:04.920  numjobs=1
00:09:04.920  
00:09:04.920  verify_dump=1
00:09:04.920  verify_backlog=512
00:09:04.920  verify_state_save=0
00:09:04.920  do_verify=1
00:09:04.920  verify=crc32c-intel
00:09:04.920  [job0]
00:09:04.920  filename=/dev/nvme0n1
00:09:04.920  [job1]
00:09:04.920  filename=/dev/nvme0n2
00:09:04.920  [job2]
00:09:04.920  filename=/dev/nvme0n3
00:09:04.920  [job3]
00:09:04.920  filename=/dev/nvme0n4
00:09:04.920  Could not set queue depth (nvme0n1)
00:09:04.920  Could not set queue depth (nvme0n2)
00:09:04.920  Could not set queue depth (nvme0n3)
00:09:04.920  Could not set queue depth (nvme0n4)
00:09:04.920  job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:09:04.920  job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:09:04.920  job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:09:04.920  job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:09:04.920  fio-3.35
00:09:04.920  Starting 4 threads
00:09:06.296  
00:09:06.296  job0: (groupid=0, jobs=1): err= 0: pid=159533: Mon Dec  9 03:59:34 2024
00:09:06.296    read: IOPS=4019, BW=15.7MiB/s (16.5MB/s)(16.0MiB/1019msec)
00:09:06.296      slat (usec): min=2, max=11600, avg=102.25, stdev=635.55
00:09:06.296      clat (usec): min=5928, max=36238, avg=12977.05, stdev=3233.51
00:09:06.296       lat (usec): min=5938, max=36245, avg=13079.30, stdev=3290.08
00:09:06.296      clat percentiles (usec):
00:09:06.296       |  1.00th=[ 7373],  5.00th=[10159], 10.00th=[11469], 20.00th=[11994],
00:09:06.296       | 30.00th=[12125], 40.00th=[12256], 50.00th=[12387], 60.00th=[12518],
00:09:06.296       | 70.00th=[12780], 80.00th=[12911], 90.00th=[14484], 95.00th=[18220],
00:09:06.296       | 99.00th=[30016], 99.50th=[32375], 99.90th=[36439], 99.95th=[36439],
00:09:06.296       | 99.99th=[36439]
00:09:06.296    write: IOPS=4410, BW=17.2MiB/s (18.1MB/s)(17.6MiB/1019msec); 0 zone resets
00:09:06.296      slat (usec): min=4, max=10327, avg=118.54, stdev=622.34
00:09:06.296      clat (usec): min=3101, max=63815, avg=16837.99, stdev=11886.55
00:09:06.296       lat (usec): min=3111, max=63821, avg=16956.52, stdev=11954.66
00:09:06.296      clat percentiles (usec):
00:09:06.296       |  1.00th=[ 4686],  5.00th=[ 8094], 10.00th=[10028], 20.00th=[10945],
00:09:06.296       | 30.00th=[11600], 40.00th=[12125], 50.00th=[12387], 60.00th=[12649],
00:09:06.296       | 70.00th=[13042], 80.00th=[22676], 90.00th=[24249], 95.00th=[45876],
00:09:06.296       | 99.00th=[63177], 99.50th=[63701], 99.90th=[63701], 99.95th=[63701],
00:09:06.296       | 99.99th=[63701]
00:09:06.296     bw (  KiB/s): min=14024, max=20904, per=25.04%, avg=17464.00, stdev=4864.89, samples=2
00:09:06.296     iops        : min= 3506, max= 5226, avg=4366.00, stdev=1216.22, samples=2
00:09:06.296    lat (msec)   : 4=0.15%, 10=7.37%, 20=77.96%, 50=11.96%, 100=2.56%
00:09:06.296    cpu          : usr=5.11%, sys=8.74%, ctx=398, majf=0, minf=1
00:09:06.296    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
00:09:06.296       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:06.296       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:09:06.296       issued rwts: total=4096,4494,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:06.296       latency   : target=0, window=0, percentile=100.00%, depth=128
00:09:06.296  job1: (groupid=0, jobs=1): err= 0: pid=159534: Mon Dec  9 03:59:34 2024
00:09:06.296    read: IOPS=3087, BW=12.1MiB/s (12.6MB/s)(12.2MiB/1015msec)
00:09:06.296      slat (usec): min=2, max=14100, avg=113.40, stdev=765.17
00:09:06.296      clat (usec): min=5987, max=33309, avg=14064.06, stdev=4265.05
00:09:06.296       lat (usec): min=6008, max=33316, avg=14177.46, stdev=4313.53
00:09:06.296      clat percentiles (usec):
00:09:06.296       |  1.00th=[ 7242],  5.00th=[ 9634], 10.00th=[11076], 20.00th=[11731],
00:09:06.296       | 30.00th=[12256], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911],
00:09:06.296       | 70.00th=[13960], 80.00th=[15664], 90.00th=[20317], 95.00th=[23462],
00:09:06.296       | 99.00th=[30540], 99.50th=[31851], 99.90th=[33424], 99.95th=[33424],
00:09:06.296       | 99.99th=[33424]
00:09:06.296    write: IOPS=3531, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1015msec); 0 zone resets
00:09:06.296      slat (usec): min=4, max=15877, avg=170.79, stdev=1016.68
00:09:06.296      clat (msec): min=3, max=117, avg=23.63, stdev=20.46
00:09:06.296       lat (msec): min=3, max=117, avg=23.80, stdev=20.60
00:09:06.296      clat percentiles (msec):
00:09:06.296       |  1.00th=[    5],  5.00th=[   10], 10.00th=[   11], 20.00th=[   12],
00:09:06.296       | 30.00th=[   13], 40.00th=[   14], 50.00th=[   18], 60.00th=[   19],
00:09:06.296       | 70.00th=[   23], 80.00th=[   25], 90.00th=[   54], 95.00th=[   62],
00:09:06.296       | 99.00th=[  113], 99.50th=[  116], 99.90th=[  118], 99.95th=[  118],
00:09:06.296       | 99.99th=[  118]
00:09:06.296     bw (  KiB/s): min=11136, max=17042, per=20.20%, avg=14089.00, stdev=4176.17, samples=2
00:09:06.296     iops        : min= 2784, max= 4260, avg=3522.00, stdev=1043.69, samples=2
00:09:06.296    lat (msec)   : 4=0.27%, 10=6.09%, 20=68.65%, 50=19.01%, 100=4.67%
00:09:06.297    lat (msec)   : 250=1.31%
00:09:06.297    cpu          : usr=3.06%, sys=7.69%, ctx=319, majf=0, minf=1
00:09:06.297    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1%
00:09:06.297       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:06.297       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:09:06.297       issued rwts: total=3134,3584,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:06.297       latency   : target=0, window=0, percentile=100.00%, depth=128
00:09:06.297  job2: (groupid=0, jobs=1): err= 0: pid=159535: Mon Dec  9 03:59:34 2024
00:09:06.297    read: IOPS=4548, BW=17.8MiB/s (18.6MB/s)(18.0MiB/1013msec)
00:09:06.297      slat (usec): min=2, max=12809, avg=116.28, stdev=810.75
00:09:06.297      clat (usec): min=4578, max=27873, avg=14378.66, stdev=3621.19
00:09:06.297       lat (usec): min=4598, max=27888, avg=14494.94, stdev=3670.55
00:09:06.297      clat percentiles (usec):
00:09:06.297       |  1.00th=[ 5997],  5.00th=[10159], 10.00th=[11731], 20.00th=[12125],
00:09:06.297       | 30.00th=[12518], 40.00th=[12780], 50.00th=[13173], 60.00th=[13960],
00:09:06.297       | 70.00th=[15008], 80.00th=[16909], 90.00th=[19530], 95.00th=[22152],
00:09:06.297       | 99.00th=[24773], 99.50th=[26346], 99.90th=[27919], 99.95th=[27919],
00:09:06.297       | 99.99th=[27919]
00:09:06.297    write: IOPS=5012, BW=19.6MiB/s (20.5MB/s)(19.8MiB/1013msec); 0 zone resets
00:09:06.297      slat (usec): min=4, max=10232, avg=81.65, stdev=395.21
00:09:06.297      clat (usec): min=1153, max=26481, avg=12241.08, stdev=2860.71
00:09:06.297       lat (usec): min=1431, max=26501, avg=12322.73, stdev=2896.40
00:09:06.297      clat percentiles (usec):
00:09:06.297       |  1.00th=[ 4015],  5.00th=[ 5735], 10.00th=[ 7963], 20.00th=[11338],
00:09:06.297       | 30.00th=[12256], 40.00th=[12649], 50.00th=[13042], 60.00th=[13173],
00:09:06.297       | 70.00th=[13304], 80.00th=[13566], 90.00th=[14484], 95.00th=[14615],
00:09:06.297       | 99.00th=[21365], 99.50th=[23462], 99.90th=[24511], 99.95th=[26346],
00:09:06.297       | 99.99th=[26608]
00:09:06.297     bw (  KiB/s): min=19136, max=20464, per=28.39%, avg=19800.00, stdev=939.04, samples=2
00:09:06.297     iops        : min= 4784, max= 5116, avg=4950.00, stdev=234.76, samples=2
00:09:06.297    lat (msec)   : 2=0.07%, 4=0.43%, 10=10.06%, 20=84.37%, 50=5.07%
00:09:06.297    cpu          : usr=6.32%, sys=9.39%, ctx=568, majf=0, minf=2
00:09:06.297    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3%
00:09:06.297       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:06.297       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:09:06.297       issued rwts: total=4608,5078,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:06.297       latency   : target=0, window=0, percentile=100.00%, depth=128
00:09:06.297  job3: (groupid=0, jobs=1): err= 0: pid=159536: Mon Dec  9 03:59:34 2024
00:09:06.297    read: IOPS=4323, BW=16.9MiB/s (17.7MB/s)(17.0MiB/1006msec)
00:09:06.297      slat (usec): min=2, max=13423, avg=107.07, stdev=696.19
00:09:06.297      clat (usec): min=1909, max=27081, avg=13793.14, stdev=2453.33
00:09:06.297       lat (usec): min=5545, max=27093, avg=13900.21, stdev=2488.90
00:09:06.297      clat percentiles (usec):
00:09:06.297       |  1.00th=[ 6849],  5.00th=[11076], 10.00th=[11469], 20.00th=[12387],
00:09:06.297       | 30.00th=[12911], 40.00th=[13304], 50.00th=[13698], 60.00th=[13960],
00:09:06.297       | 70.00th=[14222], 80.00th=[14746], 90.00th=[16188], 95.00th=[18482],
00:09:06.297       | 99.00th=[23462], 99.50th=[25297], 99.90th=[26870], 99.95th=[26870],
00:09:06.297       | 99.99th=[27132]
00:09:06.297    write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets
00:09:06.297      slat (usec): min=3, max=29394, avg=107.39, stdev=842.95
00:09:06.297      clat (usec): min=1842, max=64910, avg=14577.40, stdev=6905.82
00:09:06.297       lat (usec): min=1871, max=64931, avg=14684.79, stdev=6961.98
00:09:06.297      clat percentiles (usec):
00:09:06.297       |  1.00th=[ 6063],  5.00th=[ 8586], 10.00th=[10159], 20.00th=[11469],
00:09:06.297       | 30.00th=[11994], 40.00th=[12649], 50.00th=[13173], 60.00th=[13698],
00:09:06.297       | 70.00th=[14091], 80.00th=[14353], 90.00th=[18220], 95.00th=[35390],
00:09:06.297       | 99.00th=[47449], 99.50th=[47449], 99.90th=[47449], 99.95th=[52167],
00:09:06.297       | 99.99th=[64750]
00:09:06.297     bw (  KiB/s): min=16384, max=20480, per=26.43%, avg=18432.00, stdev=2896.31, samples=2
00:09:06.297     iops        : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2
00:09:06.297    lat (msec)   : 2=0.07%, 10=5.27%, 20=88.94%, 50=5.69%, 100=0.03%
00:09:06.297    cpu          : usr=4.88%, sys=5.87%, ctx=388, majf=0, minf=1
00:09:06.297    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
00:09:06.297       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:06.297       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:09:06.297       issued rwts: total=4349,4608,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:06.297       latency   : target=0, window=0, percentile=100.00%, depth=128
00:09:06.297  
00:09:06.297  Run status group 0 (all jobs):
00:09:06.297     READ: bw=62.1MiB/s (65.1MB/s), 12.1MiB/s-17.8MiB/s (12.6MB/s-18.6MB/s), io=63.2MiB (66.3MB), run=1006-1019msec
00:09:06.297    WRITE: bw=68.1MiB/s (71.4MB/s), 13.8MiB/s-19.6MiB/s (14.5MB/s-20.5MB/s), io=69.4MiB (72.8MB), run=1006-1019msec
00:09:06.297  
00:09:06.297  Disk stats (read/write):
00:09:06.297    nvme0n1: ios=3634/3935, merge=0/0, ticks=27722/36413, in_queue=64135, util=86.47%
00:09:06.297    nvme0n2: ios=2728/3072, merge=0/0, ticks=36643/67031, in_queue=103674, util=100.00%
00:09:06.297    nvme0n3: ios=3891/4096, merge=0/0, ticks=54028/48621, in_queue=102649, util=98.12%
00:09:06.297    nvme0n4: ios=3635/3775, merge=0/0, ticks=31004/31472, in_queue=62476, util=97.89%
00:09:06.297   03:59:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync
00:09:06.297   03:59:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=159674
00:09:06.297   03:59:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10
00:09:06.297   03:59:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3
00:09:06.297  [global]
00:09:06.297  thread=1
00:09:06.297  invalidate=1
00:09:06.297  rw=read
00:09:06.297  time_based=1
00:09:06.297  runtime=10
00:09:06.297  ioengine=libaio
00:09:06.297  direct=1
00:09:06.297  bs=4096
00:09:06.297  iodepth=1
00:09:06.297  norandommap=1
00:09:06.297  numjobs=1
00:09:06.297  
00:09:06.297  [job0]
00:09:06.297  filename=/dev/nvme0n1
00:09:06.297  [job1]
00:09:06.297  filename=/dev/nvme0n2
00:09:06.297  [job2]
00:09:06.297  filename=/dev/nvme0n3
00:09:06.297  [job3]
00:09:06.297  filename=/dev/nvme0n4
00:09:06.297  Could not set queue depth (nvme0n1)
00:09:06.297  Could not set queue depth (nvme0n2)
00:09:06.297  Could not set queue depth (nvme0n3)
00:09:06.297  Could not set queue depth (nvme0n4)
00:09:06.297  job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:06.297  job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:06.297  job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:06.297  job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:06.297  fio-3.35
00:09:06.297  Starting 4 threads
00:09:09.577   03:59:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0
00:09:09.577   03:59:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0
00:09:09.577  fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=303104, buflen=4096
00:09:09.577  fio: pid=159770, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:09:09.835   03:59:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:09:09.835   03:59:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:09:09.835  fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=4165632, buflen=4096
00:09:09.835  fio: pid=159769, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:09:10.095  fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=29044736, buflen=4096
00:09:10.095  fio: pid=159767, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:09:10.095   03:59:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:09:10.095   03:59:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:09:10.354   03:59:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:09:10.354   03:59:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2
00:09:10.354  fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=10534912, buflen=4096
00:09:10.354  fio: pid=159768, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error
00:09:10.354  
00:09:10.354  job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=159767: Mon Dec  9 03:59:38 2024
00:09:10.354    read: IOPS=2022, BW=8090KiB/s (8284kB/s)(27.7MiB/3506msec)
00:09:10.354      slat (usec): min=4, max=10947, avg=15.06, stdev=210.20
00:09:10.354      clat (usec): min=171, max=41992, avg=473.32, stdev=3013.47
00:09:10.354       lat (usec): min=177, max=51812, avg=488.38, stdev=3041.51
00:09:10.354      clat percentiles (usec):
00:09:10.354       |  1.00th=[  188],  5.00th=[  196], 10.00th=[  200], 20.00th=[  208],
00:09:10.354       | 30.00th=[  217], 40.00th=[  227], 50.00th=[  235], 60.00th=[  243],
00:09:10.354       | 70.00th=[  251], 80.00th=[  265], 90.00th=[  318], 95.00th=[  396],
00:09:10.354       | 99.00th=[  529], 99.50th=[40633], 99.90th=[41157], 99.95th=[41681],
00:09:10.354       | 99.99th=[42206]
00:09:10.354     bw (  KiB/s): min=  160, max=15768, per=79.92%, avg=9009.33, stdev=7127.75, samples=6
00:09:10.354     iops        : min=   40, max= 3942, avg=2252.33, stdev=1781.94, samples=6
00:09:10.354    lat (usec)   : 250=68.92%, 500=29.10%, 750=1.35%, 1000=0.01%
00:09:10.354    lat (msec)   : 2=0.04%, 50=0.55%
00:09:10.354    cpu          : usr=1.54%, sys=3.00%, ctx=7100, majf=0, minf=1
00:09:10.354    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:10.354       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:10.354       complete  : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:10.354       issued rwts: total=7092,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:10.354       latency   : target=0, window=0, percentile=100.00%, depth=1
00:09:10.354  job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=159768: Mon Dec  9 03:59:38 2024
00:09:10.354    read: IOPS=674, BW=2696KiB/s (2761kB/s)(10.0MiB/3816msec)
00:09:10.354      slat (usec): min=3, max=8599, avg=15.98, stdev=263.38
00:09:10.354      clat (usec): min=157, max=42028, avg=1466.13, stdev=7030.01
00:09:10.354       lat (usec): min=162, max=42043, avg=1479.50, stdev=7035.30
00:09:10.354      clat percentiles (usec):
00:09:10.354       |  1.00th=[  167],  5.00th=[  178], 10.00th=[  184], 20.00th=[  190],
00:09:10.354       | 30.00th=[  194], 40.00th=[  198], 50.00th=[  202], 60.00th=[  208],
00:09:10.354       | 70.00th=[  221], 80.00th=[  243], 90.00th=[  269], 95.00th=[  375],
00:09:10.354       | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206],
00:09:10.354       | 99.99th=[42206]
00:09:10.354     bw (  KiB/s): min=   96, max=10824, per=24.15%, avg=2722.29, stdev=4569.83, samples=7
00:09:10.354     iops        : min=   24, max= 2706, avg=680.57, stdev=1142.46, samples=7
00:09:10.354    lat (usec)   : 250=83.56%, 500=12.98%, 750=0.31%
00:09:10.354    lat (msec)   : 2=0.04%, 50=3.07%
00:09:10.354    cpu          : usr=0.13%, sys=0.76%, ctx=2578, majf=0, minf=1
00:09:10.354    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:10.354       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:10.354       complete  : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:10.354       issued rwts: total=2573,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:10.354       latency   : target=0, window=0, percentile=100.00%, depth=1
00:09:10.354  job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=159769: Mon Dec  9 03:59:38 2024
00:09:10.354    read: IOPS=312, BW=1250KiB/s (1280kB/s)(4068KiB/3254msec)
00:09:10.354      slat (usec): min=5, max=8918, avg=22.77, stdev=279.13
00:09:10.354      clat (usec): min=192, max=41993, avg=3163.96, stdev=10388.15
00:09:10.354       lat (usec): min=198, max=42028, avg=3186.73, stdev=10391.91
00:09:10.354      clat percentiles (usec):
00:09:10.354       |  1.00th=[  202],  5.00th=[  208], 10.00th=[  215], 20.00th=[  231],
00:09:10.354       | 30.00th=[  273], 40.00th=[  297], 50.00th=[  318], 60.00th=[  330],
00:09:10.354       | 70.00th=[  347], 80.00th=[  453], 90.00th=[  510], 95.00th=[41157],
00:09:10.354       | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:09:10.354       | 99.99th=[42206]
00:09:10.354     bw (  KiB/s): min=   96, max= 7544, per=11.93%, avg=1345.33, stdev=3036.72, samples=6
00:09:10.354     iops        : min=   24, max= 1886, avg=336.33, stdev=759.18, samples=6
00:09:10.354    lat (usec)   : 250=25.93%, 500=61.69%, 750=5.30%
00:09:10.354    lat (msec)   : 50=6.97%
00:09:10.354    cpu          : usr=0.15%, sys=0.52%, ctx=1020, majf=0, minf=1
00:09:10.354    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:10.354       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:10.354       complete  : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:10.354       issued rwts: total=1018,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:10.354       latency   : target=0, window=0, percentile=100.00%, depth=1
00:09:10.354  job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=159770: Mon Dec  9 03:59:38 2024
00:09:10.354    read: IOPS=25, BW=100KiB/s (103kB/s)(296KiB/2946msec)
00:09:10.354      slat (nsec): min=12911, max=45911, avg=22389.53, stdev=8803.28
00:09:10.354      clat (usec): min=311, max=42049, avg=39460.74, stdev=8101.56
00:09:10.354       lat (usec): min=324, max=42064, avg=39483.00, stdev=8101.40
00:09:10.354      clat percentiles (usec):
00:09:10.354       |  1.00th=[  310],  5.00th=[40633], 10.00th=[40633], 20.00th=[41157],
00:09:10.355       | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:09:10.355       | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206],
00:09:10.355       | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:09:10.355       | 99.99th=[42206]
00:09:10.355     bw (  KiB/s): min=   96, max=  112, per=0.89%, avg=100.80, stdev= 7.16, samples=5
00:09:10.355     iops        : min=   24, max=   28, avg=25.20, stdev= 1.79, samples=5
00:09:10.355    lat (usec)   : 500=4.00%
00:09:10.355    lat (msec)   : 50=94.67%
00:09:10.355    cpu          : usr=0.00%, sys=0.07%, ctx=75, majf=0, minf=2
00:09:10.355    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:10.355       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:10.355       complete  : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:10.355       issued rwts: total=75,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:10.355       latency   : target=0, window=0, percentile=100.00%, depth=1
00:09:10.355  
00:09:10.355  Run status group 0 (all jobs):
00:09:10.355     READ: bw=11.0MiB/s (11.5MB/s), 100KiB/s-8090KiB/s (103kB/s-8284kB/s), io=42.0MiB (44.0MB), run=2946-3816msec
00:09:10.355  
00:09:10.355  Disk stats (read/write):
00:09:10.355    nvme0n1: ios=6891/0, merge=0/0, ticks=4357/0, in_queue=4357, util=98.66%
00:09:10.355    nvme0n2: ios=2566/0, merge=0/0, ticks=3516/0, in_queue=3516, util=96.06%
00:09:10.355    nvme0n3: ios=1067/0, merge=0/0, ticks=3483/0, in_queue=3483, util=99.13%
00:09:10.355    nvme0n4: ios=72/0, merge=0/0, ticks=2840/0, in_queue=2840, util=96.71%
00:09:10.613   03:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:09:10.613   03:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3
00:09:10.872   03:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:09:10.872   03:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4
00:09:11.131   03:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:09:11.131   03:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5
00:09:11.389   03:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:09:11.389   03:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6
00:09:11.646   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0
00:09:11.646   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 159674
00:09:11.646   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4
00:09:11.646   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:09:11.904  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:11.904   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:09:11.904   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0
00:09:11.904   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:09:11.904   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:09:11.904   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:09:11.904   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:09:11.904   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0
00:09:11.904   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']'
00:09:11.904   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected'
00:09:11.904  nvmf hotplug test: fio failed as expected
00:09:11.904   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:09:12.161   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state
00:09:12.161   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state
00:09:12.161   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state
00:09:12.161   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT
00:09:12.161   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini
00:09:12.161   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:12.161   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync
00:09:12.161   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:12.161   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e
00:09:12.162   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:12.162   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:12.162  rmmod nvme_tcp
00:09:12.162  rmmod nvme_fabrics
00:09:12.162  rmmod nvme_keyring
00:09:12.162   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:12.162   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e
00:09:12.162   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0
00:09:12.162   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 157643 ']'
00:09:12.162   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 157643
00:09:12.162   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 157643 ']'
00:09:12.162   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 157643
00:09:12.162    03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname
00:09:12.162   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:12.162    03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 157643
00:09:12.162   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:12.162   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:12.162   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 157643'
00:09:12.162  killing process with pid 157643
00:09:12.162   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 157643
00:09:12.162   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 157643
00:09:12.422   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:12.422   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:09:12.422   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:09:12.422   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr
00:09:12.422   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save
00:09:12.422   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:09:12.422   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore
00:09:12.422   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:09:12.422   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:09:12.422   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:12.422   03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:12.422    03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:14.958   03:59:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:09:14.958  
00:09:14.958  real	0m24.427s
00:09:14.958  user	1m25.560s
00:09:14.958  sys	0m6.609s
00:09:14.958   03:59:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:14.958   03:59:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:09:14.958  ************************************
00:09:14.958  END TEST nvmf_fio_target
00:09:14.958  ************************************
00:09:14.958   03:59:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:09:14.958   03:59:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:14.958   03:59:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:14.958   03:59:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:14.958  ************************************
00:09:14.958  START TEST nvmf_bdevio
00:09:14.958  ************************************
00:09:14.958   03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:09:14.958  * Looking for test storage...
00:09:14.958  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:09:14.958    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:14.958     03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version
00:09:14.958     03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:14.958    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:14.958    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:14.958    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:14.958    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:14.958    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-:
00:09:14.958    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1
00:09:14.958    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-:
00:09:14.958    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2
00:09:14.958    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<'
00:09:14.958    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:14.959     03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1
00:09:14.959     03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1
00:09:14.959     03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:14.959     03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1
00:09:14.959     03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2
00:09:14.959     03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2
00:09:14.959     03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:14.959     03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:14.959  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:14.959  		--rc genhtml_branch_coverage=1
00:09:14.959  		--rc genhtml_function_coverage=1
00:09:14.959  		--rc genhtml_legend=1
00:09:14.959  		--rc geninfo_all_blocks=1
00:09:14.959  		--rc geninfo_unexecuted_blocks=1
00:09:14.959  		
00:09:14.959  		'
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:14.959  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:14.959  		--rc genhtml_branch_coverage=1
00:09:14.959  		--rc genhtml_function_coverage=1
00:09:14.959  		--rc genhtml_legend=1
00:09:14.959  		--rc geninfo_all_blocks=1
00:09:14.959  		--rc geninfo_unexecuted_blocks=1
00:09:14.959  		
00:09:14.959  		'
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:14.959  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:14.959  		--rc genhtml_branch_coverage=1
00:09:14.959  		--rc genhtml_function_coverage=1
00:09:14.959  		--rc genhtml_legend=1
00:09:14.959  		--rc geninfo_all_blocks=1
00:09:14.959  		--rc geninfo_unexecuted_blocks=1
00:09:14.959  		
00:09:14.959  		'
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:14.959  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:14.959  		--rc genhtml_branch_coverage=1
00:09:14.959  		--rc genhtml_function_coverage=1
00:09:14.959  		--rc genhtml_legend=1
00:09:14.959  		--rc geninfo_all_blocks=1
00:09:14.959  		--rc geninfo_unexecuted_blocks=1
00:09:14.959  		
00:09:14.959  		'
00:09:14.959   03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:09:14.959     03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:14.959     03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:09:14.959     03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob
00:09:14.959     03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:14.959     03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:14.959     03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:14.959      03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:14.959      03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:14.959      03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:14.959      03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH
00:09:14.959      03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:09:14.959  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0
00:09:14.959   03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:09:14.959   03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:09:14.959   03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit
00:09:14.959   03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:09:14.959   03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:09:14.959   03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs
00:09:14.959   03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no
00:09:14.959   03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns
00:09:14.959   03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:14.959   03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:14.959    03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:14.959   03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:09:14.959   03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:09:14.959   03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable
00:09:14.960   03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=()
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=()
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=()
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=()
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=()
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=()
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=()
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:09:16.870  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:09:16.870  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]]
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:16.870   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:09:16.871  Found net devices under 0000:0a:00.0: cvl_0_0
00:09:16.871   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:09:16.871   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:09:16.871   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:09:16.871   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:09:16.871   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:09:16.871   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]]
00:09:16.871   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:09:16.871   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:16.871   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:09:16.871  Found net devices under 0000:0a:00.1: cvl_0_1
00:09:16.871   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:09:16.871   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:09:16.871   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes
00:09:16.871   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:09:16.871   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:09:16.871   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:09:16.871   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:09:16.871   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:09:16.871   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:09:16.871   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:09:16.871   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:09:16.871   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:09:16.871   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:09:16.871   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:09:16.871   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:09:16.871   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:09:16.871   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:09:16.871   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:09:16.871   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:09:16.871   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:09:16.871   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:09:16.871   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:09:16.871   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:09:16.871   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:09:17.129   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:09:17.129   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:09:17.129   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:17.129   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:09:17.129   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:09:17.129  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:17.129  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms
00:09:17.129  
00:09:17.129  --- 10.0.0.2 ping statistics ---
00:09:17.129  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:17.129  rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms
00:09:17.129   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:17.129  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:17.129  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms
00:09:17.129  
00:09:17.129  --- 10.0.0.1 ping statistics ---
00:09:17.129  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:17.129  rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms
00:09:17.129   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:17.129   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0
00:09:17.129   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:09:17.129   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:17.129   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:09:17.129   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:09:17.129   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:17.129   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:09:17.129   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:09:17.129   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:09:17.129   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:09:17.129   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:17.129   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:09:17.129   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=162511
00:09:17.129   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78
00:09:17.129   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 162511
00:09:17.129   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 162511 ']'
00:09:17.130   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:17.130   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:17.130   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:17.130  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:17.130   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:17.130   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:09:17.130  [2024-12-09 03:59:45.575714] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:09:17.130  [2024-12-09 03:59:45.575822] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:17.130  [2024-12-09 03:59:45.652031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:17.388  [2024-12-09 03:59:45.716162] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:17.388  [2024-12-09 03:59:45.716213] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:17.388  [2024-12-09 03:59:45.716242] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:17.388  [2024-12-09 03:59:45.716254] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:17.388  [2024-12-09 03:59:45.716264] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:09:17.388  [2024-12-09 03:59:45.718017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:09:17.388  [2024-12-09 03:59:45.718081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:09:17.388  [2024-12-09 03:59:45.718133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:09:17.388  [2024-12-09 03:59:45.718136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:09:17.388   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:17.388   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0
00:09:17.388   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:09:17.388   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:17.388   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:09:17.388   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:09:17.388   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:09:17.388   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:17.388   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:09:17.388  [2024-12-09 03:59:45.869412] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:17.388   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:17.388   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:09:17.388   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:17.388   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:09:17.388  Malloc0
00:09:17.388   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:17.388   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:09:17.388   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:17.388   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:09:17.388   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:17.388   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:09:17.388   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:17.388   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:09:17.388   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:17.388   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:09:17.388   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:17.388   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:09:17.388  [2024-12-09 03:59:45.929694] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:09:17.388   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
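Stripped of the xtrace prefixes, target/bdevio.sh@18-22 provisions the target with five RPCs in a fixed order: transport, backing bdev, subsystem, namespace, listener. A sketch that echoes the sequence rather than sending it (in the real run each line goes through rpc_cmd to scripts/rpc.py on /var/tmp/spdk.sock; sizes and NQNs are the values from this log):

```shell
#!/bin/sh
# The five provisioning RPCs from target/bdevio.sh, in order, echoed rather
# than sent. In the real run each is dispatched via rpc_cmd -> scripts/rpc.py.
provision_rpcs() {
  echo "nvmf_create_transport -t tcp -o -u 8192"
  echo "bdev_malloc_create 64 512 -b Malloc0"   # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
  echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001"
  echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0"
  echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420"
}

provision_rpcs
```

The ordering matters: the transport must exist before a listener can bind to it, and the Malloc0 bdev must exist before it can be attached as a namespace, which is why the "TCP Transport Init" and "Listening on 10.0.0.2 port 4420" notices bracket the sequence in the log.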
00:09:17.388   03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62
00:09:17.388    03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json
00:09:17.388    03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=()
00:09:17.388    03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config
00:09:17.388    03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:09:17.388    03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:09:17.388  {
00:09:17.388    "params": {
00:09:17.388      "name": "Nvme$subsystem",
00:09:17.388      "trtype": "$TEST_TRANSPORT",
00:09:17.388      "traddr": "$NVMF_FIRST_TARGET_IP",
00:09:17.388      "adrfam": "ipv4",
00:09:17.388      "trsvcid": "$NVMF_PORT",
00:09:17.388      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:09:17.388      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:09:17.388      "hdgst": ${hdgst:-false},
00:09:17.388      "ddgst": ${ddgst:-false}
00:09:17.388    },
00:09:17.388    "method": "bdev_nvme_attach_controller"
00:09:17.388  }
00:09:17.388  EOF
00:09:17.388  )")
00:09:17.388     03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat
00:09:17.388    03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq .
00:09:17.388     03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=,
00:09:17.388     03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:09:17.388    "params": {
00:09:17.388      "name": "Nvme1",
00:09:17.388      "trtype": "tcp",
00:09:17.388      "traddr": "10.0.0.2",
00:09:17.388      "adrfam": "ipv4",
00:09:17.388      "trsvcid": "4420",
00:09:17.388      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:09:17.388      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:09:17.388      "hdgst": false,
00:09:17.388      "ddgst": false
00:09:17.388    },
00:09:17.388    "method": "bdev_nvme_attach_controller"
00:09:17.388  }'
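The template and the rendered JSON above show both halves of gen_nvmf_target_json (nvmf/common.sh@560-586): one heredoc is expanded per subsystem id, substituting $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT, and the result is normalized with `jq .` before being fed to bdevio over /dev/fd/62. A reduced sketch of the same pattern, with the values hard-coded from this log and the jq step omitted so the raw heredoc output is what you see:

```shell
#!/bin/sh
# Reduced sketch of gen_nvmf_target_json: one heredoc per subsystem id.
# Values are hard-coded from this log run; the real helper reads them from
# the environment and pipes the combined config through `jq .`.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

gen_subsystem_json() {
  subsystem=$1
  cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_subsystem_json 1
```

With hdgst/ddgst unset, the `${hdgst:-false}` expansions default both digest options to false, matching the rendered config in the log.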
00:09:17.646  [2024-12-09 03:59:45.980028] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:09:17.646  [2024-12-09 03:59:45.980093] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid162553 ]
00:09:17.646  [2024-12-09 03:59:46.048946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:17.646  [2024-12-09 03:59:46.113307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:17.646  [2024-12-09 03:59:46.113362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:17.646  [2024-12-09 03:59:46.113366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:17.904  I/O targets:
00:09:17.904    Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:09:17.904  
00:09:17.904  
00:09:17.904       CUnit - A unit testing framework for C - Version 2.1-3
00:09:17.904       http://cunit.sourceforge.net/
00:09:17.904  
00:09:17.904  
00:09:17.904  Suite: bdevio tests on: Nvme1n1
00:09:17.904    Test: blockdev write read block ...passed
00:09:18.162    Test: blockdev write zeroes read block ...passed
00:09:18.162    Test: blockdev write zeroes read no split ...passed
00:09:18.162    Test: blockdev write zeroes read split ...passed
00:09:18.163    Test: blockdev write zeroes read split partial ...passed
00:09:18.163    Test: blockdev reset ...[2024-12-09 03:59:46.533347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:09:18.163  [2024-12-09 03:59:46.533466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f58c0 (9): Bad file descriptor
00:09:18.163  [2024-12-09 03:59:46.551533] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful.
00:09:18.163  passed
00:09:18.163    Test: blockdev write read 8 blocks ...passed
00:09:18.163    Test: blockdev write read size > 128k ...passed
00:09:18.163    Test: blockdev write read invalid size ...passed
00:09:18.163    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:18.163    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:18.163    Test: blockdev write read max offset ...passed
00:09:18.163    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:18.163    Test: blockdev writev readv 8 blocks ...passed
00:09:18.163    Test: blockdev writev readv 30 x 1block ...passed
00:09:18.421    Test: blockdev writev readv block ...passed
00:09:18.421    Test: blockdev writev readv size > 128k ...passed
00:09:18.421    Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:18.421    Test: blockdev comparev and writev ...[2024-12-09 03:59:46.763890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:09:18.421  [2024-12-09 03:59:46.763925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:09:18.421  [2024-12-09 03:59:46.763949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:09:18.422  [2024-12-09 03:59:46.763966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:09:18.422  [2024-12-09 03:59:46.764289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:09:18.422  [2024-12-09 03:59:46.764314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:09:18.422  [2024-12-09 03:59:46.764335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:09:18.422  [2024-12-09 03:59:46.764351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:09:18.422  [2024-12-09 03:59:46.764647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:09:18.422  [2024-12-09 03:59:46.764671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:09:18.422  [2024-12-09 03:59:46.764707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:09:18.422  [2024-12-09 03:59:46.764724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:09:18.422  [2024-12-09 03:59:46.765031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:09:18.422  [2024-12-09 03:59:46.765054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:09:18.422  [2024-12-09 03:59:46.765075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:09:18.422  [2024-12-09 03:59:46.765091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:09:18.422  passed
00:09:18.422    Test: blockdev nvme passthru rw ...passed
00:09:18.422    Test: blockdev nvme passthru vendor specific ...[2024-12-09 03:59:46.847513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:09:18.422  [2024-12-09 03:59:46.847539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:09:18.422  [2024-12-09 03:59:46.847676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:09:18.422  [2024-12-09 03:59:46.847699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:09:18.422  [2024-12-09 03:59:46.847828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:09:18.422  [2024-12-09 03:59:46.847850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:09:18.422  [2024-12-09 03:59:46.847989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:09:18.422  [2024-12-09 03:59:46.848011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:09:18.422  passed
00:09:18.422    Test: blockdev nvme admin passthru ...passed
00:09:18.422    Test: blockdev copy ...passed
00:09:18.422  
00:09:18.422  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:09:18.422                suites      1      1    n/a      0        0
00:09:18.422                 tests     23     23     23      0        0
00:09:18.422               asserts    152    152    152      0      n/a
00:09:18.422  
00:09:18.422  Elapsed time =    0.964 seconds
00:09:18.681   03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:09:18.681   03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:18.681   03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:09:18.681   03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:18.681   03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:09:18.681   03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini
00:09:18.681   03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:18.681   03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync
00:09:18.681   03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:18.681   03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e
00:09:18.681   03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:18.681   03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:18.681  rmmod nvme_tcp
00:09:18.681  rmmod nvme_fabrics
00:09:18.681  rmmod nvme_keyring
00:09:18.681   03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:18.681   03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e
00:09:18.681   03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0
00:09:18.681   03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 162511 ']'
00:09:18.681   03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 162511
00:09:18.681   03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 162511 ']'
00:09:18.681   03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 162511
00:09:18.681    03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname
00:09:18.681   03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:18.681    03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 162511
00:09:18.681   03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3
00:09:18.681   03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']'
00:09:18.681   03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 162511'
00:09:18.681  killing process with pid 162511
00:09:18.681   03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 162511
00:09:18.681   03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 162511
00:09:18.941   03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:18.941   03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:09:18.941   03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:09:18.941   03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr
00:09:18.941   03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save
00:09:18.941   03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:09:18.941   03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore
00:09:18.941   03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:09:18.941   03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns
00:09:18.941   03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:18.941   03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:18.941    03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:21.485   03:59:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:09:21.485  
00:09:21.485  real	0m6.471s
00:09:21.485  user	0m9.965s
00:09:21.485  sys	0m2.180s
00:09:21.485   03:59:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:21.485   03:59:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:09:21.485  ************************************
00:09:21.485  END TEST nvmf_bdevio
00:09:21.485  ************************************
00:09:21.485   03:59:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:09:21.485  
00:09:21.485  real	3m57.375s
00:09:21.486  user	10m21.336s
00:09:21.486  sys	1m6.085s
00:09:21.486   03:59:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:21.486   03:59:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:21.486  ************************************
00:09:21.486  END TEST nvmf_target_core
00:09:21.486  ************************************
00:09:21.486   03:59:49 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp
00:09:21.486   03:59:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:21.486   03:59:49 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:21.486   03:59:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:09:21.486  ************************************
00:09:21.486  START TEST nvmf_target_extra
00:09:21.486  ************************************
00:09:21.486   03:59:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp
00:09:21.486  * Looking for test storage...
00:09:21.486  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:21.486     03:59:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version
00:09:21.486     03:59:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-:
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-:
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<'
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:21.486     03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1
00:09:21.486     03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1
00:09:21.486     03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:21.486     03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1
00:09:21.486     03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2
00:09:21.486     03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2
00:09:21.486     03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:21.486     03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:21.486  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:21.486  		--rc genhtml_branch_coverage=1
00:09:21.486  		--rc genhtml_function_coverage=1
00:09:21.486  		--rc genhtml_legend=1
00:09:21.486  		--rc geninfo_all_blocks=1
00:09:21.486  		--rc geninfo_unexecuted_blocks=1
00:09:21.486  		
00:09:21.486  		'
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:21.486  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:21.486  		--rc genhtml_branch_coverage=1
00:09:21.486  		--rc genhtml_function_coverage=1
00:09:21.486  		--rc genhtml_legend=1
00:09:21.486  		--rc geninfo_all_blocks=1
00:09:21.486  		--rc geninfo_unexecuted_blocks=1
00:09:21.486  		
00:09:21.486  		'
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:21.486  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:21.486  		--rc genhtml_branch_coverage=1
00:09:21.486  		--rc genhtml_function_coverage=1
00:09:21.486  		--rc genhtml_legend=1
00:09:21.486  		--rc geninfo_all_blocks=1
00:09:21.486  		--rc geninfo_unexecuted_blocks=1
00:09:21.486  		
00:09:21.486  		'
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:21.486  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:21.486  		--rc genhtml_branch_coverage=1
00:09:21.486  		--rc genhtml_function_coverage=1
00:09:21.486  		--rc genhtml_legend=1
00:09:21.486  		--rc geninfo_all_blocks=1
00:09:21.486  		--rc geninfo_unexecuted_blocks=1
00:09:21.486  		
00:09:21.486  		'
00:09:21.486   03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:09:21.486     03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:21.486     03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:09:21.486     03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob
00:09:21.486     03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:21.486     03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:21.486     03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:21.486      03:59:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:21.486      03:59:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:21.486      03:59:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:21.486      03:59:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH
00:09:21.486      03:59:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:09:21.486  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:09:21.486    03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0
00:09:21.486   03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:09:21.486   03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@")
00:09:21.487   03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]]
00:09:21.487   03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp
00:09:21.487   03:59:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:21.487   03:59:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:21.487   03:59:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:09:21.487  ************************************
00:09:21.487  START TEST nvmf_example
00:09:21.487  ************************************
00:09:21.487   03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp
00:09:21.487  * Looking for test storage...
00:09:21.487  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:21.487     03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version
00:09:21.487     03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-:
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-:
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<'
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:21.487     03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1
00:09:21.487     03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1
00:09:21.487     03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:21.487     03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1
00:09:21.487     03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2
00:09:21.487     03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2
00:09:21.487     03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:21.487     03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:21.487  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:21.487  		--rc genhtml_branch_coverage=1
00:09:21.487  		--rc genhtml_function_coverage=1
00:09:21.487  		--rc genhtml_legend=1
00:09:21.487  		--rc geninfo_all_blocks=1
00:09:21.487  		--rc geninfo_unexecuted_blocks=1
00:09:21.487  		
00:09:21.487  		'
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:21.487  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:21.487  		--rc genhtml_branch_coverage=1
00:09:21.487  		--rc genhtml_function_coverage=1
00:09:21.487  		--rc genhtml_legend=1
00:09:21.487  		--rc geninfo_all_blocks=1
00:09:21.487  		--rc geninfo_unexecuted_blocks=1
00:09:21.487  		
00:09:21.487  		'
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:21.487  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:21.487  		--rc genhtml_branch_coverage=1
00:09:21.487  		--rc genhtml_function_coverage=1
00:09:21.487  		--rc genhtml_legend=1
00:09:21.487  		--rc geninfo_all_blocks=1
00:09:21.487  		--rc geninfo_unexecuted_blocks=1
00:09:21.487  		
00:09:21.487  		'
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:21.487  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:21.487  		--rc genhtml_branch_coverage=1
00:09:21.487  		--rc genhtml_function_coverage=1
00:09:21.487  		--rc genhtml_legend=1
00:09:21.487  		--rc geninfo_all_blocks=1
00:09:21.487  		--rc geninfo_unexecuted_blocks=1
00:09:21.487  		
00:09:21.487  		'
00:09:21.487   03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:09:21.487     03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:21.487     03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:09:21.487     03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob
00:09:21.487     03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:21.487     03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:21.487     03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:21.487      03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:21.487      03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:21.487      03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:21.487      03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH
00:09:21.487      03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:21.487    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:21.488    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:21.488    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:09:21.488  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:09:21.488    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:09:21.488    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:09:21.488    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0
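The "integer expression expected" error above comes from `test` (`[`) being handed an empty string where a number is required: `'[' '' -eq 1 ']'`. A hypothetical hardened form of that check defaults the variable to 0 before the integer comparison:

```shell
# Sketch (variable name is illustrative): default an empty/unset flag to 0
# so '[' never sees a non-integer operand in an -eq test.
flag=""                              # empty, as in the failing check at line 33
if [ "${flag:-0}" -eq 1 ]; then
  echo "enabled"
else
  echo "disabled"
fi
```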
00:09:21.488   03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf")
00:09:21.488   03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64
00:09:21.488   03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512
00:09:21.488   03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args
00:09:21.488   03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']'
00:09:21.488   03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000)
00:09:21.488   03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}")
00:09:21.488   03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test
00:09:21.488   03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:21.488   03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:09:21.488   03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit
00:09:21.488   03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:09:21.488   03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:09:21.488   03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs
00:09:21.488   03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no
00:09:21.488   03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns
00:09:21.488   03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:21.488   03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:21.488    03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:21.488   03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:09:21.488   03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:09:21.488   03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable
00:09:21.488   03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=()
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=()
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=()
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=()
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=()
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=()
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=()
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:09:24.027  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:09:24.027  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
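The `\0\x\1\0\1\7`-style strings in the trace above are just xtrace's character-by-character escaping of literal patterns: inside `[[ ... == pattern ]]` the right-hand side is a glob, and the matched PCI device ID selects the NIC family. A minimal sketch of that dispatch (device IDs taken from the trace; the `family` variable is illustrative):

```shell
device_id="0x159b"                  # E810-class port, as found in the log
if [[ $device_id == 0x1017 || $device_id == 0x1019 ]]; then
  family="mlx"                      # ConnectX-class Mellanox IDs from the mlx array
elif [[ $device_id == 0x159b || $device_id == 0x1592 ]]; then
  family="e810"                     # Intel E810 IDs from the e810 array
else
  family="unknown"
fi
echo "$family"
```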
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]]
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:09:24.027  Found net devices under 0000:0a:00.0: cvl_0_0
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]]
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:09:24.027  Found net devices under 0000:0a:00.1: cvl_0_1
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:09:24.027   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:09:24.028  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:24.028  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms
00:09:24.028  
00:09:24.028  --- 10.0.0.2 ping statistics ---
00:09:24.028  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:24.028  rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:24.028  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:24.028  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms
00:09:24.028  
00:09:24.028  --- 10.0.0.1 ping statistics ---
00:09:24.028  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:24.028  rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms
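The namespace setup traced above (moving the target port into a private netns, addressing both ends, bringing links up, then cross-pinging) reduces to the sequence below. Sketch only: `eth0`/`eth1` and the namespace name stand in for the `cvl_0_*` ports, and real use needs root and iproute2; it is printed as a dry run here.

```shell
# Dry-run wrapper: prints each command instead of executing it.
# Swap for: run() { "$@"; } to actually apply (requires root).
run() { echo "+ $*"; }

run ip netns add demo_ns_spdk
run ip link set eth1 netns demo_ns_spdk           # target port leaves default ns
run ip addr add 10.0.0.1/24 dev eth0              # initiator side
run ip netns exec demo_ns_spdk ip addr add 10.0.0.2/24 dev eth1   # target side
run ip link set eth0 up
run ip netns exec demo_ns_spdk ip link set eth1 up
run ip netns exec demo_ns_spdk ip link set lo up
```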
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=164698
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 164698
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 164698 ']'
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:24.028  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:24.028   03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:09:24.960   03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:24.960   03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0
00:09:24.960   03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example
00:09:24.960   03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:24.960   03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:09:24.960   03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:09:24.960   03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:24.960   03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:09:24.960   03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:24.960    03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512
00:09:24.960    03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:24.960    03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:09:24.960    03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:24.960   03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 '
00:09:24.960   03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:09:24.960   03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:24.960   03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:09:24.960   03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:24.960   03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs
00:09:24.960   03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:09:24.960   03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:24.960   03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:09:24.960   03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:24.960   03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:09:24.960   03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:24.960   03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:09:24.960   03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
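The `rpc_cmd` calls traced above amount to this RPC sequence against the running target (sketch as a dry run; the real harness invokes SPDK's `scripts/rpc.py` against `/var/tmp/spdk.sock`, and the `rpc` wrapper name here is illustrative):

```shell
# Dry-run wrapper; swap for the real scripts/rpc.py to apply.
rpc() { echo "+ rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```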
00:09:24.960   03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:09:24.960   03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:09:37.157  Initializing NVMe Controllers
00:09:37.158  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:37.158  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:37.158  Initialization complete. Launching workers.
00:09:37.158  ========================================================
00:09:37.158                                                                                                               Latency(us)
00:09:37.158  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:09:37.158  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:   14832.88      57.94    4314.57     896.47   15943.16
00:09:37.158  ========================================================
00:09:37.158  Total                                                                    :   14832.88      57.94    4314.57     896.47   15943.16
00:09:37.158  
00:09:37.158   04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:09:37.158   04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:09:37.158   04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:37.158   04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:09:37.158   04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:37.158   04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:09:37.158   04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:37.158   04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:37.158  rmmod nvme_tcp
00:09:37.158  rmmod nvme_fabrics
00:09:37.158  rmmod nvme_keyring
00:09:37.158   04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:37.158   04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:09:37.158   04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:09:37.158   04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 164698 ']'
00:09:37.158   04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 164698
00:09:37.158   04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 164698 ']'
00:09:37.158   04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 164698
00:09:37.158    04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname
00:09:37.158   04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:37.158    04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 164698
00:09:37.158   04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf
00:09:37.158   04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']'
00:09:37.158   04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 164698'
00:09:37.158  killing process with pid 164698
00:09:37.158   04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 164698
00:09:37.158   04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 164698
00:09:37.158  nvmf threads initialize successfully
00:09:37.158  bdev subsystem init successfully
00:09:37.158  created a nvmf target service
00:09:37.158  create target's poll groups done
00:09:37.158  all subsystems of target started
00:09:37.158  nvmf target is running
00:09:37.158  all subsystems of target stopped
00:09:37.158  destroy target's poll groups done
00:09:37.158  destroyed the nvmf target service
00:09:37.158  bdev subsystem finish successfully
00:09:37.158  nvmf threads destroy successfully
00:09:37.158   04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:37.158   04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:09:37.158   04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:09:37.158   04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr
00:09:37.158   04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save
00:09:37.158   04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:09:37.158   04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore
00:09:37.158   04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:09:37.158   04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns
00:09:37.158   04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:37.158   04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:37.158    04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:37.726   04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
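The `iptr` helper traced above removes only the firewall rules the harness itself added, by filtering the saved ruleset on the `SPDK_NVMF` comment tag before restoring it (live use pipes `iptables-save` through `grep -v` into `iptables-restore` as root). The filtering step can be sketched on a sample ruleset; the second rule is a hypothetical unrelated entry:

```shell
# Sample saved ruleset: first rule carries the SPDK_NVMF tag from the setup
# phase, second is an unrelated pre-existing rule that must survive.
ruleset='-A INPUT -i cvl_0_1 -p tcp --dport 4420 -m comment --comment "SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT" -j ACCEPT
-A INPUT -s 192.0.2.0/24 -j ACCEPT'

# Drop only the tagged rules, keep everything else.
kept=$(printf '%s\n' "$ruleset" | grep -v SPDK_NVMF)
echo "$kept"
```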
00:09:37.726   04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:09:37.726   04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:37.726   04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:09:37.726  
00:09:37.726  real	0m16.344s
00:09:37.726  user	0m46.061s
00:09:37.726  sys	0m3.362s
00:09:37.726   04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:37.726   04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:09:37.726  ************************************
00:09:37.726  END TEST nvmf_example
00:09:37.726  ************************************
00:09:37.726   04:00:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:09:37.726   04:00:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:37.726   04:00:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:37.726   04:00:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:09:37.726  ************************************
00:09:37.726  START TEST nvmf_filesystem
00:09:37.726  ************************************
00:09:37.726   04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:09:37.726  * Looking for test storage...
00:09:37.726  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:37.726      04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version
00:09:37.726      04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-:
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-:
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<'
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:37.726      04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1
00:09:37.726      04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1
00:09:37.726      04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:37.726      04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1
00:09:37.726      04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2
00:09:37.726      04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2
00:09:37.726      04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:37.726      04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:37.726  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:37.726  		--rc genhtml_branch_coverage=1
00:09:37.726  		--rc genhtml_function_coverage=1
00:09:37.726  		--rc genhtml_legend=1
00:09:37.726  		--rc geninfo_all_blocks=1
00:09:37.726  		--rc geninfo_unexecuted_blocks=1
00:09:37.726  		
00:09:37.726  		'
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:37.726  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:37.726  		--rc genhtml_branch_coverage=1
00:09:37.726  		--rc genhtml_function_coverage=1
00:09:37.726  		--rc genhtml_legend=1
00:09:37.726  		--rc geninfo_all_blocks=1
00:09:37.726  		--rc geninfo_unexecuted_blocks=1
00:09:37.726  		
00:09:37.726  		'
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:37.726  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:37.726  		--rc genhtml_branch_coverage=1
00:09:37.726  		--rc genhtml_function_coverage=1
00:09:37.726  		--rc genhtml_legend=1
00:09:37.726  		--rc geninfo_all_blocks=1
00:09:37.726  		--rc geninfo_unexecuted_blocks=1
00:09:37.726  		
00:09:37.726  		'
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:37.726  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:37.726  		--rc genhtml_branch_coverage=1
00:09:37.726  		--rc genhtml_function_coverage=1
00:09:37.726  		--rc genhtml_legend=1
00:09:37.726  		--rc geninfo_all_blocks=1
00:09:37.726  		--rc geninfo_unexecuted_blocks=1
00:09:37.726  		
00:09:37.726  		'
00:09:37.726   04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh
00:09:37.726    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd
00:09:37.726    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e
00:09:37.726    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob
00:09:37.726    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob
00:09:37.726    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit
00:09:37.726    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']'
00:09:37.726    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]]
00:09:37.726    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n
00:09:37.726     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH=
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH=
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR=
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR=
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH=
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR=
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH=
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR=
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX=
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n
00:09:37.727    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh
00:09:37.727       04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh
00:09:37.727      04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt")
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt")
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost")
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd")
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt")
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]]
00:09:37.727     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:09:37.727  #define SPDK_CONFIG_H
00:09:37.727  #define SPDK_CONFIG_AIO_FSDEV 1
00:09:37.727  #define SPDK_CONFIG_APPS 1
00:09:37.727  #define SPDK_CONFIG_ARCH native
00:09:37.727  #undef SPDK_CONFIG_ASAN
00:09:37.727  #undef SPDK_CONFIG_AVAHI
00:09:37.727  #undef SPDK_CONFIG_CET
00:09:37.727  #define SPDK_CONFIG_COPY_FILE_RANGE 1
00:09:37.727  #define SPDK_CONFIG_COVERAGE 1
00:09:37.727  #define SPDK_CONFIG_CROSS_PREFIX 
00:09:37.727  #undef SPDK_CONFIG_CRYPTO
00:09:37.727  #undef SPDK_CONFIG_CRYPTO_MLX5
00:09:37.727  #undef SPDK_CONFIG_CUSTOMOCF
00:09:37.727  #undef SPDK_CONFIG_DAOS
00:09:37.727  #define SPDK_CONFIG_DAOS_DIR 
00:09:37.727  #define SPDK_CONFIG_DEBUG 1
00:09:37.727  #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:09:37.727  #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:09:37.727  #define SPDK_CONFIG_DPDK_INC_DIR 
00:09:37.727  #define SPDK_CONFIG_DPDK_LIB_DIR 
00:09:37.727  #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:09:37.727  #undef SPDK_CONFIG_DPDK_UADK
00:09:37.727  #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:09:37.727  #define SPDK_CONFIG_EXAMPLES 1
00:09:37.727  #undef SPDK_CONFIG_FC
00:09:37.727  #define SPDK_CONFIG_FC_PATH 
00:09:37.727  #define SPDK_CONFIG_FIO_PLUGIN 1
00:09:37.727  #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:09:37.727  #define SPDK_CONFIG_FSDEV 1
00:09:37.727  #undef SPDK_CONFIG_FUSE
00:09:37.727  #undef SPDK_CONFIG_FUZZER
00:09:37.727  #define SPDK_CONFIG_FUZZER_LIB 
00:09:37.727  #undef SPDK_CONFIG_GOLANG
00:09:37.727  #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:09:37.727  #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:09:37.727  #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:09:37.727  #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:09:37.727  #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:09:37.727  #undef SPDK_CONFIG_HAVE_LIBBSD
00:09:37.727  #undef SPDK_CONFIG_HAVE_LZ4
00:09:37.727  #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1
00:09:37.728  #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC
00:09:37.728  #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:09:37.728  #define SPDK_CONFIG_IDXD 1
00:09:37.728  #define SPDK_CONFIG_IDXD_KERNEL 1
00:09:37.728  #undef SPDK_CONFIG_IPSEC_MB
00:09:37.728  #define SPDK_CONFIG_IPSEC_MB_DIR 
00:09:37.728  #define SPDK_CONFIG_ISAL 1
00:09:37.728  #define SPDK_CONFIG_ISAL_CRYPTO 1
00:09:37.728  #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:09:37.728  #define SPDK_CONFIG_LIBDIR 
00:09:37.728  #undef SPDK_CONFIG_LTO
00:09:37.728  #define SPDK_CONFIG_MAX_LCORES 128
00:09:37.728  #define SPDK_CONFIG_MAX_NUMA_NODES 1
00:09:37.728  #define SPDK_CONFIG_NVME_CUSE 1
00:09:37.728  #undef SPDK_CONFIG_OCF
00:09:37.728  #define SPDK_CONFIG_OCF_PATH 
00:09:37.728  #define SPDK_CONFIG_OPENSSL_PATH 
00:09:37.728  #undef SPDK_CONFIG_PGO_CAPTURE
00:09:37.728  #define SPDK_CONFIG_PGO_DIR 
00:09:37.728  #undef SPDK_CONFIG_PGO_USE
00:09:37.728  #define SPDK_CONFIG_PREFIX /usr/local
00:09:37.728  #undef SPDK_CONFIG_RAID5F
00:09:37.728  #undef SPDK_CONFIG_RBD
00:09:37.728  #define SPDK_CONFIG_RDMA 1
00:09:37.728  #define SPDK_CONFIG_RDMA_PROV verbs
00:09:37.728  #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:09:37.728  #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:09:37.728  #define SPDK_CONFIG_RDMA_SET_TOS 1
00:09:37.728  #define SPDK_CONFIG_SHARED 1
00:09:37.728  #undef SPDK_CONFIG_SMA
00:09:37.728  #define SPDK_CONFIG_TESTS 1
00:09:37.728  #undef SPDK_CONFIG_TSAN
00:09:37.728  #define SPDK_CONFIG_UBLK 1
00:09:37.728  #define SPDK_CONFIG_UBSAN 1
00:09:37.728  #undef SPDK_CONFIG_UNIT_TESTS
00:09:37.728  #undef SPDK_CONFIG_URING
00:09:37.728  #define SPDK_CONFIG_URING_PATH 
00:09:37.728  #undef SPDK_CONFIG_URING_ZNS
00:09:37.728  #undef SPDK_CONFIG_USDT
00:09:37.728  #undef SPDK_CONFIG_VBDEV_COMPRESS
00:09:37.728  #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:09:37.728  #define SPDK_CONFIG_VFIO_USER 1
00:09:37.728  #define SPDK_CONFIG_VFIO_USER_DIR 
00:09:37.728  #define SPDK_CONFIG_VHOST 1
00:09:37.728  #define SPDK_CONFIG_VIRTIO 1
00:09:37.728  #undef SPDK_CONFIG_VTUNE
00:09:37.728  #define SPDK_CONFIG_VTUNE_DIR 
00:09:37.728  #define SPDK_CONFIG_WERROR 1
00:09:37.728  #define SPDK_CONFIG_WPDK_DIR 
00:09:37.728  #undef SPDK_CONFIG_XNVME
00:09:37.728  #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
00:09:37.728     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS ))
00:09:37.728    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:09:37.728     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob
00:09:37.728     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:37.728     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:37.728     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:37.728      04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:37.728      04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:37.728      04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:37.728      04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH
00:09:37.728      04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:37.728    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common
00:09:37.728       04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common
00:09:37.728      04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm
00:09:37.728     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm
00:09:37.728      04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../
00:09:37.728     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:09:37.728     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A
00:09:37.728     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name
00:09:37.728     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
00:09:37.728      04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s
00:09:37.728     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux
00:09:37.728     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=()
00:09:37.728     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO
00:09:37.728     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1
00:09:37.728     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0
00:09:37.728     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0
00:09:37.728     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0
00:09:37.728     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]=
00:09:37.728     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E'
00:09:37.728     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
00:09:37.728     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]]
00:09:37.728     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]]
00:09:37.728     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]]
00:09:37.728     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]]
00:09:37.728     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp)
00:09:37.728     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm)
00:09:37.728     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]]
00:09:37.728    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0
00:09:37.728    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY
00:09:37.728    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0
00:09:37.728    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS
00:09:37.728    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0
00:09:37.728    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND
00:09:37.728    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1
00:09:37.728    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST
00:09:37.728    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0
00:09:37.728    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST
00:09:37.728    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # :
00:09:37.728    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD
00:09:37.728    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0
00:09:37.728    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD
00:09:37.728    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0
00:09:37.728    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL
00:09:37.728    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0
00:09:37.728    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI
00:09:37.728    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0
00:09:37.728    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR
00:09:37.728    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0
00:09:37.728    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME
00:09:37.728    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0
00:09:37.728    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR
00:09:37.728    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # :
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # :
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK
00:09:37.729    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # :
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:09:37.991    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']'
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV=
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]]
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]]
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh'
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]=
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt=
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']'
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind=
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind=
00:09:37.992     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']'
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j48
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=()
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE=
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@"
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 166632 ]]
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 166632
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]]
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates
00:09:37.992     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.ikf8J8
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]]
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]]
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.ikf8J8/tests/target /tmp/spdk.ikf8J8
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:09:37.992     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T
00:09:37.992     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=56177328128
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61988528128
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5811200000
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30984232960
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994264064
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12375277568
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12397707264
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22429696
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30993952768
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994264064
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=311296
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6198837248
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6198849536
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n'
00:09:37.992  * Looking for test storage...
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}"
00:09:37.992     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:09:37.992     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}'
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=56177328128
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size ))
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size ))
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]]
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]]
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]]
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8025792512
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 ))
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:09:37.992  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]]
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]]
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:37.992     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version
00:09:37.992     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-:
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-:
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<'
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:37.992     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1
00:09:37.992     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1
00:09:37.992     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:37.992     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1
00:09:37.992    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1
00:09:37.992     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2
00:09:37.992     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2
00:09:37.993     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:37.993     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2
00:09:37.993    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2
00:09:37.993    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:37.993    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:37.993    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0
00:09:37.993    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:37.993    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:37.993  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:37.993  		--rc genhtml_branch_coverage=1
00:09:37.993  		--rc genhtml_function_coverage=1
00:09:37.993  		--rc genhtml_legend=1
00:09:37.993  		--rc geninfo_all_blocks=1
00:09:37.993  		--rc geninfo_unexecuted_blocks=1
00:09:37.993  		
00:09:37.993  		'
00:09:37.993    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:37.993  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:37.993  		--rc genhtml_branch_coverage=1
00:09:37.993  		--rc genhtml_function_coverage=1
00:09:37.993  		--rc genhtml_legend=1
00:09:37.993  		--rc geninfo_all_blocks=1
00:09:37.993  		--rc geninfo_unexecuted_blocks=1
00:09:37.993  		
00:09:37.993  		'
00:09:37.993    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:37.993  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:37.993  		--rc genhtml_branch_coverage=1
00:09:37.993  		--rc genhtml_function_coverage=1
00:09:37.993  		--rc genhtml_legend=1
00:09:37.993  		--rc geninfo_all_blocks=1
00:09:37.993  		--rc geninfo_unexecuted_blocks=1
00:09:37.993  		
00:09:37.993  		'
00:09:37.993    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:37.993  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:37.993  		--rc genhtml_branch_coverage=1
00:09:37.993  		--rc genhtml_function_coverage=1
00:09:37.993  		--rc genhtml_legend=1
00:09:37.993  		--rc geninfo_all_blocks=1
00:09:37.993  		--rc geninfo_unexecuted_blocks=1
00:09:37.993  		
00:09:37.993  		'
00:09:37.993   04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:09:37.993     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s
00:09:37.993    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:37.993    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:37.993    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:37.993    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:37.993    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:37.993    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:37.993    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:37.993    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:37.993    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:37.993     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:37.993    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:09:37.993    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:09:37.993    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:37.993    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:37.993    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:09:37.993    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:37.993    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:09:37.993     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob
00:09:37.993     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:37.993     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:37.993     04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:37.993      04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:37.993      04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:37.993      04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:37.993      04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH
00:09:37.993      04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:37.993    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0
00:09:37.993    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:09:37.993    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:09:37.993    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:37.993    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:37.993    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:37.993    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:09:37.993  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:09:37.993    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:09:37.993    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:09:37.993    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0
00:09:37.993   04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512
00:09:37.993   04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:09:37.993   04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit
00:09:37.993   04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:09:37.993   04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:09:37.993   04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs
00:09:37.993   04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no
00:09:37.993   04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns
00:09:37.993   04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:37.993   04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:37.993    04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:37.993   04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:09:37.993   04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:09:37.993   04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable
00:09:37.993   04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:09:40.527   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:09:40.527   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=()
00:09:40.527   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs
00:09:40.527   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=()
00:09:40.527   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:09:40.527   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=()
00:09:40.527   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers
00:09:40.527   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=()
00:09:40.527   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs
00:09:40.527   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=()
00:09:40.527   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810
00:09:40.527   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=()
00:09:40.527   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722
00:09:40.527   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=()
00:09:40.527   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx
00:09:40.527   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:09:40.527   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:09:40.527   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:09:40.527   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:09:40.528  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:09:40.528  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]]
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:09:40.528  Found net devices under 0000:0a:00.0: cvl_0_0
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]]
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:09:40.528  Found net devices under 0000:0a:00.1: cvl_0_1
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:09:40.528  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:40.528  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms
00:09:40.528  
00:09:40.528  --- 10.0.0.2 ping statistics ---
00:09:40.528  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:40.528  rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:40.528  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:40.528  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms
00:09:40.528  
00:09:40.528  --- 10.0.0.1 ping statistics ---
00:09:40.528  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:40.528  rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
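The nvmf_tcp_init sequence above moves the target-side interface into a network namespace, assigns the 10.0.0.1/10.0.0.2 pair, and opens TCP port 4420 through iptables. A dry-run sketch of that flow, under the interface names the log uses (cvl_0_0, cvl_0_1); the `run` wrapper only echoes, since the real commands need root and the actual NICs:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init steps traced above.
# "run" echoes instead of executing, so no root or hardware is needed.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                          # target NIC lives in the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP (host side)
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP (namespace side)
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open NVMe/TCP port
```

The two `ping -c 1` probes that follow in the log then verify both directions across the namespace boundary before the target app is started with `ip netns exec`.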
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:09:40.528  ************************************
00:09:40.528  START TEST nvmf_filesystem_no_in_capsule
00:09:40.528  ************************************
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0
00:09:40.528   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0
00:09:40.529   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:09:40.529   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:09:40.529   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:40.529   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:40.529   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=168739
00:09:40.529   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 168739
00:09:40.529   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:09:40.529   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 168739 ']'
00:09:40.529   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:40.529   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:40.529   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:40.529  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:40.529   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:40.529   04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:40.529  [2024-12-09 04:00:08.858945] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:09:40.529  [2024-12-09 04:00:08.859015] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:40.529  [2024-12-09 04:00:08.936186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:40.529  [2024-12-09 04:00:08.996183] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:40.529  [2024-12-09 04:00:08.996236] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:40.529  [2024-12-09 04:00:08.996264] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:40.529  [2024-12-09 04:00:08.996283] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:40.529  [2024-12-09 04:00:08.996294] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:09:40.529  [2024-12-09 04:00:08.997809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:40.529  [2024-12-09 04:00:08.997837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:40.529  [2024-12-09 04:00:08.997896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:09:40.529  [2024-12-09 04:00:08.997899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:40.786   04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:40.786   04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0
00:09:40.786   04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:09:40.786   04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:40.786   04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:40.786   04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:09:40.786   04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1
00:09:40.786   04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
00:09:40.786   04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:40.786   04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:40.786  [2024-12-09 04:00:09.139997] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:40.786   04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:40.786   04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1
00:09:40.786   04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:40.786   04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:40.786  Malloc1
00:09:40.786   04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:40.786   04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:09:40.786   04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:40.786   04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:40.787   04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:40.787   04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:09:40.787   04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:40.787   04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:40.787   04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:40.787   04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:09:40.787   04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:40.787   04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:40.787  [2024-12-09 04:00:09.332858] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:09:40.787   04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:40.787    04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1
00:09:40.787    04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1
00:09:40.787    04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info
00:09:40.787    04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs
00:09:40.787    04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb
00:09:40.787     04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:09:40.787     04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:40.787     04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:40.787     04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:40.787    04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[
00:09:40.787  {
00:09:40.787  "name": "Malloc1",
00:09:40.787  "aliases": [
00:09:40.787  "690ef8a0-6331-4ce3-9424-a00e4af070cc"
00:09:40.787  ],
00:09:40.787  "product_name": "Malloc disk",
00:09:40.787  "block_size": 512,
00:09:40.787  "num_blocks": 1048576,
00:09:40.787  "uuid": "690ef8a0-6331-4ce3-9424-a00e4af070cc",
00:09:40.787  "assigned_rate_limits": {
00:09:40.787  "rw_ios_per_sec": 0,
00:09:40.787  "rw_mbytes_per_sec": 0,
00:09:40.787  "r_mbytes_per_sec": 0,
00:09:40.787  "w_mbytes_per_sec": 0
00:09:40.787  },
00:09:40.787  "claimed": true,
00:09:40.787  "claim_type": "exclusive_write",
00:09:40.787  "zoned": false,
00:09:40.787  "supported_io_types": {
00:09:40.787  "read": true,
00:09:40.787  "write": true,
00:09:40.787  "unmap": true,
00:09:40.787  "flush": true,
00:09:40.787  "reset": true,
00:09:40.787  "nvme_admin": false,
00:09:40.787  "nvme_io": false,
00:09:40.787  "nvme_io_md": false,
00:09:40.787  "write_zeroes": true,
00:09:40.787  "zcopy": true,
00:09:40.787  "get_zone_info": false,
00:09:40.787  "zone_management": false,
00:09:40.787  "zone_append": false,
00:09:40.787  "compare": false,
00:09:40.787  "compare_and_write": false,
00:09:40.787  "abort": true,
00:09:40.787  "seek_hole": false,
00:09:40.787  "seek_data": false,
00:09:40.787  "copy": true,
00:09:40.787  "nvme_iov_md": false
00:09:40.787  },
00:09:40.787  "memory_domains": [
00:09:40.787  {
00:09:40.787  "dma_device_id": "system",
00:09:40.787  "dma_device_type": 1
00:09:40.787  },
00:09:40.787  {
00:09:40.787  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:40.787  "dma_device_type": 2
00:09:40.787  }
00:09:40.787  ],
00:09:40.787  "driver_specific": {}
00:09:40.787  }
00:09:40.787  ]'
00:09:40.787     04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:09:41.044    04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512
00:09:41.044     04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:09:41.044    04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576
00:09:41.044    04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512
00:09:41.045    04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512
00:09:41.045   04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
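get_bdev_size above derives the malloc bdev's size from `bdev_get_bdevs` output: block_size times num_blocks, which the helper reports in MiB (here 512 × 1048576 blocks = 512 MiB = 536870912 bytes, the malloc_size the test compares against). A minimal sketch of the same arithmetic in plain bash, with the field values copied from the JSON shown in the log:

```shell
# Recompute the Malloc1 size the way get_bdev_size does:
# bytes = block_size * num_blocks; the helper then echoes MiB.
bs=512          # "block_size" from bdev_get_bdevs
nb=1048576      # "num_blocks" from bdev_get_bdevs
bytes=$((bs * nb))
mib=$((bytes / 1024 / 1024))
echo "$bytes bytes = $mib MiB"   # 536870912 bytes = 512 MiB
```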
00:09:41.045   04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:09:41.610   04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME
00:09:41.610   04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0
00:09:41.610   04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:09:41.610   04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:09:41.610   04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2
00:09:44.137   04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:09:44.137    04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:09:44.137    04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:09:44.137   04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:09:44.137   04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:09:44.137   04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0
00:09:44.137    04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL
00:09:44.137    04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'
00:09:44.137   04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1
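filesystem.sh line 63 maps the subsystem serial back to a block device with a Perl-regex lookahead over `lsblk -l -o NAME,SERIAL`. A sketch of that extraction against canned lsblk output (the device name and serial mirror the log; the real run reads live lsblk, and `grep -P` assumes GNU grep with PCRE support):

```shell
# Pull the NAME column of the row whose SERIAL matches, using the same
# pattern as the test: a word immediately followed by whitespace + serial.
lsblk_out='NAME    SERIAL
sda
nvme0n1 SPDKISFASTANDAWESOME'
nvme_name=$(printf '%s\n' "$lsblk_out" | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
echo "$nvme_name"   # nvme0n1
```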
00:09:44.137    04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1
00:09:44.137    04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1
00:09:44.137    04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:09:44.137    04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912
00:09:44.137   04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912
00:09:44.137   04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device
00:09:44.137   04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size ))
00:09:44.137   04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
00:09:44.137   04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe
00:09:44.395   04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1

00:09:45.783   04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']'
00:09:45.783   04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1
00:09:45.783   04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:09:45.783   04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:45.783   04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:45.783  ************************************
00:09:45.783  START TEST filesystem_ext4
00:09:45.783  ************************************
00:09:45.783   04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1
00:09:45.783   04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4
00:09:45.783   04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:09:45.783   04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1
00:09:45.783   04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4
00:09:45.783   04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:09:45.783   04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0
00:09:45.783   04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force
00:09:45.783   04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']'
00:09:45.783   04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F
00:09:45.783   04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:09:45.783  mke2fs 1.47.0 (5-Feb-2023)
00:09:45.783  Discarding device blocks: done
00:09:45.783  Creating filesystem with 522240 1k blocks and 130560 inodes
00:09:45.783  Filesystem UUID: 0a73c6fd-23c5-491d-946d-7999aaf48e7f
00:09:45.783  Superblock backups stored on blocks: 
00:09:45.783  	8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:09:45.783  
00:09:45.783  Allocating group tables: done
00:09:45.783  Writing inode tables: done
00:09:46.346  Creating journal (8192 blocks): done
00:09:47.276  Writing superblocks and filesystem accounting information: done
00:09:47.276  
00:09:47.276   04:00:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0
00:09:47.276   04:00:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:09:53.830   04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:09:53.830   04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync
00:09:53.830   04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:09:53.830   04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync
00:09:53.830   04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0
00:09:53.830   04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:09:53.830   04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 168739
00:09:53.830   04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:09:53.830   04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:09:53.830   04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:09:53.830   04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:09:53.830  
00:09:53.830  real	0m7.952s
00:09:53.830  user	0m0.015s
00:09:53.830  sys	0m0.093s
00:09:53.830   04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:53.830   04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x
00:09:53.830  ************************************
00:09:53.830  END TEST filesystem_ext4
00:09:53.830  ************************************
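Each filesystem_* subtest above runs the same smoke check: mkfs on the partition, mount at /mnt/device, create/sync/remove a file, unmount, then confirm via lsblk that nvme0n1 and nvme0n1p1 are still visible while the target (pid 168739 here) is alive. A sketch of the data-path portion, using a temporary directory in place of the mounted NVMe partition so it needs neither root nor a device (the real test operates on /dev/nvme0n1p1):

```shell
# Mimic filesystem.sh lines 24-30: touch a file, sync, remove it, sync.
# A mktemp directory stands in for the mounted filesystem.
mnt=$(mktemp -d)
touch "$mnt/aaa"
sync
[ -f "$mnt/aaa" ]        # the file landed on the "filesystem"
rm "$mnt/aaa"
sync
[ ! -e "$mnt/aaa" ]      # and was removed cleanly
rmdir "$mnt"
```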
00:09:53.830   04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1
00:09:53.830   04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:09:53.830   04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:53.830   04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:53.830  ************************************
00:09:53.830  START TEST filesystem_btrfs
00:09:53.830  ************************************
00:09:53.830   04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1
00:09:53.830   04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs
00:09:53.830   04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:09:53.830   04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1
00:09:53.830   04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs
00:09:53.830   04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:09:53.830   04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0
00:09:53.830   04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force
00:09:53.830   04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']'
00:09:53.830   04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f
00:09:53.830   04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:09:53.830  btrfs-progs v6.8.1
00:09:53.830  See https://btrfs.readthedocs.io for more information.
00:09:53.830  
00:09:53.830  Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:09:53.830  NOTE: several default settings have changed in version 5.15, please make sure
00:09:53.830        this does not affect your deployments:
00:09:53.830        - DUP for metadata (-m dup)
00:09:53.830        - enabled no-holes (-O no-holes)
00:09:53.830        - enabled free-space-tree (-R free-space-tree)
00:09:53.830  
00:09:53.830  Label:              (null)
00:09:53.830  UUID:               9c23c8bc-bb9f-4c59-916f-60423f704582
00:09:53.830  Node size:          16384
00:09:53.830  Sector size:        4096	(CPU page size: 4096)
00:09:53.830  Filesystem size:    510.00MiB
00:09:53.830  Block group profiles:
00:09:53.830    Data:             single            8.00MiB
00:09:53.830    Metadata:         DUP              32.00MiB
00:09:53.830    System:           DUP               8.00MiB
00:09:53.830  SSD detected:       yes
00:09:53.830  Zoned device:       no
00:09:53.830  Features:           extref, skinny-metadata, no-holes, free-space-tree
00:09:53.830  Checksum:           crc32c
00:09:53.830  Number of devices:  1
00:09:53.830  Devices:
00:09:53.830     ID        SIZE  PATH          
00:09:53.830      1   510.00MiB  /dev/nvme0n1p1
00:09:53.830  
00:09:53.830   04:00:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0
00:09:53.830   04:00:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:09:54.766   04:00:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:09:54.766   04:00:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync
00:09:54.766   04:00:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:09:54.766   04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync
00:09:54.766   04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0
00:09:54.766   04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:09:54.766   04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 168739
00:09:54.766   04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:09:54.766   04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:09:54.766   04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:09:54.766   04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:09:54.766  
00:09:54.766  real	0m1.095s
00:09:54.766  user	0m0.014s
00:09:54.766  sys	0m0.129s
00:09:54.766   04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:54.766   04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x
00:09:54.766  ************************************
00:09:54.766  END TEST filesystem_btrfs
00:09:54.767  ************************************
00:09:54.767   04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1
00:09:54.767   04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:09:54.767   04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:54.767   04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:54.767  ************************************
00:09:54.767  START TEST filesystem_xfs
00:09:54.767  ************************************
00:09:54.767   04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1
00:09:54.767   04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:09:54.767   04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:09:54.767   04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:09:54.767   04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs
00:09:54.767   04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:09:54.767   04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0
00:09:54.767   04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force
00:09:54.767   04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']'
00:09:54.767   04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f
00:09:54.767   04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1
00:09:54.767  meta-data=/dev/nvme0n1p1         isize=512    agcount=4, agsize=32640 blks
00:09:54.767           =                       sectsz=512   attr=2, projid32bit=1
00:09:54.767           =                       crc=1        finobt=1, sparse=1, rmapbt=0
00:09:54.767           =                       reflink=1    bigtime=1 inobtcount=1 nrext64=0
00:09:54.767  data     =                       bsize=4096   blocks=130560, imaxpct=25
00:09:54.767           =                       sunit=0      swidth=0 blks
00:09:54.767  naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
00:09:54.767  log      =internal log           bsize=4096   blocks=16384, version=2
00:09:54.767           =                       sectsz=512   sunit=0 blks, lazy-count=1
00:09:54.767  realtime =none                   extsz=4096   blocks=0, rtextents=0
00:09:55.701  Discarding blocks...Done.
00:09:55.701   04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0
00:09:55.701   04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:09:59.004   04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:09:59.004   04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync
00:09:59.004   04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:09:59.004   04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync
00:09:59.004   04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0
00:09:59.004   04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:09:59.004   04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 168739
00:09:59.004   04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:09:59.004   04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:09:59.004   04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:09:59.004   04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:09:59.004  
00:09:59.004  real	0m3.860s
00:09:59.004  user	0m0.018s
00:09:59.004  sys	0m0.089s
00:09:59.004   04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:59.004   04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x
00:09:59.004  ************************************
00:09:59.004  END TEST filesystem_xfs
00:09:59.004  ************************************
00:09:59.004   04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
00:09:59.004   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync
00:09:59.004   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:09:59.004  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:59.004   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:09:59.004   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0
00:09:59.004   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:09:59.004   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:09:59.004   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:09:59.004   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:09:59.004   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0
00:09:59.004   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:09:59.004   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.004   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:59.004   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.004   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:09:59.004   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 168739
00:09:59.004   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 168739 ']'
00:09:59.004   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 168739
00:09:59.004    04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname
00:09:59.004   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:59.004    04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 168739
00:09:59.004   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:59.004   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:59.004   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 168739'
00:09:59.004  killing process with pid 168739
00:09:59.004   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 168739
00:09:59.004   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 168739
00:09:59.262   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid=
00:09:59.262  
00:09:59.262  real	0m18.978s
00:09:59.262  user	1m13.589s
00:09:59.262  sys	0m2.297s
00:09:59.262   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:59.262   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:59.262  ************************************
00:09:59.262  END TEST nvmf_filesystem_no_in_capsule
00:09:59.262  ************************************
00:09:59.262   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096
00:09:59.262   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:59.262   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:59.262   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:09:59.553  ************************************
00:09:59.553  START TEST nvmf_filesystem_in_capsule
00:09:59.553  ************************************
00:09:59.553   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096
00:09:59.553   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096
00:09:59.553   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:09:59.553   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:09:59.553   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:59.553   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:59.553   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=171269
00:09:59.553   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:09:59.553   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 171269
00:09:59.553   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 171269 ']'
00:09:59.553   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:59.553   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:59.553   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:59.553  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:59.553   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:59.553   04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:59.553  [2024-12-09 04:00:27.896692] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:09:59.553  [2024-12-09 04:00:27.896805] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:59.553  [2024-12-09 04:00:27.972164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:59.553  [2024-12-09 04:00:28.032506] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:59.553  [2024-12-09 04:00:28.032567] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:59.553  [2024-12-09 04:00:28.032596] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:59.553  [2024-12-09 04:00:28.032607] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:59.553  [2024-12-09 04:00:28.032617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:09:59.553  [2024-12-09 04:00:28.034067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:59.553  [2024-12-09 04:00:28.034127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:59.553  [2024-12-09 04:00:28.034150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:09:59.553  [2024-12-09 04:00:28.034157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:59.811   04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:59.811   04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0
00:09:59.811   04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:09:59.811   04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:59.811   04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:59.811   04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:09:59.811   04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1
00:09:59.811   04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096
00:09:59.811   04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.811   04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:59.811  [2024-12-09 04:00:28.179847] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:59.811   04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.811   04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1
00:09:59.811   04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.811   04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:59.811  Malloc1
00:09:59.811   04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.811   04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:09:59.811   04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.811   04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:59.811   04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.811   04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:09:59.811   04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.811   04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:59.811   04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.811   04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:09:59.811   04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.811   04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:59.811  [2024-12-09 04:00:28.359846] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:09:59.811   04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.811    04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1
00:09:59.811    04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1
00:09:59.811    04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info
00:09:59.811    04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs
00:09:59.811    04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb
00:09:59.811     04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:09:59.811     04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.811     04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:59.811     04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.811    04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[
00:09:59.811  {
00:09:59.811  "name": "Malloc1",
00:09:59.811  "aliases": [
00:09:59.811  "69911007-2c1a-4d99-9fe8-055371d313d7"
00:09:59.811  ],
00:09:59.811  "product_name": "Malloc disk",
00:09:59.811  "block_size": 512,
00:09:59.811  "num_blocks": 1048576,
00:09:59.811  "uuid": "69911007-2c1a-4d99-9fe8-055371d313d7",
00:09:59.811  "assigned_rate_limits": {
00:09:59.811  "rw_ios_per_sec": 0,
00:09:59.811  "rw_mbytes_per_sec": 0,
00:09:59.811  "r_mbytes_per_sec": 0,
00:09:59.811  "w_mbytes_per_sec": 0
00:09:59.811  },
00:09:59.811  "claimed": true,
00:09:59.811  "claim_type": "exclusive_write",
00:09:59.811  "zoned": false,
00:09:59.811  "supported_io_types": {
00:09:59.811  "read": true,
00:09:59.811  "write": true,
00:09:59.811  "unmap": true,
00:09:59.811  "flush": true,
00:09:59.811  "reset": true,
00:09:59.811  "nvme_admin": false,
00:09:59.811  "nvme_io": false,
00:09:59.811  "nvme_io_md": false,
00:09:59.811  "write_zeroes": true,
00:09:59.811  "zcopy": true,
00:09:59.811  "get_zone_info": false,
00:09:59.811  "zone_management": false,
00:09:59.811  "zone_append": false,
00:09:59.811  "compare": false,
00:09:59.811  "compare_and_write": false,
00:09:59.811  "abort": true,
00:09:59.811  "seek_hole": false,
00:09:59.811  "seek_data": false,
00:09:59.811  "copy": true,
00:09:59.811  "nvme_iov_md": false
00:09:59.811  },
00:09:59.811  "memory_domains": [
00:09:59.811  {
00:09:59.812  "dma_device_id": "system",
00:09:59.812  "dma_device_type": 1
00:09:59.812  },
00:09:59.812  {
00:09:59.812  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:59.812  "dma_device_type": 2
00:09:59.812  }
00:09:59.812  ],
00:09:59.812  "driver_specific": {}
00:09:59.812  }
00:09:59.812  ]'
00:09:59.812     04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:10:00.069    04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512
00:10:00.069     04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:10:00.069    04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576
00:10:00.069    04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512
00:10:00.069    04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512
00:10:00.069   04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
00:10:00.069   04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:10:00.635   04:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME
00:10:00.635   04:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0
00:10:00.636   04:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:10:00.636   04:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:10:00.636   04:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2
00:10:02.532   04:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:10:02.532    04:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:10:02.532    04:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:10:02.532   04:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:10:02.532   04:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:10:02.532   04:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0
00:10:02.532    04:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL
00:10:02.532    04:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'
00:10:02.532   04:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1
00:10:02.532    04:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1
00:10:02.532    04:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1
00:10:02.532    04:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:10:02.532    04:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912
00:10:02.532   04:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912
00:10:02.532   04:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device
00:10:02.790   04:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size ))
00:10:02.790   04:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
00:10:02.790   04:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe
00:10:03.355   04:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1
00:10:04.288   04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']'
00:10:04.288   04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1
00:10:04.288   04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:10:04.288   04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:04.288   04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:04.288  ************************************
00:10:04.288  START TEST filesystem_in_capsule_ext4
00:10:04.288  ************************************
00:10:04.288   04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1
00:10:04.288   04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4
00:10:04.288   04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:10:04.288   04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1
00:10:04.288   04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4
00:10:04.288   04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:10:04.288   04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0
00:10:04.288   04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force
00:10:04.288   04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']'
00:10:04.288   04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F
00:10:04.288   04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:10:04.288  mke2fs 1.47.0 (5-Feb-2023)
00:10:04.546  Discarding device blocks: done
00:10:04.546  Creating filesystem with 522240 1k blocks and 130560 inodes
00:10:04.546  Filesystem UUID: 987be2e3-9211-403e-b0ee-e3ae0e56528a
00:10:04.546  Superblock backups stored on blocks: 
00:10:04.546  	8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:10:04.546  
00:10:04.546  Allocating group tables: done
00:10:04.546  Writing inode tables: done
00:10:04.804  Creating journal (8192 blocks): done
00:10:05.887  Writing superblocks and filesystem accounting information: done
00:10:05.887  
00:10:05.888   04:00:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0
00:10:05.888   04:00:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:10:12.442   04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:10:12.442   04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync
00:10:12.442   04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:10:12.442   04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync
00:10:12.442   04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0
00:10:12.443   04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:10:12.443   04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 171269
00:10:12.443   04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:10:12.443   04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:10:12.443   04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:10:12.443   04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:10:12.443  
00:10:12.443  real	0m7.077s
00:10:12.443  user	0m0.019s
00:10:12.443  sys	0m0.067s
00:10:12.443   04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:12.443   04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x
00:10:12.443  ************************************
00:10:12.443  END TEST filesystem_in_capsule_ext4
00:10:12.443  ************************************
00:10:12.443   04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1
00:10:12.443   04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:10:12.443   04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:12.443   04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:12.443  ************************************
00:10:12.443  START TEST filesystem_in_capsule_btrfs
00:10:12.443  ************************************
00:10:12.443   04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1
00:10:12.443   04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs
00:10:12.443   04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:10:12.443   04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1
00:10:12.443   04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs
00:10:12.443   04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:10:12.443   04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0
00:10:12.443   04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force
00:10:12.443   04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']'
00:10:12.443   04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f
00:10:12.443   04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:10:12.443  btrfs-progs v6.8.1
00:10:12.443  See https://btrfs.readthedocs.io for more information.
00:10:12.443  
00:10:12.443  Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:10:12.443  NOTE: several default settings have changed in version 5.15, please make sure
00:10:12.443        this does not affect your deployments:
00:10:12.443        - DUP for metadata (-m dup)
00:10:12.443        - enabled no-holes (-O no-holes)
00:10:12.443        - enabled free-space-tree (-R free-space-tree)
00:10:12.443  
00:10:12.443  Label:              (null)
00:10:12.443  UUID:               72304c39-1933-4684-95bf-64fb13a0d1c3
00:10:12.443  Node size:          16384
00:10:12.443  Sector size:        4096	(CPU page size: 4096)
00:10:12.443  Filesystem size:    510.00MiB
00:10:12.443  Block group profiles:
00:10:12.443    Data:             single            8.00MiB
00:10:12.443    Metadata:         DUP              32.00MiB
00:10:12.443    System:           DUP               8.00MiB
00:10:12.443  SSD detected:       yes
00:10:12.443  Zoned device:       no
00:10:12.443  Features:           extref, skinny-metadata, no-holes, free-space-tree
00:10:12.443  Checksum:           crc32c
00:10:12.443  Number of devices:  1
00:10:12.443  Devices:
00:10:12.443     ID        SIZE  PATH          
00:10:12.443      1   510.00MiB  /dev/nvme0n1p1
00:10:12.443  
00:10:12.443   04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0
00:10:12.443   04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:10:12.443   04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:10:12.443   04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync
00:10:12.443   04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:10:12.443   04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync
00:10:12.443   04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0
00:10:12.443   04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:10:12.443   04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 171269
00:10:12.443   04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:10:12.443   04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:10:12.443   04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:10:12.443   04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:10:12.443  
00:10:12.443  real	0m0.528s
00:10:12.443  user	0m0.015s
00:10:12.443  sys	0m0.100s
00:10:12.443   04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:12.443   04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x
00:10:12.443  ************************************
00:10:12.443  END TEST filesystem_in_capsule_btrfs
00:10:12.443  ************************************
00:10:12.443   04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1
00:10:12.443   04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:10:12.443   04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:12.443   04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:12.443  ************************************
00:10:12.443  START TEST filesystem_in_capsule_xfs
00:10:12.443  ************************************
00:10:12.443   04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1
00:10:12.443   04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:10:12.443   04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:10:12.443   04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:10:12.443   04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs
00:10:12.443   04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:10:12.443   04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0
00:10:12.443   04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force
00:10:12.443   04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']'
00:10:12.443   04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f
00:10:12.443   04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1
00:10:12.443  meta-data=/dev/nvme0n1p1         isize=512    agcount=4, agsize=32640 blks
00:10:12.443           =                       sectsz=512   attr=2, projid32bit=1
00:10:12.443           =                       crc=1        finobt=1, sparse=1, rmapbt=0
00:10:12.443           =                       reflink=1    bigtime=1 inobtcount=1 nrext64=0
00:10:12.443  data     =                       bsize=4096   blocks=130560, imaxpct=25
00:10:12.443           =                       sunit=0      swidth=0 blks
00:10:12.443  naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
00:10:12.443  log      =internal log           bsize=4096   blocks=16384, version=2
00:10:12.443           =                       sectsz=512   sunit=0 blks, lazy-count=1
00:10:12.443  realtime =none                   extsz=4096   blocks=0, rtextents=0
00:10:13.378  Discarding blocks...Done.
00:10:13.378   04:00:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0
00:10:13.378   04:00:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:10:15.910   04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:10:15.910   04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync
00:10:15.910   04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:10:15.910   04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync
00:10:15.910   04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0
00:10:15.910   04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:10:15.910   04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 171269
00:10:15.910   04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:10:15.910   04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:10:15.910   04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:10:15.910   04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:10:15.910  
00:10:15.910  real	0m3.562s
00:10:15.910  user	0m0.023s
00:10:15.910  sys	0m0.051s
00:10:15.910   04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:15.910   04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x
00:10:15.910  ************************************
00:10:15.910  END TEST filesystem_in_capsule_xfs
00:10:15.910  ************************************
00:10:15.910   04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
00:10:15.910   04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync
00:10:15.910   04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:10:16.168  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:16.168   04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:10:16.168   04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0
00:10:16.168   04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:10:16.168   04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:10:16.168   04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:10:16.168   04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:10:16.168   04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0
00:10:16.168   04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:10:16.168   04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:16.168   04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:16.168   04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:16.168   04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:10:16.168   04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 171269
00:10:16.168   04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 171269 ']'
00:10:16.168   04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 171269
00:10:16.168    04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname
00:10:16.168   04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:16.168    04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 171269
00:10:16.168   04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:16.168   04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:16.168   04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 171269'
00:10:16.168  killing process with pid 171269
00:10:16.168   04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 171269
00:10:16.168   04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 171269
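The killprocess teardown traced above (common/autotest_common.sh@954-978) checks that the pid is set and alive, inspects the process name, then kills and waits. A hedged sketch of that flow, demonstrated on a throwaway `sleep` child rather than the real reactor_0 target process (`killprocess_sketch` is a hypothetical name for this simplification):

```shell
killprocess_sketch() {
  pid=$1
  [ -n "$pid" ] || return 1                 # @954: refuse an empty pid
  kill -0 "$pid" 2>/dev/null || return 1    # @958: process must exist
  name=$(ps --no-headers -o comm= "$pid")   # @960: name check (reactor_0 in the log)
  echo "killing process with pid $pid ($name)"
  kill "$pid"                               # @973
  wait "$pid" 2>/dev/null                   # @978: reap our own child
  return 0
}
sleep 60 &
killprocess_sketch $!
```

Note that `wait` only reaps children of the current shell; the real helper runs in the same shell that launched the nvmf target, so the wait succeeds there too.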
00:10:16.428   04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid=
00:10:16.428  
00:10:16.428  real	0m17.158s
00:10:16.428  user	1m6.453s
00:10:16.428  sys	0m2.127s
00:10:16.428   04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:16.428   04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:16.428  ************************************
00:10:16.428  END TEST nvmf_filesystem_in_capsule
00:10:16.428  ************************************
00:10:16.705   04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini
00:10:16.705   04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:16.705   04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync
00:10:16.705   04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:16.705   04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e
00:10:16.705   04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:16.705   04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:16.705  rmmod nvme_tcp
00:10:16.705  rmmod nvme_fabrics
00:10:16.705  rmmod nvme_keyring
00:10:16.705   04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:16.705   04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e
00:10:16.705   04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0
00:10:16.705   04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:10:16.705   04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:10:16.705   04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:10:16.705   04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:10:16.705   04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr
00:10:16.705   04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save
00:10:16.705   04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:10:16.705   04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore
00:10:16.705   04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:16.705   04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns
00:10:16.705   04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:16.705   04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:16.705    04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:18.612   04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:10:18.612  
00:10:18.612  real	0m41.014s
00:10:18.612  user	2m21.124s
00:10:18.612  sys	0m6.218s
00:10:18.612   04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:18.612   04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:10:18.612  ************************************
00:10:18.612  END TEST nvmf_filesystem
00:10:18.612  ************************************
00:10:18.612   04:00:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp
00:10:18.612   04:00:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:18.612   04:00:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:18.612   04:00:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:10:18.612  ************************************
00:10:18.612  START TEST nvmf_target_discovery
00:10:18.612  ************************************
00:10:18.612   04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp
00:10:18.874  * Looking for test storage...
00:10:18.874  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:10:18.874     04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version
00:10:18.874     04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-:
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-:
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<'
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:18.874     04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1
00:10:18.874     04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1
00:10:18.874     04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:18.874     04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1
00:10:18.874     04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2
00:10:18.874     04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2
00:10:18.874     04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:18.874     04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0
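The cmp_versions walk traced above (scripts/common.sh@333-368) splits both version strings on ".", "-" and ":" and compares the numeric fields left to right, so "1.15 < 2" holds. A condensed sketch of that comparison, with missing fields defaulted to 1 as an assumption based on the `: 1` default visible in the trace:

```shell
# version_lt A B: succeed (exit 0) iff version A sorts strictly before B,
# mirroring the field-by-field loop in scripts/common.sh.
version_lt() {
  local IFS=.-:                 # split fields on '.', '-' and ':'
  local -a a=($1) b=($2)
  local i n=${#a[@]}
  (( ${#b[@]} > n )) && n=${#b[@]}
  for (( i = 0; i < n; i++ )); do
    local x=${a[i]:-1} y=${b[i]:-1}   # missing field defaults to 1 (assumption)
    (( x > y )) && return 1
    (( x < y )) && return 0
  done
  return 1                      # equal versions are not "less than"
}
version_lt 1.15 2 && echo yes || echo no   # prints yes
```

This is the check the harness uses to decide whether the installed lcov is new enough to take the branch/function coverage options exported just below.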
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:10:18.874  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:18.874  		--rc genhtml_branch_coverage=1
00:10:18.874  		--rc genhtml_function_coverage=1
00:10:18.874  		--rc genhtml_legend=1
00:10:18.874  		--rc geninfo_all_blocks=1
00:10:18.874  		--rc geninfo_unexecuted_blocks=1
00:10:18.874  		
00:10:18.874  		'
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:10:18.874  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:18.874  		--rc genhtml_branch_coverage=1
00:10:18.874  		--rc genhtml_function_coverage=1
00:10:18.874  		--rc genhtml_legend=1
00:10:18.874  		--rc geninfo_all_blocks=1
00:10:18.874  		--rc geninfo_unexecuted_blocks=1
00:10:18.874  		
00:10:18.874  		'
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:10:18.874  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:18.874  		--rc genhtml_branch_coverage=1
00:10:18.874  		--rc genhtml_function_coverage=1
00:10:18.874  		--rc genhtml_legend=1
00:10:18.874  		--rc geninfo_all_blocks=1
00:10:18.874  		--rc geninfo_unexecuted_blocks=1
00:10:18.874  		
00:10:18.874  		'
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:10:18.874  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:18.874  		--rc genhtml_branch_coverage=1
00:10:18.874  		--rc genhtml_function_coverage=1
00:10:18.874  		--rc genhtml_legend=1
00:10:18.874  		--rc geninfo_all_blocks=1
00:10:18.874  		--rc geninfo_unexecuted_blocks=1
00:10:18.874  		
00:10:18.874  		'
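The `LCOV_OPTS`/`LCOV` variables assembled above are lcovrc overrides that turn on branch and function coverage for both capture (`lcov`) and report generation (`genhtml`). A sketch of how such options would typically be consumed downstream (directory and file names here are illustrative, not taken from this run):

```shell
# Illustrative use of the --rc overrides exported above.
lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
     --capture --directory build/ --output-file cov.info
genhtml --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
        --rc genhtml_legend=1 --rc geninfo_all_blocks=1 \
        -o cov_html cov.info
```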
00:10:18.874   04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:10:18.874     04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:18.874     04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:18.874    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:10:18.874     04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob
00:10:18.874     04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:18.874     04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:18.874     04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:18.874      04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:18.875      04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:18.875      04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:18.875      04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH
00:10:18.875      04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:18.875    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0
00:10:18.875    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:10:18.875    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:10:18.875    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:18.875    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:18.875    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:18.875    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:10:18.875  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:10:18.875    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:10:18.875    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:10:18.875    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0
00:10:18.875   04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400
00:10:18.875   04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512
00:10:18.875   04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430
00:10:18.875   04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme
00:10:18.875   04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit
00:10:18.875   04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:10:18.875   04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:18.875   04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs
00:10:18.875   04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no
00:10:18.875   04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns
00:10:18.875   04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:18.875   04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:18.875    04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:18.875   04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:10:18.875   04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:10:18.875   04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable
00:10:18.875   04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=()
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=()
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=()
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=()
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=()
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=()
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=()
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:10:21.415  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:10:21.415  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:10:21.415  Found net devices under 0000:0a:00.0: cvl_0_0
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:10:21.415  Found net devices under 0000:0a:00.1: cvl_0_1
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes
00:10:21.415   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:10:21.416  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:21.416  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms
00:10:21.416  
00:10:21.416  --- 10.0.0.2 ping statistics ---
00:10:21.416  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:21.416  rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:21.416  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:21.416  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms
00:10:21.416  
00:10:21.416  --- 10.0.0.1 ping statistics ---
00:10:21.416  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:21.416  rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms
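The bring-up traced above moves one physical port (cvl_0_0) into a fresh network namespace, addresses both ends on 10.0.0.0/24, opens port 4420 in iptables, and verifies reachability in both directions with ping. The same topology can be sketched without physical NICs using a veth pair — names below are illustrative and the commands require root:

```shell
# Sketch of the namespace topology above, built from a veth pair
# instead of the two physical cvl_0_* ports (illustrative; run as root).
ip netns add tgt_ns                            # target-side namespace
ip link add veth_init type veth peer name veth_tgt
ip link set veth_tgt netns tgt_ns              # move target end inside
ip addr add 10.0.0.1/24 dev veth_init          # initiator side
ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
ip link set veth_init up
ip netns exec tgt_ns ip link set veth_tgt up
ip netns exec tgt_ns ip link set lo up
ping -c 1 10.0.0.2                             # initiator -> target
ip netns exec tgt_ns ping -c 1 10.0.0.1        # target -> initiator
```

Running the target inside the namespace (as `NVMF_TARGET_NS_CMD` does with `ip netns exec`) gives it an isolated stack, so the initiator traffic genuinely crosses a link rather than looping through lo.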
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=175433
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 175433
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 175433 ']'
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:21.416  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:21.416   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:21.416  [2024-12-09 04:00:49.746362] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:10:21.416  [2024-12-09 04:00:49.746443] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:21.416  [2024-12-09 04:00:49.814685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:21.416  [2024-12-09 04:00:49.869200] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:21.416  [2024-12-09 04:00:49.869258] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:21.416  [2024-12-09 04:00:49.869293] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:21.416  [2024-12-09 04:00:49.869305] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:21.416  [2024-12-09 04:00:49.869314] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:10:21.416  [2024-12-09 04:00:49.870897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:21.416  [2024-12-09 04:00:49.871004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:21.416  [2024-12-09 04:00:49.871083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:10:21.416  [2024-12-09 04:00:49.871086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:21.674   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:21.674   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0
00:10:21.674   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:10:21.674   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:21.674   04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:21.674  [2024-12-09 04:00:50.020838] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.674    04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:21.674  Null1
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:21.674  [2024-12-09 04:00:50.076477] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:21.674  Null2
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:21.674  Null3
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:21.674  Null4
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:21.674   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.675   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
00:10:21.675   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.675   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:21.675   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
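The trace above shows discovery.sh running four rounds of RPCs (lines 26-30 of the script): create a null bdev, create a subsystem, attach the namespace, add a TCP listener. A minimal dry-run sketch of that sequence, reconstructed from the commands visible in the log — it only prints the RPC invocations and does not contact a target:

```shell
#!/usr/bin/env sh
# Dry-run sketch of the per-iteration RPC sequence traced above.
# Commands are copied from the log; %014d reproduces the zero-padded
# serial numbers (SPDK00000000000001 ... SPDK00000000000004).
for i in 1 2 3 4; do
  printf 'rpc_cmd bdev_null_create Null%s 102400 512\n' "$i"
  printf 'rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode%s -a -s SPDK%014d\n' "$i" "$i"
  printf 'rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode%s Null%s\n' "$i" "$i"
  printf 'rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode%s -t tcp -a 10.0.0.2 -s 4420\n' "$i"
done
```

After the loop, the script also adds a discovery listener and a referral on port 4430, as the next trace lines show.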
00:10:21.675   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420
00:10:21.932  
00:10:21.932  Discovery Log Number of Records 6, Generation counter 6
00:10:21.932  =====Discovery Log Entry 0======
00:10:21.932  trtype:  tcp
00:10:21.932  adrfam:  ipv4
00:10:21.932  subtype: current discovery subsystem
00:10:21.932  treq:    not required
00:10:21.932  portid:  0
00:10:21.932  trsvcid: 4420
00:10:21.932  subnqn:  nqn.2014-08.org.nvmexpress.discovery
00:10:21.932  traddr:  10.0.0.2
00:10:21.932  eflags:  explicit discovery connections, duplicate discovery information
00:10:21.932  sectype: none
00:10:21.932  =====Discovery Log Entry 1======
00:10:21.932  trtype:  tcp
00:10:21.932  adrfam:  ipv4
00:10:21.932  subtype: nvme subsystem
00:10:21.932  treq:    not required
00:10:21.932  portid:  0
00:10:21.932  trsvcid: 4420
00:10:21.932  subnqn:  nqn.2016-06.io.spdk:cnode1
00:10:21.932  traddr:  10.0.0.2
00:10:21.932  eflags:  none
00:10:21.932  sectype: none
00:10:21.932  =====Discovery Log Entry 2======
00:10:21.932  trtype:  tcp
00:10:21.932  adrfam:  ipv4
00:10:21.932  subtype: nvme subsystem
00:10:21.932  treq:    not required
00:10:21.932  portid:  0
00:10:21.932  trsvcid: 4420
00:10:21.932  subnqn:  nqn.2016-06.io.spdk:cnode2
00:10:21.932  traddr:  10.0.0.2
00:10:21.932  eflags:  none
00:10:21.932  sectype: none
00:10:21.932  =====Discovery Log Entry 3======
00:10:21.932  trtype:  tcp
00:10:21.932  adrfam:  ipv4
00:10:21.932  subtype: nvme subsystem
00:10:21.932  treq:    not required
00:10:21.932  portid:  0
00:10:21.932  trsvcid: 4420
00:10:21.932  subnqn:  nqn.2016-06.io.spdk:cnode3
00:10:21.932  traddr:  10.0.0.2
00:10:21.932  eflags:  none
00:10:21.932  sectype: none
00:10:21.932  =====Discovery Log Entry 4======
00:10:21.932  trtype:  tcp
00:10:21.932  adrfam:  ipv4
00:10:21.932  subtype: nvme subsystem
00:10:21.932  treq:    not required
00:10:21.932  portid:  0
00:10:21.932  trsvcid: 4420
00:10:21.932  subnqn:  nqn.2016-06.io.spdk:cnode4
00:10:21.932  traddr:  10.0.0.2
00:10:21.932  eflags:  none
00:10:21.932  sectype: none
00:10:21.932  =====Discovery Log Entry 5======
00:10:21.932  trtype:  tcp
00:10:21.932  adrfam:  ipv4
00:10:21.932  subtype: discovery subsystem referral
00:10:21.932  treq:    not required
00:10:21.932  portid:  0
00:10:21.932  trsvcid: 4430
00:10:21.932  subnqn:  nqn.2014-08.org.nvmexpress.discovery
00:10:21.932  traddr:  10.0.0.2
00:10:21.932  eflags:  none
00:10:21.932  sectype: none
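The six-entry discovery log printed by `nvme discover` is plain text and can be post-processed with standard tools. A small sketch (data abbreviated from the output above; the awk field split relies on the `subnqn:  <value>` layout shown in the log):

```shell
#!/usr/bin/env sh
# Extract the subnqn of each discovery log entry from nvme-discover output.
# Entry data below is copied (abbreviated) from the log above.
awk '/^subnqn:/ {print $2}' <<'EOF'
=====Discovery Log Entry 0======
subnqn:  nqn.2014-08.org.nvmexpress.discovery
=====Discovery Log Entry 1======
subnqn:  nqn.2016-06.io.spdk:cnode1
=====Discovery Log Entry 5======
subnqn:  nqn.2014-08.org.nvmexpress.discovery
EOF
```

This prints one NQN per entry, which is enough to confirm the four subsystems, the current discovery subsystem, and the port-4430 referral are all advertised.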
00:10:21.932   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:10:21.932  Perform nvmf subsystem discovery via RPC
00:10:21.932   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:10:21.932   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.932   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:21.932  [
00:10:21.932  {
00:10:21.932  "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:10:21.932  "subtype": "Discovery",
00:10:21.932  "listen_addresses": [
00:10:21.932  {
00:10:21.932  "trtype": "TCP",
00:10:21.932  "adrfam": "IPv4",
00:10:21.932  "traddr": "10.0.0.2",
00:10:21.932  "trsvcid": "4420"
00:10:21.932  }
00:10:21.932  ],
00:10:21.932  "allow_any_host": true,
00:10:21.932  "hosts": []
00:10:21.932  },
00:10:21.932  {
00:10:21.932  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:10:21.932  "subtype": "NVMe",
00:10:21.932  "listen_addresses": [
00:10:21.932  {
00:10:21.932  "trtype": "TCP",
00:10:21.932  "adrfam": "IPv4",
00:10:21.932  "traddr": "10.0.0.2",
00:10:21.932  "trsvcid": "4420"
00:10:21.932  }
00:10:21.932  ],
00:10:21.932  "allow_any_host": true,
00:10:21.932  "hosts": [],
00:10:21.932  "serial_number": "SPDK00000000000001",
00:10:21.932  "model_number": "SPDK bdev Controller",
00:10:21.932  "max_namespaces": 32,
00:10:21.932  "min_cntlid": 1,
00:10:21.932  "max_cntlid": 65519,
00:10:21.932  "namespaces": [
00:10:21.932  {
00:10:21.932  "nsid": 1,
00:10:21.932  "bdev_name": "Null1",
00:10:21.932  "name": "Null1",
00:10:21.932  "nguid": "59B1B72CF6084BECB8FE1895461A5878",
00:10:21.932  "uuid": "59b1b72c-f608-4bec-b8fe-1895461a5878"
00:10:21.932  }
00:10:21.932  ]
00:10:21.932  },
00:10:21.932  {
00:10:21.932  "nqn": "nqn.2016-06.io.spdk:cnode2",
00:10:21.932  "subtype": "NVMe",
00:10:21.932  "listen_addresses": [
00:10:21.932  {
00:10:21.932  "trtype": "TCP",
00:10:21.932  "adrfam": "IPv4",
00:10:21.932  "traddr": "10.0.0.2",
00:10:21.932  "trsvcid": "4420"
00:10:21.932  }
00:10:21.932  ],
00:10:21.932  "allow_any_host": true,
00:10:21.932  "hosts": [],
00:10:21.932  "serial_number": "SPDK00000000000002",
00:10:21.932  "model_number": "SPDK bdev Controller",
00:10:21.932  "max_namespaces": 32,
00:10:21.932  "min_cntlid": 1,
00:10:21.932  "max_cntlid": 65519,
00:10:21.932  "namespaces": [
00:10:21.932  {
00:10:21.932  "nsid": 1,
00:10:21.932  "bdev_name": "Null2",
00:10:21.932  "name": "Null2",
00:10:21.932  "nguid": "13B3F901C7B94C9B9B999792654FD38A",
00:10:21.932  "uuid": "13b3f901-c7b9-4c9b-9b99-9792654fd38a"
00:10:21.932  }
00:10:21.932  ]
00:10:21.932  },
00:10:21.932  {
00:10:21.932  "nqn": "nqn.2016-06.io.spdk:cnode3",
00:10:21.932  "subtype": "NVMe",
00:10:21.932  "listen_addresses": [
00:10:21.932  {
00:10:21.932  "trtype": "TCP",
00:10:21.932  "adrfam": "IPv4",
00:10:21.932  "traddr": "10.0.0.2",
00:10:21.932  "trsvcid": "4420"
00:10:21.932  }
00:10:21.932  ],
00:10:21.932  "allow_any_host": true,
00:10:21.932  "hosts": [],
00:10:21.932  "serial_number": "SPDK00000000000003",
00:10:21.932  "model_number": "SPDK bdev Controller",
00:10:21.932  "max_namespaces": 32,
00:10:21.932  "min_cntlid": 1,
00:10:21.932  "max_cntlid": 65519,
00:10:21.932  "namespaces": [
00:10:21.932  {
00:10:21.932  "nsid": 1,
00:10:21.932  "bdev_name": "Null3",
00:10:21.932  "name": "Null3",
00:10:21.932  "nguid": "0FC8C464AF99483A93989EDAAE782308",
00:10:21.932  "uuid": "0fc8c464-af99-483a-9398-9edaae782308"
00:10:21.932  }
00:10:21.932  ]
00:10:21.932  },
00:10:21.932  {
00:10:21.932  "nqn": "nqn.2016-06.io.spdk:cnode4",
00:10:21.932  "subtype": "NVMe",
00:10:21.932  "listen_addresses": [
00:10:21.932  {
00:10:21.932  "trtype": "TCP",
00:10:21.933  "adrfam": "IPv4",
00:10:21.933  "traddr": "10.0.0.2",
00:10:21.933  "trsvcid": "4420"
00:10:21.933  }
00:10:21.933  ],
00:10:21.933  "allow_any_host": true,
00:10:21.933  "hosts": [],
00:10:21.933  "serial_number": "SPDK00000000000004",
00:10:21.933  "model_number": "SPDK bdev Controller",
00:10:21.933  "max_namespaces": 32,
00:10:21.933  "min_cntlid": 1,
00:10:21.933  "max_cntlid": 65519,
00:10:21.933  "namespaces": [
00:10:21.933  {
00:10:21.933  "nsid": 1,
00:10:21.933  "bdev_name": "Null4",
00:10:21.933  "name": "Null4",
00:10:21.933  "nguid": "D0113796ACA144E180DC7291A3991365",
00:10:21.933  "uuid": "d0113796-aca1-44e1-80dc-7291a3991365"
00:10:21.933  }
00:10:21.933  ]
00:10:21.933  }
00:10:21.933  ]
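The `nvmf_get_subsystems` RPC returns the JSON array shown above. `jq` is the usual tool for this (the script itself uses `jq -r '.[].name'` later for bdevs), but a dependency-free sed sketch over an abbreviated copy of that output also works for pulling out the subsystem NQNs:

```shell
#!/usr/bin/env sh
# Pull the "nqn" field of each subsystem out of nvmf_get_subsystems JSON.
# The fragment below is abbreviated from the RPC output above; sed keeps
# the sketch free of a jq dependency (jq would be the idiomatic choice).
sed -n 's/.*"nqn": "\([^"]*\)".*/\1/p' <<'EOF'
[
{
"nqn": "nqn.2014-08.org.nvmexpress.discovery",
"subtype": "Discovery"
},
{
"nqn": "nqn.2016-06.io.spdk:cnode1",
"subtype": "NVMe"
}
]
EOF
```

One NQN per line comes out, matching the `"nqn"` keys in the full response.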
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.933    04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:21.933   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.933    04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs
00:10:21.933    04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name'
00:10:21.933    04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.933    04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:21.933    04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:22.190   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs=
00:10:22.190   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']'
00:10:22.190   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT
00:10:22.190   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini
00:10:22.190   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:22.190   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync
00:10:22.190   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:22.190   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e
00:10:22.190   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:22.190   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:22.190  rmmod nvme_tcp
00:10:22.190  rmmod nvme_fabrics
00:10:22.191  rmmod nvme_keyring
00:10:22.191   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:22.191   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e
00:10:22.191   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0
00:10:22.191   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 175433 ']'
00:10:22.191   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 175433
00:10:22.191   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 175433 ']'
00:10:22.191   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 175433
00:10:22.191    04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname
00:10:22.191   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:22.191    04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 175433
00:10:22.191   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:22.191   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:22.191   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 175433'
00:10:22.191  killing process with pid 175433
00:10:22.191   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 175433
00:10:22.191   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 175433
00:10:22.451   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:10:22.451   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:10:22.451   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:10:22.451   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr
00:10:22.451   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save
00:10:22.451   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:10:22.451   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore
00:10:22.451   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:22.451   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns
00:10:22.451   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:22.451   04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:22.451    04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:24.358   04:00:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:10:24.358  
00:10:24.358  real	0m5.705s
00:10:24.358  user	0m4.742s
00:10:24.358  sys	0m2.021s
00:10:24.358   04:00:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:24.358   04:00:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:24.358  ************************************
00:10:24.358  END TEST nvmf_target_discovery
00:10:24.358  ************************************
00:10:24.358   04:00:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:10:24.358   04:00:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:24.358   04:00:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:24.358   04:00:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:10:24.616  ************************************
00:10:24.616  START TEST nvmf_referrals
00:10:24.616  ************************************
00:10:24.616   04:00:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:10:24.616  * Looking for test storage...
00:10:24.616  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:24.616    04:00:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:10:24.616     04:00:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version
00:10:24.616     04:00:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:10:24.616    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-:
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-:
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<'
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:24.617     04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1
00:10:24.617     04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1
00:10:24.617     04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:24.617     04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1
00:10:24.617     04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2
00:10:24.617     04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2
00:10:24.617     04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:24.617     04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0
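The trace above is `scripts/common.sh` deciding whether the installed lcov predates 2.x: `cmp_versions` splits each version on `.-:` into arrays and compares component-wise, here concluding 1.15 < 2. A compact sketch of the same less-than check — note this uses GNU `sort -V` as a stand-in for the script's array walk, which is an assumption about available tooling, not the script's actual method:

```shell
#!/usr/bin/env sh
# Sketch of a version less-than test equivalent to the cmp_versions trace
# above. Uses sort -V (GNU coreutils) instead of component arrays.
version_lt() {
  [ "$1" != "$2" ] &&
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
version_lt 1.15 2 && echo "1.15 < 2"
```

The script takes the 1.x branch here, which selects the older lcov option set used in the LCOV_OPTS export that follows.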
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:10:24.617  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:24.617  		--rc genhtml_branch_coverage=1
00:10:24.617  		--rc genhtml_function_coverage=1
00:10:24.617  		--rc genhtml_legend=1
00:10:24.617  		--rc geninfo_all_blocks=1
00:10:24.617  		--rc geninfo_unexecuted_blocks=1
00:10:24.617  		
00:10:24.617  		'
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:10:24.617  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:24.617  		--rc genhtml_branch_coverage=1
00:10:24.617  		--rc genhtml_function_coverage=1
00:10:24.617  		--rc genhtml_legend=1
00:10:24.617  		--rc geninfo_all_blocks=1
00:10:24.617  		--rc geninfo_unexecuted_blocks=1
00:10:24.617  		
00:10:24.617  		'
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:10:24.617  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:24.617  		--rc genhtml_branch_coverage=1
00:10:24.617  		--rc genhtml_function_coverage=1
00:10:24.617  		--rc genhtml_legend=1
00:10:24.617  		--rc geninfo_all_blocks=1
00:10:24.617  		--rc geninfo_unexecuted_blocks=1
00:10:24.617  		
00:10:24.617  		'
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:10:24.617  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:24.617  		--rc genhtml_branch_coverage=1
00:10:24.617  		--rc genhtml_function_coverage=1
00:10:24.617  		--rc genhtml_legend=1
00:10:24.617  		--rc geninfo_all_blocks=1
00:10:24.617  		--rc geninfo_unexecuted_blocks=1
00:10:24.617  		
00:10:24.617  		'
00:10:24.617   04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:10:24.617     04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:24.617     04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:10:24.617     04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob
00:10:24.617     04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:24.617     04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:24.617     04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:24.617      04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:24.617      04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:24.617      04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:24.617      04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH
00:10:24.617      04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:10:24.617  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:10:24.617    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0
00:10:24.617   04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2
00:10:24.617   04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3
00:10:24.617   04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4
00:10:24.617   04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430
00:10:24.618   04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:10:24.618   04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1
00:10:24.618   04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit
00:10:24.618   04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:10:24.618   04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:24.618   04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs
00:10:24.618   04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no
00:10:24.618   04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns
00:10:24.618   04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:24.618   04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:24.618    04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:24.618   04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:10:24.618   04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:10:24.618   04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable
00:10:24.618   04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=()
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=()
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=()
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=()
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=()
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=()
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=()
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:10:27.155  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:10:27.155  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:10:27.155  Found net devices under 0000:0a:00.0: cvl_0_0
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:10:27.155  Found net devices under 0000:0a:00.1: cvl_0_1
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:10:27.155  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:27.155  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms
00:10:27.155  
00:10:27.155  --- 10.0.0.2 ping statistics ---
00:10:27.155  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:27.155  rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:27.155  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:27.155  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms
00:10:27.155  
00:10:27.155  --- 10.0.0.1 ping statistics ---
00:10:27.155  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:27.155  rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms
00:10:27.155   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=177538
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 177538
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 177538 ']'
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:27.156  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:27.156  [2024-12-09 04:00:55.379626] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:10:27.156  [2024-12-09 04:00:55.379732] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:27.156  [2024-12-09 04:00:55.457000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:27.156  [2024-12-09 04:00:55.517221] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:27.156  [2024-12-09 04:00:55.517308] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:27.156  [2024-12-09 04:00:55.517324] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:27.156  [2024-12-09 04:00:55.517335] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:27.156  [2024-12-09 04:00:55.517345] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:10:27.156  [2024-12-09 04:00:55.519036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:27.156  [2024-12-09 04:00:55.519115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:10:27.156  [2024-12-09 04:00:55.519093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:27.156  [2024-12-09 04:00:55.519118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:27.156  [2024-12-09 04:00:55.673879] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:27.156  [2024-12-09 04:00:55.698476] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:27.156   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:27.156    04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals
00:10:27.156    04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length
00:10:27.156    04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:27.156    04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:27.414    04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:27.414   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 ))
00:10:27.414    04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc
00:10:27.414    04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:10:27.414     04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:10:27.414     04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:10:27.414     04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:27.414     04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:10:27.414     04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:27.414     04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:27.414    04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4
00:10:27.414   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]]
00:10:27.414    04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme
00:10:27.414    04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:10:27.414    04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:10:27.414     04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json
00:10:27.414     04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:10:27.414     04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:10:27.414    04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4
00:10:27.414   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]]
00:10:27.415   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
00:10:27.415   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:27.415   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:27.415   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:27.415   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430
00:10:27.415   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:27.415   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:27.415   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:27.415   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430
00:10:27.415   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:27.415   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:27.415   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:27.415    04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals
00:10:27.415    04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:27.415    04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length
00:10:27.415    04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:27.415    04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:27.415   04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 ))
00:10:27.672    04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme
00:10:27.672    04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:10:27.672    04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:10:27.672     04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json
00:10:27.672     04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:10:27.672     04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:10:27.672    04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo
00:10:27.672   04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]]
00:10:27.672   04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
00:10:27.672   04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:27.672   04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:27.672   04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:27.672   04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
00:10:27.672   04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:27.672   04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:27.672   04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:27.672    04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc
00:10:27.672    04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:10:27.672     04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:10:27.672     04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:27.672     04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:10:27.672     04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:27.672     04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:10:27.672     04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:27.930    04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2
00:10:27.930   04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]]
00:10:27.930    04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme
00:10:27.930    04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:10:27.930    04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:10:27.930     04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json
00:10:27.930     04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:10:27.930     04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:10:27.930    04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2
00:10:27.930   04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]]
00:10:27.930    04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem'
00:10:27.930    04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn
00:10:27.930    04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem'
00:10:27.930    04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json
00:10:27.930    04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")'
00:10:28.187   04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]]
00:10:28.187    04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral'
00:10:28.187    04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn
00:10:28.187    04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral'
00:10:28.187    04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json
00:10:28.187    04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")'
00:10:28.445   04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]]
00:10:28.445   04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
00:10:28.445   04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:28.445   04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:28.445   04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:28.445    04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc
00:10:28.445    04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:10:28.445     04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:10:28.445     04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:10:28.445     04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:28.445     04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:28.445     04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:10:28.445     04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:28.445    04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2
00:10:28.445   04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]]
00:10:28.445    04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme
00:10:28.445    04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:10:28.445    04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:10:28.445     04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json
00:10:28.445     04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:10:28.445     04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:10:28.703    04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2
00:10:28.703   04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]]
00:10:28.703    04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem'
00:10:28.703    04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn
00:10:28.703    04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem'
00:10:28.703    04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json
00:10:28.703    04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")'
00:10:28.703   04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]]
00:10:28.703    04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral'
00:10:28.703    04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral'
00:10:28.703    04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn
00:10:28.703    04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json
00:10:28.703    04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")'
00:10:28.960   04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]]
00:10:28.960   04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery
00:10:28.960   04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:28.960   04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:28.960   04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:28.960    04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals
00:10:28.960    04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length
00:10:28.960    04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:28.960    04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:28.960    04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:28.961   04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 ))
00:10:28.961    04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme
00:10:28.961    04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:10:28.961    04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:10:28.961     04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json
00:10:28.961     04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:10:28.961     04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:10:29.219    04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo
00:10:29.219   04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]]
00:10:29.219   04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT
00:10:29.219   04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini
00:10:29.219   04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:29.219   04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync
00:10:29.219   04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:29.219   04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e
00:10:29.219   04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:29.219   04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:29.219  rmmod nvme_tcp
00:10:29.219  rmmod nvme_fabrics
00:10:29.219  rmmod nvme_keyring
00:10:29.219   04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:29.219   04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e
00:10:29.219   04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0
00:10:29.219   04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 177538 ']'
00:10:29.219   04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 177538
00:10:29.219   04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 177538 ']'
00:10:29.219   04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 177538
00:10:29.219    04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname
00:10:29.219   04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:29.219    04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 177538
00:10:29.219   04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:29.219   04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:29.219   04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 177538'
00:10:29.219  killing process with pid 177538
00:10:29.219   04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 177538
00:10:29.219   04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 177538
00:10:29.479   04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:10:29.479   04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:10:29.479   04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:10:29.479   04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr
00:10:29.479   04:00:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save
00:10:29.479   04:00:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:10:29.479   04:00:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore
00:10:29.479   04:00:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:29.479   04:00:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns
00:10:29.479   04:00:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:29.479   04:00:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:29.479    04:00:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:32.015   04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:10:32.015  
00:10:32.015  real	0m7.112s
00:10:32.015  user	0m11.409s
00:10:32.015  sys	0m2.264s
00:10:32.015   04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:32.015   04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:32.015  ************************************
00:10:32.015  END TEST nvmf_referrals
00:10:32.015  ************************************
00:10:32.015   04:01:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp
00:10:32.015   04:01:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:32.015   04:01:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:32.015   04:01:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:10:32.015  ************************************
00:10:32.015  START TEST nvmf_connect_disconnect
00:10:32.015  ************************************
00:10:32.015   04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp
00:10:32.015  * Looking for test storage...
00:10:32.015  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:32.015    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:10:32.015     04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version
00:10:32.015     04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:10:32.015    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:10:32.015    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:32.015    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:32.015    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:32.015    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-:
00:10:32.015    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1
00:10:32.015    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-:
00:10:32.015    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2
00:10:32.015    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<'
00:10:32.015    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2
00:10:32.015    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1
00:10:32.015    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:32.015    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in
00:10:32.015    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1
00:10:32.015    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:32.015    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:32.015     04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1
00:10:32.015     04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1
00:10:32.015     04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:32.015     04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1
00:10:32.015    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1
00:10:32.015     04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2
00:10:32.015     04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2
00:10:32.015     04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:32.015     04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2
00:10:32.015    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2
00:10:32.015    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:32.015    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:32.015    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0
00:10:32.015    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:32.015    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:10:32.015  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:32.015  		--rc genhtml_branch_coverage=1
00:10:32.015  		--rc genhtml_function_coverage=1
00:10:32.015  		--rc genhtml_legend=1
00:10:32.015  		--rc geninfo_all_blocks=1
00:10:32.015  		--rc geninfo_unexecuted_blocks=1
00:10:32.015  		
00:10:32.015  		'
00:10:32.015    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:10:32.015  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:32.015  		--rc genhtml_branch_coverage=1
00:10:32.015  		--rc genhtml_function_coverage=1
00:10:32.015  		--rc genhtml_legend=1
00:10:32.015  		--rc geninfo_all_blocks=1
00:10:32.015  		--rc geninfo_unexecuted_blocks=1
00:10:32.015  		
00:10:32.015  		'
00:10:32.015    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:10:32.015  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:32.015  		--rc genhtml_branch_coverage=1
00:10:32.015  		--rc genhtml_function_coverage=1
00:10:32.015  		--rc genhtml_legend=1
00:10:32.015  		--rc geninfo_all_blocks=1
00:10:32.015  		--rc geninfo_unexecuted_blocks=1
00:10:32.015  		
00:10:32.015  		'
00:10:32.015    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:10:32.015  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:32.015  		--rc genhtml_branch_coverage=1
00:10:32.015  		--rc genhtml_function_coverage=1
00:10:32.015  		--rc genhtml_legend=1
00:10:32.015  		--rc geninfo_all_blocks=1
00:10:32.015  		--rc geninfo_unexecuted_blocks=1
00:10:32.015  		
00:10:32.015  		'
00:10:32.015   04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:10:32.015     04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s
00:10:32.015    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:32.015    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:32.015    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:32.015    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:32.015    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:32.015    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:32.015    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:32.015    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:32.016    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:32.016     04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:32.016    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:10:32.016    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:10:32.016    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:32.016    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:32.016    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:10:32.016    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:32.016    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:10:32.016     04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob
00:10:32.016     04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:32.016     04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:32.016     04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:32.016      04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:32.016      04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:32.016      04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:32.016      04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH
00:10:32.016      04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:32.016    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0
00:10:32.016    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:10:32.016    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:10:32.016    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:32.016    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:32.016    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:32.016    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:10:32.016  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:10:32.016    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:10:32.016    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:10:32.016    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0
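The trace above captures a real shell error: at nvmf/common.sh line 33, `'[' '' -eq 1 ']'` compares an empty string where `[` expects an integer, producing "integer expression expected". A minimal sketch of the failure mode and the usual defensive fix (the variable name here is illustrative, not the actual one in common.sh):

```shell
#!/usr/bin/env bash
# Reproduces the "integer expression expected" failure seen in the trace:
# an empty string used where `[` expects an integer operand.
maybe_flag=""   # illustrative stand-in for the unset option in common.sh

# Fails: `[` returns exit status 2 on the malformed integer comparison.
[ "$maybe_flag" -eq 1 ] 2>/dev/null
echo "unguarded test exit status: $?"

# Defensive form: default the empty/unset value to 0 before comparing.
if [ "${maybe_flag:-0}" -eq 1 ]; then
  echo "flag enabled"
else
  echo "flag disabled (or unset)"
fi
```

In the log this error is harmless because the script tolerates the non-zero status and falls through, which is why the run continues normally afterwards.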
00:10:32.016   04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64
00:10:32.016   04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:10:32.016   04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit
00:10:32.016   04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:10:32.016   04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:32.016   04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs
00:10:32.016   04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no
00:10:32.016   04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns
00:10:32.016   04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:32.016   04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:32.016    04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:32.016   04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:10:32.016   04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:10:32.016   04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable
00:10:32.016   04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=()
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=()
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=()
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=()
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=()
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=()
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=()
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
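The classification step above fills per-family arrays (`e810`, `x722`, `mlx`) from a `pci_bus_cache` associative array keyed by `vendor:device` IDs; unquoted expansion word-splits the cached space-separated bus addresses into one array element per device. A standalone sketch of that pattern, with fabricated bus addresses (the real cache is populated elsewhere in common.sh):

```shell
#!/usr/bin/env bash
# Sketch of the vendor:device -> NIC-family classification seen above.
# pci_bus_cache maps "vendor:device" to space-separated PCI bus addresses;
# the addresses below are fabricated for illustration.
declare -A pci_bus_cache=(
  ["0x8086:0x159b"]="0000:0a:00.0 0000:0a:00.1"  # Intel E810 (as in this run)
  ["0x15b3:0x1017"]="0000:5e:00.0"               # Mellanox ConnectX-5
)

intel=0x8086 mellanox=0x15b3
e810=() mlx=()
e810+=(${pci_bus_cache["$intel:0x159b"]})  # word-splits into 2 elements
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})

echo "e810 count: ${#e810[@]}"   # -> 2
echo "mlx count: ${#mlx[@]}"     # -> 1
```

A missing key expands to nothing, so absent hardware simply contributes zero elements, which is what lets the later `(( ${#pci_devs[@]} == 0 ))` guard work.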
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:10:33.919  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:10:33.919  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:10:33.919  Found net devices under 0000:0a:00.0: cvl_0_0
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:10:33.919  Found net devices under 0000:0a:00.1: cvl_0_1
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
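The lookup above maps a PCI address to its kernel netdev names by globbing `/sys/bus/pci/devices/<addr>/net/`, then strips the path prefix with `##*/`. A self-contained sketch using a fake sysfs tree so it runs without the hardware (the `sysroot` indirection is an assumption for portability; the real script globs `/sys` directly):

```shell
#!/usr/bin/env bash
# Sketch of the sysfs netdev lookup above: a NIC's interface names appear
# as directories under /sys/bus/pci/devices/<addr>/net/.
sysroot=$(mktemp -d)            # fake sysfs root, stands in for /sys/...
pci="0000:0a:00.0"
mkdir -p "$sysroot/$pci/net/cvl_0_0"

pci_net_devs=("$sysroot/$pci/net/"*)      # glob: one entry per netdev dir
pci_net_devs=("${pci_net_devs[@]##*/}")   # strip path, keep interface name
echo "Found net devices under $pci: ${pci_net_devs[*]}"

rm -rf "$sysroot"
```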
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:10:33.919   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:34.178   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:34.178   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:34.178   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:10:34.178   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:34.178   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
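The `ip` commands above build the test topology: one port of the dual-port NIC (cvl_0_0, the target side) is moved into a fresh network namespace, so the initiator on cvl_0_1 must cross the physical link to reach 10.0.0.2 even though both ports sit in one host. Consolidated as a sketch, routed through a dry-run wrapper since applying it for real needs root and these exact interfaces (names and addresses taken from the log):

```shell
#!/usr/bin/env bash
# Consolidated sketch of the namespace topology built above. Commands go
# through a dry-run wrapper; swap it for  run() { "$@"; }  to apply for real.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                          # target port -> ns
run ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
```

The subsequent bidirectional pings in the log (10.0.0.1 <-> 10.0.0.2) verify exactly this topology before the target starts.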
00:10:34.178   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:34.179   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
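Note how the `ipts` helper expands: it appends `-m comment --comment 'SPDK_NVMF:<original args>'` to every rule it inserts, tagging it so teardown can later find and delete all SPDK-added rules. A sketch of that wrapper pattern (echoed rather than executed, since the real command needs root):

```shell
#!/usr/bin/env bash
# Sketch of the ipts wrapper seen above: every inserted rule carries an
# "SPDK_NVMF:<args>" comment, making cleanup a matter of grepping rules
# for that tag. echo = dry-run; drop it to run iptables for real.
ipts() { echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }

ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```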
00:10:34.179   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:10:34.179  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:34.179  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms
00:10:34.179  
00:10:34.179  --- 10.0.0.2 ping statistics ---
00:10:34.179  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:34.179  rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms
00:10:34.179   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:34.179  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:34.179  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms
00:10:34.179  
00:10:34.179  --- 10.0.0.1 ping statistics ---
00:10:34.179  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:34.179  rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms
00:10:34.179   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:34.179   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0
00:10:34.179   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:34.179   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:34.179   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:10:34.179   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:10:34.179   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:34.179   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:10:34.179   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:10:34.179   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF
00:10:34.179   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:10:34.179   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:34.179   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:10:34.179   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=179855
00:10:34.179   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:10:34.179   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 179855
00:10:34.179   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 179855 ']'
00:10:34.179   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:34.179   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:34.179   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:34.179  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:34.179   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:34.179   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:10:34.179  [2024-12-09 04:01:02.656760] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:10:34.179  [2024-12-09 04:01:02.656844] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:34.179  [2024-12-09 04:01:02.726976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:34.438  [2024-12-09 04:01:02.783739] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:34.438  [2024-12-09 04:01:02.783802] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:34.438  [2024-12-09 04:01:02.783816] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:34.438  [2024-12-09 04:01:02.783841] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:34.438  [2024-12-09 04:01:02.783850] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:10:34.438  [2024-12-09 04:01:02.785230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:34.438  [2024-12-09 04:01:02.785346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:34.438  [2024-12-09 04:01:02.785373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:10:34.438  [2024-12-09 04:01:02.785376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:34.438   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:34.438   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0
00:10:34.438   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:10:34.438   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:34.438   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:10:34.438   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:34.438   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
00:10:34.438   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:34.438   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:10:34.438  [2024-12-09 04:01:02.935335] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:34.438   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:34.438    04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512
00:10:34.438    04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:34.438    04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:10:34.438    04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:34.438   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0
00:10:34.438   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:10:34.438   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:34.438   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:10:34.438   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:34.438   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:10:34.438   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:34.438   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:10:34.438   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:34.438   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:10:34.438   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:34.438   04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:10:34.438  [2024-12-09 04:01:02.997841] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
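The target configuration above reduces to five RPCs against the running `nvmf_tgt`: create the TCP transport, create a 64 MiB malloc bdev with 512-byte blocks, create the subsystem, attach the namespace, and add the listener. Consolidated as a sketch (arguments taken from the log; printed rather than executed, since the real calls need a live target on /var/tmp/spdk.sock reachable via scripts/rpc.py):

```shell
#!/usr/bin/env bash
# The RPC sequence that configures the target above, consolidated.
# Dry-run wrapper: replace with the real scripts/rpc.py to apply.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
rpc bdev_malloc_create 64 512                      # 64 MiB, 512 B blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

After the listener RPC the log shows the matching target notice ("NVMe/TCP Target Listening on 10.0.0.2 port 4420"), and the test then loops five connect/disconnect iterations against that subsystem.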
00:10:34.438   04:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:34.438   04:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']'
00:10:34.438   04:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5
00:10:34.438   04:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x
00:10:37.721  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:40.251  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:42.797  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:45.679  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:48.482  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:48.482   04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT
00:10:48.482   04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini
00:10:48.482   04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:48.482   04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync
00:10:48.482   04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:48.482   04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e
00:10:48.482   04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:48.482   04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:48.482  rmmod nvme_tcp
00:10:48.482  rmmod nvme_fabrics
00:10:48.482  rmmod nvme_keyring
00:10:48.482   04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:48.482   04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e
00:10:48.482   04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0
00:10:48.482   04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 179855 ']'
00:10:48.482   04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 179855
00:10:48.482   04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 179855 ']'
00:10:48.482   04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 179855
00:10:48.482    04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname
00:10:48.482   04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:48.482    04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 179855
00:10:48.482   04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:48.482   04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:48.482   04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 179855'
00:10:48.482  killing process with pid 179855
00:10:48.482   04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 179855
00:10:48.482   04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 179855
00:10:48.754   04:01:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:10:48.754   04:01:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:10:48.754   04:01:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:10:48.754   04:01:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr
00:10:48.754   04:01:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save
00:10:48.754   04:01:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:10:48.754   04:01:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore
00:10:48.754   04:01:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:48.754   04:01:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:10:48.754   04:01:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:48.755   04:01:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:48.755    04:01:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:50.775   04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:10:50.775  
00:10:50.775  real	0m19.059s
00:10:50.775  user	0m56.929s
00:10:50.775  sys	0m3.449s
00:10:50.775   04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:50.775   04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:10:50.775  ************************************
00:10:50.775  END TEST nvmf_connect_disconnect
00:10:50.775  ************************************
00:10:50.775   04:01:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp
00:10:50.775   04:01:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:50.775   04:01:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:50.776   04:01:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:10:50.776  ************************************
00:10:50.776  START TEST nvmf_multitarget
00:10:50.776  ************************************
00:10:50.776   04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp
00:10:50.776  * Looking for test storage...
00:10:50.776  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:50.776    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:10:50.776     04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version
00:10:50.776     04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:10:51.100    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:10:51.100    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:51.100    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:51.100    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:51.100    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-:
00:10:51.100    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1
00:10:51.100    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-:
00:10:51.100    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2
00:10:51.100    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<'
00:10:51.100    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2
00:10:51.100    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1
00:10:51.100    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:51.100    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in
00:10:51.100    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1
00:10:51.100    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:51.100    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:51.100     04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1
00:10:51.100     04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1
00:10:51.100     04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:51.100     04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1
00:10:51.100    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1
00:10:51.100     04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2
00:10:51.100     04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2
00:10:51.100     04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:51.100     04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2
00:10:51.100    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2
00:10:51.100    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:51.100    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:51.100    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0
00:10:51.100    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:51.100    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:10:51.100  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:51.100  		--rc genhtml_branch_coverage=1
00:10:51.100  		--rc genhtml_function_coverage=1
00:10:51.100  		--rc genhtml_legend=1
00:10:51.100  		--rc geninfo_all_blocks=1
00:10:51.100  		--rc geninfo_unexecuted_blocks=1
00:10:51.100  		
00:10:51.100  		'
00:10:51.100    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:10:51.100  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:51.100  		--rc genhtml_branch_coverage=1
00:10:51.100  		--rc genhtml_function_coverage=1
00:10:51.100  		--rc genhtml_legend=1
00:10:51.100  		--rc geninfo_all_blocks=1
00:10:51.100  		--rc geninfo_unexecuted_blocks=1
00:10:51.100  		
00:10:51.100  		'
00:10:51.100    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:10:51.100  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:51.100  		--rc genhtml_branch_coverage=1
00:10:51.100  		--rc genhtml_function_coverage=1
00:10:51.100  		--rc genhtml_legend=1
00:10:51.100  		--rc geninfo_all_blocks=1
00:10:51.100  		--rc geninfo_unexecuted_blocks=1
00:10:51.100  		
00:10:51.100  		'
00:10:51.100    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:10:51.100  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:51.100  		--rc genhtml_branch_coverage=1
00:10:51.101  		--rc genhtml_function_coverage=1
00:10:51.101  		--rc genhtml_legend=1
00:10:51.101  		--rc geninfo_all_blocks=1
00:10:51.101  		--rc geninfo_unexecuted_blocks=1
00:10:51.101  		
00:10:51.101  		'
00:10:51.101   04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:10:51.101     04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s
00:10:51.101    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:51.101    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:51.101    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:51.101    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:51.101    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:51.101    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:51.101    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:51.101    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:51.101    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:51.101     04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:51.101    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:10:51.101    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:10:51.101    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:51.101    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:51.101    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:10:51.101    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:51.101    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:10:51.101     04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob
00:10:51.101     04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:51.101     04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:51.101     04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:51.101      04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:51.101      04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:51.101      04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:51.101      04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH
00:10:51.101      04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:51.101    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0
00:10:51.101    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:10:51.101    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:10:51.101    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:51.101    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:51.101    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:51.101    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:10:51.101  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:10:51.101    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:10:51.101    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:10:51.101    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0
00:10:51.101   04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
00:10:51.101   04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit
00:10:51.101   04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:10:51.101   04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:51.101   04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs
00:10:51.101   04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no
00:10:51.101   04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns
00:10:51.101   04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:51.101   04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:51.101    04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:51.101   04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:10:51.101   04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:10:51.101   04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable
00:10:51.101   04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=()
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=()
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=()
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=()
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=()
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=()
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=()
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:10:53.230  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:10:53.230  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:53.230   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:10:53.231  Found net devices under 0000:0a:00.0: cvl_0_0
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:10:53.231  Found net devices under 0000:0a:00.1: cvl_0_1
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:10:53.231  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:53.231  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms
00:10:53.231  
00:10:53.231  --- 10.0.0.2 ping statistics ---
00:10:53.231  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:53.231  rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:53.231  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:53.231  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms
00:10:53.231  
00:10:53.231  --- 10.0.0.1 ping statistics ---
00:10:53.231  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:53.231  rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms
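The netns plumbing traced above (flush both interfaces, move the target-side port into a namespace, assign 10.0.0.1/24 outside and 10.0.0.2/24 inside, bring links up, then ping both ways) can be sketched as a dry-run wrapper. This is a simplified illustration of what nvmf/common.sh does, not the actual function; `run` only echoes each command so the sequence can be inspected without root.

```shell
# Dry-run sketch of the namespace setup performed by nvmf/common.sh.
# Interface names and addresses mirror the log; swap the body of run()
# for `"$@"` to actually execute (requires root).
setup_nvmf_netns() {
    local target_if=$1 initiator_if=$2 ns="${1}_ns_spdk"
    run() { echo "$*"; }
    run ip -4 addr flush "$target_if"
    run ip -4 addr flush "$initiator_if"
    run ip netns add "$ns"
    run ip link set "$target_if" netns "$ns"   # target side lives in the netns
    run ip addr add 10.0.0.1/24 dev "$initiator_if"
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    run ip link set "$initiator_if" up
    run ip netns exec "$ns" ip link set "$target_if" up
    run ip netns exec "$ns" ip link set lo up
}
setup_nvmf_netns cvl_0_0 cvl_0_1
```

The ping in both directions afterwards confirms the veth pair is reachable across the namespace boundary before the target app is started inside it.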
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=183669
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 183669
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 183669 ']'
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:53.231  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:53.231   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:10:53.231  [2024-12-09 04:01:21.726665] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:10:53.231  [2024-12-09 04:01:21.726762] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:53.231  [2024-12-09 04:01:21.799923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:53.510  [2024-12-09 04:01:21.861287] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:53.510  [2024-12-09 04:01:21.861361] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:53.510  [2024-12-09 04:01:21.861389] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:53.510  [2024-12-09 04:01:21.861401] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:53.510  [2024-12-09 04:01:21.861411] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:10:53.510  [2024-12-09 04:01:21.863020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:53.510  [2024-12-09 04:01:21.863085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:53.510  [2024-12-09 04:01:21.863150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:10:53.510  [2024-12-09 04:01:21.863154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:53.510   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:53.510   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0
00:10:53.510   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:10:53.510   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:53.510   04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:10:53.511   04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:53.511   04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:10:53.511    04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets
00:10:53.511    04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length
00:10:53.774   04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']'
00:10:53.774   04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32
00:10:53.774  "nvmf_tgt_1"
00:10:53.774   04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32
00:10:54.032  "nvmf_tgt_2"
00:10:54.032    04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets
00:10:54.032    04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length
00:10:54.032   04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']'
00:10:54.032   04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1
00:10:54.032  true
00:10:54.289   04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2
00:10:54.289  true
00:10:54.289    04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets
00:10:54.289    04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length
00:10:54.289   04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']'
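The check pattern repeated at @21, @28, and @35 above is: fetch the target list over RPC, take `jq length`, and fail the test if the count differs from the expectation. A minimal runnable form of that check, with the RPC call stubbed by a literal JSON array (the real test uses multitarget_rpc.py nvmf_get_targets):

```shell
# Assert that a JSON array of targets has the expected length, in the
# style of multitarget.sh's `[ "$(... | jq length)" != N ]` checks.
# The JSON argument stands in for the nvmf_get_targets RPC output.
check_target_count() {
    local json=$1 expected=$2 n
    n=$(echo "$json" | jq length) || return 1
    if [ "$n" != "$expected" ]; then
        echo "expected $expected targets, got $n" >&2
        return 1
    fi
}
# 1 default target + nvmf_tgt_1 + nvmf_tgt_2, as after the two creates:
check_target_count '["nvmf_tgt", "nvmf_tgt_1", "nvmf_tgt_2"]' 3
```

This mirrors the trace: the count goes 1 → 3 after the two `nvmf_create_target` calls and back to 1 after both deletes.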
00:10:54.289   04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:10:54.289   04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini
00:10:54.289   04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:54.289   04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync
00:10:54.289   04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:54.289   04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e
00:10:54.289   04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:54.289   04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:54.289  rmmod nvme_tcp
00:10:54.289  rmmod nvme_fabrics
00:10:54.547  rmmod nvme_keyring
00:10:54.547   04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:54.547   04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e
00:10:54.547   04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0
00:10:54.547   04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 183669 ']'
00:10:54.547   04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 183669
00:10:54.547   04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 183669 ']'
00:10:54.547   04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 183669
00:10:54.547    04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname
00:10:54.547   04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:54.547    04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 183669
00:10:54.547   04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:54.547   04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:54.547   04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 183669'
00:10:54.547  killing process with pid 183669
00:10:54.547   04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 183669
00:10:54.547   04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 183669
00:10:54.807   04:01:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:10:54.808   04:01:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:10:54.808   04:01:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:10:54.808   04:01:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr
00:10:54.808   04:01:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save
00:10:54.808   04:01:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:10:54.808   04:01:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore
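The `iptr` helper traced above undoes every firewall rule the suite added by round-tripping the ruleset: `iptables-save | grep -v SPDK_NVMF | iptables-restore`. This works because each rule was inserted with an `-m comment --comment 'SPDK_NVMF:...'` tag (see the @790 line earlier). The filtering step can be exercised on a plain-text dump without touching a real firewall; the dump below is illustrative, not taken from the log:

```shell
# Filter SPDK-tagged rules out of an iptables-save style dump.
strip_spdk_rules() {
    grep -v SPDK_NVMF
}

# Hypothetical two-rule dump: one unrelated rule, one SPDK-tagged rule.
dump='-A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF: test rule"'

echo "$dump" | strip_spdk_rules
```

Tagging rules with a known comment and removing them by filtering the saved ruleset is safer than trying to replay exact `-D` deletions, since it also catches rules left behind by a crashed earlier run.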
00:10:54.808   04:01:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:54.808   04:01:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns
00:10:54.808   04:01:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:54.808   04:01:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:54.808    04:01:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:56.717   04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:10:56.717  
00:10:56.717  real	0m6.001s
00:10:56.717  user	0m6.864s
00:10:56.717  sys	0m2.052s
00:10:56.717   04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:56.717   04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:10:56.717  ************************************
00:10:56.717  END TEST nvmf_multitarget
00:10:56.717  ************************************
00:10:56.717   04:01:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp
00:10:56.717   04:01:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:56.717   04:01:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:56.717   04:01:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:10:56.717  ************************************
00:10:56.717  START TEST nvmf_rpc
00:10:56.717  ************************************
00:10:56.717   04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp
00:10:56.977  * Looking for test storage...
00:10:56.977  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:56.977    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:10:56.977     04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:10:56.977     04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:10:56.977    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:10:56.977    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:56.977    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:56.977    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:56.977    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:10:56.977    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:10:56.977    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:10:56.977    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:10:56.977    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:10:56.977    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:10:56.977    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:10:56.977    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:56.977    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in
00:10:56.977    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1
00:10:56.977    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:56.977    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:56.977     04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1
00:10:56.977     04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1
00:10:56.977     04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:56.977     04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1
00:10:56.977    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:10:56.977     04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2
00:10:56.977     04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2
00:10:56.977     04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:56.977     04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2
00:10:56.977    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:10:56.977    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:56.977    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:56.977    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0
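The `lt 1.15 2` / `cmp_versions` trace above splits both version strings on `.`/`-`/`:`, pads the shorter one, and compares component by component. A condensed re-implementation of that less-than logic (a sketch of the flow in scripts/common.sh, not the exact function, and splitting only on dots):

```shell
# Component-wise version less-than, mirroring the cmp_versions trace:
# returns 0 (true) when $1 < $2.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # pad missing components with 0
        (( x > y )) && return 1
        (( x < y )) && return 0
    done
    return 1   # equal, so not less-than
}
version_lt 1.15 2 && echo "1.15 < 2"
```

Numeric per-component comparison is what makes `1.2 < 1.10` come out true, which a plain string comparison would get wrong.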
00:10:56.977    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:56.977    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:10:56.977  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:56.977  		--rc genhtml_branch_coverage=1
00:10:56.977  		--rc genhtml_function_coverage=1
00:10:56.977  		--rc genhtml_legend=1
00:10:56.977  		--rc geninfo_all_blocks=1
00:10:56.977  		--rc geninfo_unexecuted_blocks=1
00:10:56.977  		
00:10:56.977  		'
00:10:56.977    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:10:56.977  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:56.977  		--rc genhtml_branch_coverage=1
00:10:56.977  		--rc genhtml_function_coverage=1
00:10:56.977  		--rc genhtml_legend=1
00:10:56.977  		--rc geninfo_all_blocks=1
00:10:56.977  		--rc geninfo_unexecuted_blocks=1
00:10:56.977  		
00:10:56.978  		'
00:10:56.978    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:10:56.978  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:56.978  		--rc genhtml_branch_coverage=1
00:10:56.978  		--rc genhtml_function_coverage=1
00:10:56.978  		--rc genhtml_legend=1
00:10:56.978  		--rc geninfo_all_blocks=1
00:10:56.978  		--rc geninfo_unexecuted_blocks=1
00:10:56.978  		
00:10:56.978  		'
00:10:56.978    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:10:56.978  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:56.978  		--rc genhtml_branch_coverage=1
00:10:56.978  		--rc genhtml_function_coverage=1
00:10:56.978  		--rc genhtml_legend=1
00:10:56.978  		--rc geninfo_all_blocks=1
00:10:56.978  		--rc geninfo_unexecuted_blocks=1
00:10:56.978  		
00:10:56.978  		'
00:10:56.978   04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:10:56.978     04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s
00:10:56.978    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:56.978    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:56.978    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:56.978    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:56.978    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:56.978    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:56.978    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:56.978    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:56.978    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:56.978     04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:56.978    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:10:56.978    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:10:56.978    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:56.978    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:56.978    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:10:56.978    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:56.978    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:10:56.978     04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob
00:10:56.978     04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:56.978     04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:56.978     04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:56.978      04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:56.978      04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:56.978      04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:56.978      04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH
00:10:56.978      04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
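The PATH echoed above shows the same `/opt/golangci`, `/opt/protoc`, and `/opt/go` prefixes prepended many times, because paths/export.sh runs once per sourced script and prepends unconditionally. A guard like the following (a hypothetical helper, not part of the SPDK scripts) would keep the prepend idempotent:

```shell
# Prepend a directory to PATH only if it is not already present.
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;               # already on PATH: do nothing
        *) PATH="$1:$PATH" ;;
    esac
}

PATH=/usr/bin:/bin
path_prepend /opt/go/1.21.1/bin
path_prepend /opt/go/1.21.1/bin    # second call is a no-op
echo "$PATH"                       # → /opt/go/1.21.1/bin:/usr/bin:/bin
```

The duplication in the log is harmless (lookup still resolves to the first match) but it makes the traced PATH lines hard to read and grows the environment with every nested source.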
00:10:56.978    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0
00:10:56.978    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:10:56.978    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:10:56.978    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:56.978    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:56.978    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:56.978    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:10:56.978  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:10:56.978    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:10:56.978    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:10:56.978    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0
00:10:56.978   04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5
00:10:56.978   04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit
00:10:56.978   04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:10:56.978   04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:56.978   04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs
00:10:56.978   04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no
00:10:56.978   04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns
00:10:56.978   04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:56.978   04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:56.978    04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:56.978   04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:10:56.978   04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:10:56.978   04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable
00:10:56.978   04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=()
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=()
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=()
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=()
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=()
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=()
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=()
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:10:59.519  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:10:59.519  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:59.519   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:10:59.520  Found net devices under 0000:0a:00.0: cvl_0_0
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:10:59.520  Found net devices under 0000:0a:00.1: cvl_0_1
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
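The net-device discovery traced above (common.sh lines 410-429) is a sysfs glob: for each PCI function, the kernel exposes its network interfaces under `/sys/bus/pci/devices/<addr>/net/`. A minimal sketch of that lookup, run against a throwaway fake sysfs tree so it works without the real hardware (the `0000:0a:00.0` address and `cvl_0_0` interface name are taken from the log):

```shell
#!/usr/bin/env bash
# Sketch of common.sh's pci -> netdev resolution, on a fake sysfs tree.
root=$(mktemp -d)
mkdir -p "$root/0000:0a:00.0/net/cvl_0_0"       # stand-in for real sysfs

pci_net_devs=("$root/0000:0a:00.0/net/"*)       # glob the interface dirs
pci_net_devs=("${pci_net_devs[@]##*/}")         # strip paths, keep ifnames
echo "Found net devices under 0000:0a:00.0: ${pci_net_devs[*]}"
rm -rf "$root"
```

On a real host the glob expands to however many interfaces the driver registered for that function; the script then appends them all to `net_devs`.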
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:10:59.520  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:59.520  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms
00:10:59.520  
00:10:59.520  --- 10.0.0.2 ping statistics ---
00:10:59.520  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:59.520  rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:59.520  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:59.520  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms
00:10:59.520  
00:10:59.520  --- 10.0.0.1 ping statistics ---
00:10:59.520  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:59.520  rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms
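The `nvmf_tcp_init` sequence just traced builds a point-to-point TCP path between the two physical ports by moving the target port into its own network namespace. A dry-run sketch of those steps (interface names, namespace name, IPs, and port taken from the log; the commands are collected and printed rather than executed, since running them needs root and the real `cvl_0_*` ports):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup done by nvmf_tcp_init.
TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
CMDS=()
run() { CMDS+=("$*"); echo "+ $*"; }            # swap body for "$@" to execute

run ip netns add "$NS"                           # isolate the target side
run ip link set "$TARGET_IF" netns "$NS"         # move target port into it
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"  # initiator keeps host netns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                           # verify initiator -> target
```

Because the target port lives in `cvl_0_0_ns_spdk`, the target app itself must be launched with `ip netns exec`, which is why `NVMF_TARGET_NS_CMD` is prepended to `NVMF_APP` before `nvmf_tgt` starts below.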
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=185786
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 185786
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 185786 ']'
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:59.520  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:59.520   04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:59.520  [2024-12-09 04:01:27.768520] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:10:59.520  [2024-12-09 04:01:27.768625] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:59.520  [2024-12-09 04:01:27.842687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:59.520  [2024-12-09 04:01:27.900020] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:59.520  [2024-12-09 04:01:27.900090] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:59.520  [2024-12-09 04:01:27.900103] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:59.520  [2024-12-09 04:01:27.900129] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:59.520  [2024-12-09 04:01:27.900139] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:10:59.520  [2024-12-09 04:01:27.901748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:59.520  [2024-12-09 04:01:27.901827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:10:59.520  [2024-12-09 04:01:27.901772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:59.520  [2024-12-09 04:01:27.901831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:59.520   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:59.520   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0
00:10:59.520   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:10:59.520   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:59.520   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:59.520   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:59.520    04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats
00:10:59.520    04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:59.520    04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:59.520    04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:59.520   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{
00:10:59.520  "tick_rate": 2700000000,
00:10:59.520  "poll_groups": [
00:10:59.520  {
00:10:59.521  "name": "nvmf_tgt_poll_group_000",
00:10:59.521  "admin_qpairs": 0,
00:10:59.521  "io_qpairs": 0,
00:10:59.521  "current_admin_qpairs": 0,
00:10:59.521  "current_io_qpairs": 0,
00:10:59.521  "pending_bdev_io": 0,
00:10:59.521  "completed_nvme_io": 0,
00:10:59.521  "transports": []
00:10:59.521  },
00:10:59.521  {
00:10:59.521  "name": "nvmf_tgt_poll_group_001",
00:10:59.521  "admin_qpairs": 0,
00:10:59.521  "io_qpairs": 0,
00:10:59.521  "current_admin_qpairs": 0,
00:10:59.521  "current_io_qpairs": 0,
00:10:59.521  "pending_bdev_io": 0,
00:10:59.521  "completed_nvme_io": 0,
00:10:59.521  "transports": []
00:10:59.521  },
00:10:59.521  {
00:10:59.521  "name": "nvmf_tgt_poll_group_002",
00:10:59.521  "admin_qpairs": 0,
00:10:59.521  "io_qpairs": 0,
00:10:59.521  "current_admin_qpairs": 0,
00:10:59.521  "current_io_qpairs": 0,
00:10:59.521  "pending_bdev_io": 0,
00:10:59.521  "completed_nvme_io": 0,
00:10:59.521  "transports": []
00:10:59.521  },
00:10:59.521  {
00:10:59.521  "name": "nvmf_tgt_poll_group_003",
00:10:59.521  "admin_qpairs": 0,
00:10:59.521  "io_qpairs": 0,
00:10:59.521  "current_admin_qpairs": 0,
00:10:59.521  "current_io_qpairs": 0,
00:10:59.521  "pending_bdev_io": 0,
00:10:59.521  "completed_nvme_io": 0,
00:10:59.521  "transports": []
00:10:59.521  }
00:10:59.521  ]
00:10:59.521  }'
00:10:59.521    04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name'
00:10:59.521    04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name'
00:10:59.521    04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name'
00:10:59.521    04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l
00:10:59.779   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 ))
00:10:59.779    04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]'
00:10:59.779   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]]
00:10:59.779   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:10:59.779   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:59.779   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:59.779  [2024-12-09 04:01:28.151587] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:59.779   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:59.779    04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats
00:10:59.779    04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:59.779    04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:59.779    04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:59.779   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{
00:10:59.779  "tick_rate": 2700000000,
00:10:59.779  "poll_groups": [
00:10:59.779  {
00:10:59.779  "name": "nvmf_tgt_poll_group_000",
00:10:59.779  "admin_qpairs": 0,
00:10:59.779  "io_qpairs": 0,
00:10:59.779  "current_admin_qpairs": 0,
00:10:59.779  "current_io_qpairs": 0,
00:10:59.779  "pending_bdev_io": 0,
00:10:59.779  "completed_nvme_io": 0,
00:10:59.779  "transports": [
00:10:59.779  {
00:10:59.779  "trtype": "TCP"
00:10:59.779  }
00:10:59.779  ]
00:10:59.779  },
00:10:59.779  {
00:10:59.779  "name": "nvmf_tgt_poll_group_001",
00:10:59.779  "admin_qpairs": 0,
00:10:59.779  "io_qpairs": 0,
00:10:59.779  "current_admin_qpairs": 0,
00:10:59.779  "current_io_qpairs": 0,
00:10:59.779  "pending_bdev_io": 0,
00:10:59.779  "completed_nvme_io": 0,
00:10:59.779  "transports": [
00:10:59.779  {
00:10:59.779  "trtype": "TCP"
00:10:59.779  }
00:10:59.779  ]
00:10:59.779  },
00:10:59.779  {
00:10:59.779  "name": "nvmf_tgt_poll_group_002",
00:10:59.779  "admin_qpairs": 0,
00:10:59.779  "io_qpairs": 0,
00:10:59.779  "current_admin_qpairs": 0,
00:10:59.779  "current_io_qpairs": 0,
00:10:59.780  "pending_bdev_io": 0,
00:10:59.780  "completed_nvme_io": 0,
00:10:59.780  "transports": [
00:10:59.780  {
00:10:59.780  "trtype": "TCP"
00:10:59.780  }
00:10:59.780  ]
00:10:59.780  },
00:10:59.780  {
00:10:59.780  "name": "nvmf_tgt_poll_group_003",
00:10:59.780  "admin_qpairs": 0,
00:10:59.780  "io_qpairs": 0,
00:10:59.780  "current_admin_qpairs": 0,
00:10:59.780  "current_io_qpairs": 0,
00:10:59.780  "pending_bdev_io": 0,
00:10:59.780  "completed_nvme_io": 0,
00:10:59.780  "transports": [
00:10:59.780  {
00:10:59.780  "trtype": "TCP"
00:10:59.780  }
00:10:59.780  ]
00:10:59.780  }
00:10:59.780  ]
00:10:59.780  }'
00:10:59.780    04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs'
00:10:59.780    04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:10:59.780    04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:10:59.780    04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:10:59.780   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 ))
00:10:59.780    04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs'
00:10:59.780    04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:10:59.780    04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:10:59.780    04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:10:59.780   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 ))
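The `jcount`/`jsum` helpers traced above pipe `jq` output into `wc -l` and an awk summing loop to validate the `nvmf_get_stats` JSON (4 poll groups, qpair totals of 0). A self-contained stand-in using grep/awk instead of `jq`, so it runs anywhere, applied to a trimmed copy of the stats payload from the log:

```shell
#!/usr/bin/env bash
# grep/awk stand-ins for target/rpc.sh's jq-based jcount/jsum helpers.
stats='{"poll_groups":[
  {"name":"nvmf_tgt_poll_group_000","admin_qpairs":0,"io_qpairs":0},
  {"name":"nvmf_tgt_poll_group_001","admin_qpairs":0,"io_qpairs":0},
  {"name":"nvmf_tgt_poll_group_002","admin_qpairs":0,"io_qpairs":0},
  {"name":"nvmf_tgt_poll_group_003","admin_qpairs":0,"io_qpairs":0}]}'

# jcount: count occurrences of a key (real helper: jq "$filter" | wc -l)
jcount() { printf '%s' "$stats" | grep -o "\"$1\"" | wc -l; }
# jsum: sum a numeric field (real helper: jq + awk '{s+=$1} END{print s}')
jsum()   { printf '%s' "$stats" | grep -o "\"$1\":[0-9]*" \
           | awk -F: '{s+=$2} END{print s}'; }

groups=$(jcount name)        # 4 poll groups, matching (( 4 == 4 )) above
admins=$(jsum admin_qpairs)  # total admin qpairs, 0 on a fresh target
echo "groups=$groups admin_qpairs_total=$admins"
```

This mirrors the checks at rpc.sh lines 28 and 35-36; the real helpers use `jq` filters like `.poll_groups[].admin_qpairs`, which also cope with values grep cannot (nested arrays, non-integer fields).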
00:10:59.780   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']'
00:10:59.780   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64
00:10:59.780   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512
00:10:59.780   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:10:59.780   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:59.780   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:59.780  Malloc1
00:10:59.780   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:59.780   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:10:59.780   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:59.780   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:59.780   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:59.780   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:10:59.780   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:59.780   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:59.780   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:59.780   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
00:10:59.780   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:59.780   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:59.780   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:59.780   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:10:59.780   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:59.780   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:59.780  [2024-12-09 04:01:28.323350] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:10:59.780   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:59.780   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420
00:10:59.780   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0
00:10:59.780   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420
00:10:59.780   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme
00:10:59.780   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:59.780    04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme
00:10:59.780   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:59.780    04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme
00:10:59.780   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:59.780   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme
00:10:59.780   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]]
00:10:59.780   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420
00:10:59.780  [2024-12-09 04:01:28.346068] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55'
00:11:00.037  Failed to write to /dev/nvme-fabrics: Input/output error
00:11:00.037  could not add new controller: failed to write to nvme-fabrics device
00:11:00.037   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1
00:11:00.037   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:11:00.038   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:11:00.038   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
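The `NOT` wrapper driving the trace above (autotest_common.sh lines 652-679) inverts an expected failure into a pass: the `nvme connect` is supposed to be rejected, so its nonzero exit status is captured as `es` and the wrapper succeeds only when `es` is nonzero. A minimal sketch of that idea (simplified; the real helper also validates the argument via `valid_exec_arg` and treats `es > 128` signal exits specially):

```shell
#!/usr/bin/env bash
# Simplified sketch of autotest_common.sh's NOT helper.
NOT() {
  local es=0
  "$@" || es=$?        # run the command, capture its exit status
  (( es != 0 ))        # succeed only if the command failed
}

NOT false && r1=pass || r1=fail   # false fails -> NOT passes
NOT true  && r2=pass || r2=fail   # true succeeds -> NOT fails
echo "$r1 $r2"
```

In the log, `es=1` from the denied connect makes `(( !es == 0 ))` evaluate true, so the `NOT nvme connect ...` step passes and the test proceeds.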
00:11:00.038   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:11:00.038   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:00.038   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:00.038   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:00.038   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:00.602   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME
00:11:00.602   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:11:00.602   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:11:00.602   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:11:00.602   04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:11:02.498   04:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:11:02.498    04:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:11:02.498    04:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:11:02.498   04:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:11:02.498   04:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:11:02.498   04:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:11:02.498   04:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:02.498  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:02.498   04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:02.498   04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:11:02.498   04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:11:02.498   04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:02.756   04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:11:02.756   04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:02.756   04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
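The connect/disconnect cycle just completed, together with the denied attempts before and after it, exercises the subsystem's host ACL: a host NQN is rejected until it is added (or until any-host access is enabled). A dry-run sketch of the whole sequence (NQNs, address, and RPC/flag spellings taken from the log's `rpc_cmd` and `nvme` invocations; commands are collected rather than executed, since they need the live target):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the host-ACL round trip in target/rpc.sh lines 58-73.
SUBNQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
STEPS=()
step() { STEPS+=("$*"); echo "+ $*"; }

step nvme connect -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420   # denied: not in ACL
step rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"   # allow this host NQN
step nvme connect -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420   # now accepted
step nvme disconnect -n "$SUBNQN"
step rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN" # denied again
step rpc_cmd nvmf_subsystem_allow_any_host -e "$SUBNQN"      # or open to all hosts
echo "${#STEPS[@]} steps"
```

With `allow_any_host` enabled the final reconnect below succeeds even though the host NQN is no longer listed, which is exactly what the remainder of the trace verifies.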
00:11:02.756   04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:11:02.756   04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:02.756   04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:02.756   04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:02.756   04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:02.756   04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0
00:11:02.756   04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:02.756   04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme
00:11:02.756   04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:02.756    04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme
00:11:02.756   04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:02.756    04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme
00:11:02.756   04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:02.756   04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme
00:11:02.756   04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]]
00:11:02.756   04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:02.756  [2024-12-09 04:01:31.125714] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55'
00:11:02.756  Failed to write to /dev/nvme-fabrics: Input/output error
00:11:02.756  could not add new controller: failed to write to nvme-fabrics device
00:11:02.756   04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1
00:11:02.756   04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:11:02.756   04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:11:02.756   04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:11:02.756   04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
00:11:02.756   04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:02.756   04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:02.756   04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:02.756   04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:03.323   04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME
00:11:03.323   04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:11:03.323   04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:11:03.323   04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:11:03.323   04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:11:05.852   04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:11:05.852    04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:11:05.852    04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:11:05.852   04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:11:05.852   04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:11:05.852   04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:11:05.852   04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:05.852  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:05.852   04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:05.852   04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:11:05.852   04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:11:05.852   04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:05.852   04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:11:05.852   04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:05.852   04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:11:05.852   04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:05.852   04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:05.852   04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:05.852   04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:05.852    04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5
00:11:05.852   04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:11:05.852   04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:11:05.852   04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:05.852   04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:05.852   04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:05.852   04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:05.852   04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:05.852   04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:05.852  [2024-12-09 04:01:33.956496] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:05.852   04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:05.852   04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:11:05.852   04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:05.852   04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:05.852   04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:05.852   04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:05.852   04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:05.852   04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:05.852   04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:05.852   04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:06.110   04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:11:06.110   04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:11:06.110   04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:11:06.110   04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:11:06.110   04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:11:08.645   04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:11:08.645    04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:11:08.645    04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:11:08.645   04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:11:08.645   04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:11:08.645   04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:11:08.645   04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:08.645  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:08.645   04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:08.645   04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:11:08.645   04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:11:08.645   04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:08.645   04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:11:08.645   04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:08.645   04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:11:08.645   04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:11:08.645   04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:08.645   04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:08.645   04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:08.645   04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:08.645   04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:08.645   04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:08.645   04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:08.645   04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:11:08.645   04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:11:08.645   04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:08.645   04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:08.645   04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:08.645   04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:08.645   04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:08.645   04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:08.645  [2024-12-09 04:01:36.794847] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:08.645   04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:08.645   04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:11:08.645   04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:08.645   04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:08.645   04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:08.645   04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:08.645   04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:08.645   04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:08.645   04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:08.645   04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:09.211   04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:11:09.211   04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:11:09.211   04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:11:09.211   04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:11:09.211   04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:11:11.106   04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:11:11.106    04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:11:11.106    04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:11:11.106   04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:11:11.106   04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:11:11.106   04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:11:11.106   04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:11.106  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:11.106   04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:11.106   04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:11:11.106   04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:11:11.106   04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:11.106   04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:11:11.106   04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:11.106   04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:11:11.106   04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:11:11.106   04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:11.106   04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:11.106   04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:11.106   04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:11.106   04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:11.106   04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:11.106   04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:11.106   04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:11:11.106   04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:11:11.106   04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:11.106   04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:11.106   04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:11.106   04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:11.106   04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:11.106   04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:11.106  [2024-12-09 04:01:39.628203] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:11.106   04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:11.106   04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:11:11.106   04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:11.106   04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:11.106   04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:11.106   04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:11.106   04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:11.106   04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:11.106   04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:11.106   04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:12.038   04:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:11:12.038   04:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:11:12.038   04:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:11:12.038   04:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:11:12.038   04:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:11:13.939   04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:11:13.939    04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:11:13.939    04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:11:13.939   04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:11:13.939   04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:11:13.939   04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:11:13.939   04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:13.939  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:13.939   04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:13.939   04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:11:13.939   04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:11:13.939   04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:13.939   04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:11:13.939   04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:13.939   04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:11:13.939   04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:11:13.939   04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:13.939   04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:13.939   04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:13.939   04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:13.939   04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:13.939   04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:13.939   04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:13.939   04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:11:13.939   04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:11:13.939   04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:13.939   04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:13.939   04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:13.939   04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:13.939   04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:13.939   04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:13.939  [2024-12-09 04:01:42.446373] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:13.939   04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:13.939   04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:11:13.939   04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:13.939   04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:13.939   04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:13.939   04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:13.939   04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:13.939   04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:13.939   04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:13.939   04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:14.505   04:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:11:14.505   04:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:11:14.505   04:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:11:14.505   04:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:11:14.505   04:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:11:17.030   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:11:17.030    04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:11:17.030    04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:11:17.030   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:11:17.030   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:11:17.030   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:11:17.030   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:17.030  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:17.030   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:17.030   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:11:17.030   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:11:17.030   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:17.030   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:11:17.030   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:17.030   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:11:17.030   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:11:17.030   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:17.030   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:17.030   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:17.030   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:17.030   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:17.030   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:17.030   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:17.030   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:11:17.030   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:11:17.030   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:17.030   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:17.030   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:17.030   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:17.030   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:17.030   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:17.030  [2024-12-09 04:01:45.217969] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:17.030   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:17.030   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:11:17.030   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:17.030   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:17.030   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:17.030   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:17.030   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:17.030   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:17.030   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:17.031   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:17.287   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:11:17.287   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:11:17.287   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:11:17.287   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:11:17.287   04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:11:19.819   04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:11:19.819    04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:11:19.819    04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:11:19.819   04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:11:19.819   04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:11:19.819   04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:11:19.819   04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:19.819  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:19.819   04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:19.819   04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:11:19.819   04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:11:19.819   04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:19.819   04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:11:19.819   04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:19.819   04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
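The `waitforserial` trace above (autotest_common.sh@1202-1212) polls `lsblk -l -o NAME,SERIAL | grep -c <serial>` until the expected number of NVMe devices appears, retrying up to 16 times with a 2-second sleep. A minimal POSIX sketch of that polling pattern, with illustrative names (`wait_for_count` is not the actual SPDK helper, and this version checks before sleeping rather than after):

```shell
# Retry a counting check until it matches the expected value, as in
# waitforserial: e.g. wait_for_count 1 "lsblk -l -o NAME,SERIAL | grep -c SERIAL"
# would poll for one attached device. Sketch only; names are illustrative.
wait_for_count() {
    want=$1    # expected count (nvme_device_counter in the trace)
    check=$2   # command whose stdout is the observed count
    i=0
    while [ "$i" -le 15 ]; do
        got=$($check)
        [ "$got" -eq "$want" ] && return 0
        i=$((i + 1))
        sleep 2
    done
    return 1
}
```

With a check that already reports the expected count, the helper returns immediately, matching the single-iteration success seen in the log.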
00:11:19.819   04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:11:19.819   04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.819   04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:19.819   04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.819   04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:19.819   04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.819   04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:19.819   04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.819    04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5
00:11:19.819   04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:11:19.819   04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:11:19.819   04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.819   04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:19.819   04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.819   04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:19.819   04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.819   04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:19.819  [2024-12-09 04:01:47.979047] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:19.819   04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.819   04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:11:19.819   04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.819   04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:19.819   04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.819   04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:19.819   04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.820   04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:19.820   04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.820   04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:19.820   04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.820   04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:19.820  [2024-12-09 04:01:48.027115] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:19.820  [2024-12-09 04:01:48.075285] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:19.820  [2024-12-09 04:01:48.123473] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.820   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:19.821  [2024-12-09 04:01:48.171655] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.821    04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats
00:11:19.821    04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.821    04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:19.821    04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:11:19.821  "tick_rate": 2700000000,
00:11:19.821  "poll_groups": [
00:11:19.821  {
00:11:19.821  "name": "nvmf_tgt_poll_group_000",
00:11:19.821  "admin_qpairs": 2,
00:11:19.821  "io_qpairs": 84,
00:11:19.821  "current_admin_qpairs": 0,
00:11:19.821  "current_io_qpairs": 0,
00:11:19.821  "pending_bdev_io": 0,
00:11:19.821  "completed_nvme_io": 183,
00:11:19.821  "transports": [
00:11:19.821  {
00:11:19.821  "trtype": "TCP"
00:11:19.821  }
00:11:19.821  ]
00:11:19.821  },
00:11:19.821  {
00:11:19.821  "name": "nvmf_tgt_poll_group_001",
00:11:19.821  "admin_qpairs": 2,
00:11:19.821  "io_qpairs": 84,
00:11:19.821  "current_admin_qpairs": 0,
00:11:19.821  "current_io_qpairs": 0,
00:11:19.821  "pending_bdev_io": 0,
00:11:19.821  "completed_nvme_io": 156,
00:11:19.821  "transports": [
00:11:19.821  {
00:11:19.821  "trtype": "TCP"
00:11:19.821  }
00:11:19.821  ]
00:11:19.821  },
00:11:19.821  {
00:11:19.821  "name": "nvmf_tgt_poll_group_002",
00:11:19.821  "admin_qpairs": 1,
00:11:19.821  "io_qpairs": 84,
00:11:19.821  "current_admin_qpairs": 0,
00:11:19.821  "current_io_qpairs": 0,
00:11:19.821  "pending_bdev_io": 0,
00:11:19.821  "completed_nvme_io": 163,
00:11:19.821  "transports": [
00:11:19.821  {
00:11:19.821  "trtype": "TCP"
00:11:19.821  }
00:11:19.821  ]
00:11:19.821  },
00:11:19.821  {
00:11:19.821  "name": "nvmf_tgt_poll_group_003",
00:11:19.821  "admin_qpairs": 2,
00:11:19.821  "io_qpairs": 84,
00:11:19.821  "current_admin_qpairs": 0,
00:11:19.821  "current_io_qpairs": 0,
00:11:19.821  "pending_bdev_io": 0,
00:11:19.821  "completed_nvme_io": 184,
00:11:19.821  "transports": [
00:11:19.821  {
00:11:19.821  "trtype": "TCP"
00:11:19.821  }
00:11:19.821  ]
00:11:19.821  }
00:11:19.821  ]
00:11:19.821  }'
00:11:19.821    04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs'
00:11:19.821    04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:11:19.821    04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:11:19.821    04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 ))
00:11:19.821    04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs'
00:11:19.821    04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:11:19.821    04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:11:19.821    04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 ))
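The `jsum` helper traced at target/rpc.sh@112-113 pipes a `jq` filter over the `nvmf_get_stats` JSON into an `awk` accumulator. A sketch of the summation stage, assuming the `jq` step (e.g. `jq '.poll_groups[].admin_qpairs'`) has already produced one number per line:

```shell
# Sum one number per input line, mirroring the awk stage of jsum:
#   jq '.poll_groups[].admin_qpairs' | jsum_awk
# The jq extraction is not shown here; jsum_awk is an illustrative name.
jsum_awk() {
    awk '{s += $1} END {print s}'
}

# The four admin_qpairs values from the stats dump above:
printf '2\n2\n1\n2\n' | jsum_awk    # → 7, matching (( 7 > 0 )) in the log
```

Summing `io_qpairs` the same way gives 84 x 4 = 336, consistent with the `(( 336 > 0 ))` check.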
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']'
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:19.821   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:19.822  rmmod nvme_tcp
00:11:19.822  rmmod nvme_fabrics
00:11:19.822  rmmod nvme_keyring
00:11:19.822   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:19.822   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e
00:11:19.822   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0
00:11:19.822   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 185786 ']'
00:11:19.822   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 185786
00:11:19.822   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 185786 ']'
00:11:19.822   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 185786
00:11:19.822    04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname
00:11:19.822   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:19.822    04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 185786
00:11:20.080   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:20.080   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:20.080   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 185786'
00:11:20.080  killing process with pid 185786
00:11:20.080   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 185786
00:11:20.080   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 185786
00:11:20.341   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:11:20.341   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:11:20.341   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:11:20.341   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr
00:11:20.341   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save
00:11:20.341   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:11:20.341   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore
00:11:20.341   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:11:20.341   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns
00:11:20.341   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:20.341   04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:20.341    04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:22.244   04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:11:22.244  
00:11:22.244  real	0m25.446s
00:11:22.244  user	1m22.240s
00:11:22.244  sys	0m4.446s
00:11:22.244   04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:22.244   04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:22.244  ************************************
00:11:22.244  END TEST nvmf_rpc
00:11:22.244  ************************************
00:11:22.244   04:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:11:22.244   04:01:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:22.244   04:01:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:22.244   04:01:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:11:22.244  ************************************
00:11:22.244  START TEST nvmf_invalid
00:11:22.244  ************************************
00:11:22.244   04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:11:22.244  * Looking for test storage...
00:11:22.244  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:11:22.244    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:22.244     04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version
00:11:22.244     04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:22.502    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:22.502    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:22.502    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:22.502    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:22.502    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-:
00:11:22.502    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1
00:11:22.502    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-:
00:11:22.502    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2
00:11:22.502    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<'
00:11:22.502    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2
00:11:22.502    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1
00:11:22.502    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:22.502    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in
00:11:22.502    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1
00:11:22.502    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:22.502    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:22.502     04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1
00:11:22.502     04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1
00:11:22.502     04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:22.502     04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1
00:11:22.502    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1
00:11:22.502     04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2
00:11:22.502     04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2
00:11:22.502     04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:22.502     04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2
00:11:22.502    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2
00:11:22.502    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:22.502    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:22.502    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0
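The `lt 1.15 2` trace above (scripts/common.sh `cmp_versions`) splits each version on `.-:` and compares field by field. A compact sketch of the same "is version A less than version B" question, using GNU `sort -V` instead of the explicit field loop (an assumption: `sort -V` is available, and this does not reproduce the real helper's separator handling):

```shell
# ver_lt A B: succeed if version A sorts strictly before version B.
# Swapped-in technique: GNU version sort, not the field-by-field loop
# from scripts/common.sh. Illustrative name, not the SPDK helper.
ver_lt() {
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ] \
        && [ "$1" != "$2" ]
}
```

As in the trace, `ver_lt 1.15 2` succeeds (1.15 predates lcov 2.x), so the older `--rc lcov_branch_coverage=...` option spelling is selected.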
00:11:22.502    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:22.502    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:22.502  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:22.502  		--rc genhtml_branch_coverage=1
00:11:22.502  		--rc genhtml_function_coverage=1
00:11:22.502  		--rc genhtml_legend=1
00:11:22.503  		--rc geninfo_all_blocks=1
00:11:22.503  		--rc geninfo_unexecuted_blocks=1
00:11:22.503  		
00:11:22.503  		'
00:11:22.503    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:22.503  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:22.503  		--rc genhtml_branch_coverage=1
00:11:22.503  		--rc genhtml_function_coverage=1
00:11:22.503  		--rc genhtml_legend=1
00:11:22.503  		--rc geninfo_all_blocks=1
00:11:22.503  		--rc geninfo_unexecuted_blocks=1
00:11:22.503  		
00:11:22.503  		'
00:11:22.503    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:11:22.503  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:22.503  		--rc genhtml_branch_coverage=1
00:11:22.503  		--rc genhtml_function_coverage=1
00:11:22.503  		--rc genhtml_legend=1
00:11:22.503  		--rc geninfo_all_blocks=1
00:11:22.503  		--rc geninfo_unexecuted_blocks=1
00:11:22.503  		
00:11:22.503  		'
00:11:22.503    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:11:22.503  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:22.503  		--rc genhtml_branch_coverage=1
00:11:22.503  		--rc genhtml_function_coverage=1
00:11:22.503  		--rc genhtml_legend=1
00:11:22.503  		--rc geninfo_all_blocks=1
00:11:22.503  		--rc geninfo_unexecuted_blocks=1
00:11:22.503  		
00:11:22.503  		'
00:11:22.503   04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:11:22.503     04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s
00:11:22.503    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:22.503    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:22.503    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:22.503    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:22.503    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:22.503    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:22.503    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:22.503    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:22.503    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:22.503     04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:22.503    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:11:22.503    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:11:22.503    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:22.503    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:22.503    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:11:22.503    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:22.503    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:11:22.503     04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob
00:11:22.503     04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:22.503     04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:22.503     04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:22.503      04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:22.503      04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:22.503      04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:22.503      04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH
00:11:22.503      04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:22.503    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0
00:11:22.503    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:11:22.503    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:11:22.503    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:22.503    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:22.503    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:22.503    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:11:22.503  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:11:22.503    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:11:22.503    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:11:22.503    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0
00:11:22.503   04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
00:11:22.503   04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:11:22.503   04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode
00:11:22.503   04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar
00:11:22.503   04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0
00:11:22.503   04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit
00:11:22.503   04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:11:22.503   04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:22.503   04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs
00:11:22.503   04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no
00:11:22.503   04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns
00:11:22.503   04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:22.503   04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:22.503    04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:22.503   04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:11:22.503   04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:11:22.503   04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable
00:11:22.503   04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=()
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=()
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=()
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=()
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=()
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=()
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=()
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:11:25.032  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:11:25.032  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:25.032   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]]
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:11:25.033  Found net devices under 0000:0a:00.0: cvl_0_0
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]]
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:11:25.033  Found net devices under 0000:0a:00.1: cvl_0_1
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:11:25.033  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:25.033  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms
00:11:25.033  
00:11:25.033  --- 10.0.0.2 ping statistics ---
00:11:25.033  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:25.033  rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:25.033  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:25.033  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms
00:11:25.033  
00:11:25.033  --- 10.0.0.1 ping statistics ---
00:11:25.033  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:25.033  rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=190290
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 190290
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 190290 ']'
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:25.033  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:25.033   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:11:25.033  [2024-12-09 04:01:53.357192] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:11:25.033  [2024-12-09 04:01:53.357296] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:25.033  [2024-12-09 04:01:53.430489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:25.033  [2024-12-09 04:01:53.487701] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:25.033  [2024-12-09 04:01:53.487760] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:25.033  [2024-12-09 04:01:53.487790] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:25.033  [2024-12-09 04:01:53.487801] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:25.033  [2024-12-09 04:01:53.487810] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:25.033  [2024-12-09 04:01:53.489392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:25.033  [2024-12-09 04:01:53.489421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:11:25.033  [2024-12-09 04:01:53.489450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:11:25.033  [2024-12-09 04:01:53.489454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:25.291   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:25.291   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0
00:11:25.291   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:11:25.291   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:25.291   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:11:25.291   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:25.291   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:11:25.291    04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode22530
00:11:25.547  [2024-12-09 04:01:53.929365] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
00:11:25.547   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request:
00:11:25.547  {
00:11:25.547    "nqn": "nqn.2016-06.io.spdk:cnode22530",
00:11:25.547    "tgt_name": "foobar",
00:11:25.547    "method": "nvmf_create_subsystem",
00:11:25.547    "req_id": 1
00:11:25.547  }
00:11:25.547  Got JSON-RPC error response
00:11:25.547  response:
00:11:25.547  {
00:11:25.547    "code": -32603,
00:11:25.547    "message": "Unable to find target foobar"
00:11:25.547  }'
00:11:25.547   04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request:
00:11:25.547  {
00:11:25.547    "nqn": "nqn.2016-06.io.spdk:cnode22530",
00:11:25.547    "tgt_name": "foobar",
00:11:25.547    "method": "nvmf_create_subsystem",
00:11:25.547    "req_id": 1
00:11:25.547  }
00:11:25.547  Got JSON-RPC error response
00:11:25.547  response:
00:11:25.547  {
00:11:25.547    "code": -32603,
00:11:25.547    "message": "Unable to find target foobar"
00:11:25.547  } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
00:11:25.547     04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f'
00:11:25.547    04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode7522
00:11:25.805  [2024-12-09 04:01:54.198302] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7522: invalid serial number 'SPDKISFASTANDAWESOME'
00:11:25.805   04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request:
00:11:25.805  {
00:11:25.805    "nqn": "nqn.2016-06.io.spdk:cnode7522",
00:11:25.805    "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:11:25.805    "method": "nvmf_create_subsystem",
00:11:25.805    "req_id": 1
00:11:25.805  }
00:11:25.805  Got JSON-RPC error response
00:11:25.805  response:
00:11:25.805  {
00:11:25.805    "code": -32602,
00:11:25.805    "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:11:25.805  }'
00:11:25.805   04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request:
00:11:25.805  {
00:11:25.805    "nqn": "nqn.2016-06.io.spdk:cnode7522",
00:11:25.805    "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:11:25.805    "method": "nvmf_create_subsystem",
00:11:25.805    "req_id": 1
00:11:25.805  }
00:11:25.805  Got JSON-RPC error response
00:11:25.805  response:
00:11:25.805  {
00:11:25.805    "code": -32602,
00:11:25.805    "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:11:25.805  } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:11:25.805     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f'
00:11:25.805    04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode31348
00:11:26.063  [2024-12-09 04:01:54.523358] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31348: invalid model number 'SPDK_Controller'
00:11:26.063   04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request:
00:11:26.063  {
00:11:26.063    "nqn": "nqn.2016-06.io.spdk:cnode31348",
00:11:26.063    "model_number": "SPDK_Controller\u001f",
00:11:26.063    "method": "nvmf_create_subsystem",
00:11:26.063    "req_id": 1
00:11:26.063  }
00:11:26.063  Got JSON-RPC error response
00:11:26.063  response:
00:11:26.063  {
00:11:26.063    "code": -32602,
00:11:26.063    "message": "Invalid MN SPDK_Controller\u001f"
00:11:26.063  }'
00:11:26.063   04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request:
00:11:26.063  {
00:11:26.063    "nqn": "nqn.2016-06.io.spdk:cnode31348",
00:11:26.063    "model_number": "SPDK_Controller\u001f",
00:11:26.063    "method": "nvmf_create_subsystem",
00:11:26.063    "req_id": 1
00:11:26.063  }
00:11:26.063  Got JSON-RPC error response
00:11:26.063  response:
00:11:26.063  {
00:11:26.063    "code": -32602,
00:11:26.063    "message": "Invalid MN SPDK_Controller\u001f"
00:11:26.063  } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:11:26.063     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21
00:11:26.063     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.064       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104
00:11:26.064      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68'
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.064       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55
00:11:26.064      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37'
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.064       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126
00:11:26.064      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e'
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~'
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.064       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113
00:11:26.064      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71'
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.064       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80
00:11:26.064      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50'
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.064       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62
00:11:26.064      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e'
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>'
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.064       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96
00:11:26.064      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60'
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`'
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.064       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96
00:11:26.064      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60'
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`'
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.064       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117
00:11:26.064      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75'
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.064       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80
00:11:26.064      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50'
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.064       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108
00:11:26.064      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c'
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.064       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37
00:11:26.064      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25'
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=%
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.064       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37
00:11:26.064      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25'
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=%
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.064       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104
00:11:26.064      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68'
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.064       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57
00:11:26.064      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39'
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.064       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92
00:11:26.064      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c'
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\'
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.064       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56
00:11:26.064      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38'
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.064       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71
00:11:26.064      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47'
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.064     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.064       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100
00:11:26.064      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64'
00:11:26.065     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d
00:11:26.065     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.065     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.065       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111
00:11:26.065      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f'
00:11:26.065     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o
00:11:26.065     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.065     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.065       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86
00:11:26.065      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56'
00:11:26.065     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V
00:11:26.065     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.065     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.065     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ h == \- ]]
00:11:26.065     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'h7~qP>``uPl%%h9\8GdoV'
00:11:26.065    04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'h7~qP>``uPl%%h9\8GdoV' nqn.2016-06.io.spdk:cnode2844
00:11:26.322  [2024-12-09 04:01:54.884505] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2844: invalid serial number 'h7~qP>``uPl%%h9\8GdoV'
00:11:26.581   04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request:
00:11:26.581  {
00:11:26.581    "nqn": "nqn.2016-06.io.spdk:cnode2844",
00:11:26.581    "serial_number": "h7~qP>``uPl%%h9\\8GdoV",
00:11:26.581    "method": "nvmf_create_subsystem",
00:11:26.581    "req_id": 1
00:11:26.581  }
00:11:26.581  Got JSON-RPC error response
00:11:26.581  response:
00:11:26.581  {
00:11:26.581    "code": -32602,
00:11:26.581    "message": "Invalid SN h7~qP>``uPl%%h9\\8GdoV"
00:11:26.581  }'
00:11:26.581   04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request:
00:11:26.581  {
00:11:26.581    "nqn": "nqn.2016-06.io.spdk:cnode2844",
00:11:26.581    "serial_number": "h7~qP>``uPl%%h9\\8GdoV",
00:11:26.581    "method": "nvmf_create_subsystem",
00:11:26.581    "req_id": 1
00:11:26.581  }
00:11:26.581  Got JSON-RPC error response
00:11:26.581  response:
00:11:26.581  {
00:11:26.581    "code": -32602,
00:11:26.581    "message": "Invalid SN h7~qP>``uPl%%h9\\8GdoV"
00:11:26.581  } == *\I\n\v\a\l\i\d\ \S\N* ]]
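The long xtrace runs above are `gen_random_s` building a string one character at a time: pick a random code from the `chars` array (ASCII 32 through 127), render it with `printf %x` plus `echo -e`, and append until `length` is reached. A compact reconstruction of that logic as it appears in the trace (a sketch, not the exact helper from target/invalid.sh):

```shell
# Reconstruction of the gen_random_s pattern traced above: build a string of
# $1 random characters drawn from ASCII codes 32..127.
gen_random_s() {
	local length=$1 ll string=""
	local chars=({32..127})  # same candidate set as the chars array in the log
	for ((ll = 0; ll < length; ll++)); do
		# printf %x turns the decimal code into hex; echo -e emits that byte,
		# mirroring the two-step `printf %x N` / `echo -e '\xNN'` in the trace.
		string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
	done
	echo "$string"
}
```

Because every candidate code is a single ASCII byte and newline (code 10) is excluded, the result always has exactly `length` characters, which is why the test can request a 21-byte serial number or a 41-byte model number and rely on the target rejecting it for content, not length drift.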
00:11:26.581     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41
00:11:26.581     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll
00:11:26.581     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:11:26.581     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:11:26.581     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:11:26.581     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:11:26.581     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.581       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117
00:11:26.581      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75'
00:11:26.581     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u
00:11:26.581     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.581     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.581       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123
00:11:26.581      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b'
00:11:26.581     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{'
00:11:26.581     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.581     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.581       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68
00:11:26.581      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44'
00:11:26.581     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D
00:11:26.581     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.581     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.581       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95
00:11:26.581      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f'
00:11:26.581     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_
00:11:26.581     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.581     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.581       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114
00:11:26.581      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72'
00:11:26.581     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r
00:11:26.581     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.581     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.581       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69
00:11:26.581      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45'
00:11:26.581     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E
00:11:26.581     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.581     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.581       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118
00:11:26.581      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76'
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.582       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69
00:11:26.582      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45'
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.582       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62
00:11:26.582      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e'
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>'
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.582       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38
00:11:26.582      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26'
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&'
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.582       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32
00:11:26.582      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20'
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' '
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.582       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48
00:11:26.582      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30'
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.582       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49
00:11:26.582      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31'
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.582       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49
00:11:26.582      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31'
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.582       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83
00:11:26.582      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53'
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.582       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81
00:11:26.582      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51'
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.582       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127
00:11:26.582      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f'
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177'
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.582       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55
00:11:26.582      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37'
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.582       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126
00:11:26.582      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e'
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~'
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.582       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65
00:11:26.582      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41'
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.582       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96
00:11:26.582      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60'
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`'
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.582       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37
00:11:26.582      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25'
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=%
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.582       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125
00:11:26.582      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d'
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}'
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.582       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94
00:11:26.582      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e'
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^'
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.582       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53
00:11:26.582      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35'
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.582       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83
00:11:26.582      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53'
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.582       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103
00:11:26.582      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67'
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.582       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37
00:11:26.582      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25'
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=%
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.582       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84
00:11:26.582      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54'
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.582       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84
00:11:26.582      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54'
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.582       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122
00:11:26.582      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a'
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.582       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60
00:11:26.582      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c'
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<'
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.582       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49
00:11:26.582      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31'
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.582       04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100
00:11:26.582      04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64'
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.582     04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.582       04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40
00:11:26.582      04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28'
00:11:26.582     04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='('
00:11:26.582     04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.582     04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.582       04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119
00:11:26.582      04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77'
00:11:26.582     04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w
00:11:26.582     04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.582     04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.582       04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58
00:11:26.582      04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a'
00:11:26.582     04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=:
00:11:26.582     04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.582     04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.582       04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119
00:11:26.582      04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77'
00:11:26.582     04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w
00:11:26.582     04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.582     04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.582       04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68
00:11:26.582      04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44'
00:11:26.582     04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D
00:11:26.582     04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.582     04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.582       04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32
00:11:26.583      04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20'
00:11:26.583     04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' '
00:11:26.583     04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.583     04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.583       04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41
00:11:26.583      04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29'
00:11:26.583     04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')'
00:11:26.583     04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:26.583     04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:26.583     04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ u == \- ]]
00:11:26.583     04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'u{D_rEvE>& 011SQ7~A`%}^5Sg%TTz<1d(w:wD )'
00:11:26.583    04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'u{D_rEvE>& 011SQ7~A`%}^5Sg%TTz<1d(w:wD )' nqn.2016-06.io.spdk:cnode19814
00:11:26.839  [2024-12-09 04:01:55.269764] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19814: invalid model number 'u{D_rEvE>& 011SQ7~A`%}^5Sg%TTz<1d(w:wD )'
00:11:26.839   04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request:
00:11:26.839  {
00:11:26.839    "nqn": "nqn.2016-06.io.spdk:cnode19814",
00:11:26.839    "model_number": "u{D_rEvE>& 011SQ\u007f7~A`%}^5Sg%TTz<1d(w:wD )",
00:11:26.839    "method": "nvmf_create_subsystem",
00:11:26.839    "req_id": 1
00:11:26.839  }
00:11:26.839  Got JSON-RPC error response
00:11:26.839  response:
00:11:26.839  {
00:11:26.839    "code": -32602,
00:11:26.839    "message": "Invalid MN u{D_rEvE>& 011SQ\u007f7~A`%}^5Sg%TTz<1d(w:wD )"
00:11:26.839  }'
00:11:26.839   04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request:
00:11:26.839  {
00:11:26.839    "nqn": "nqn.2016-06.io.spdk:cnode19814",
00:11:26.839    "model_number": "u{D_rEvE>& 011SQ\u007f7~A`%}^5Sg%TTz<1d(w:wD )",
00:11:26.839    "method": "nvmf_create_subsystem",
00:11:26.839    "req_id": 1
00:11:26.839  }
00:11:26.839  Got JSON-RPC error response
00:11:26.839  response:
00:11:26.839  {
00:11:26.839    "code": -32602,
00:11:26.839    "message": "Invalid MN u{D_rEvE>& 011SQ\u007f7~A`%}^5Sg%TTz<1d(w:wD )"
00:11:26.839  } == *\I\n\v\a\l\i\d\ \M\N* ]]
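The `@24`/`@25` loop traced above builds the random model-number string one character at a time: pick a codepoint, render it with `printf %x` plus `echo -e`, and append it to `string`. A minimal standalone sketch of that pattern follows; the fixed length and the printable-ASCII range used here are assumptions for illustration, not the script's actual parameters.

```shell
# Sketch of the character-by-character string build seen in the log.
# Codepoints are restricted to 0x21..0x7e (printable, non-space) so each
# command substitution yields exactly one character.
string=''
length=8
for (( ll = 0; ll < length; ll++ )); do
    code=$(( RANDOM % 94 + 33 ))   # random printable ASCII codepoint
    hex=$(printf %x "$code")       # e.g. 67
    ch=$(echo -e "\x$hex")         # e.g. 'g'
    string+=$ch                    # mirrors the log's string+=... steps
done
echo "${#string}"
```

The resulting string is then passed as the `-d` (model number) argument to `nvmf_create_subsystem`, and the test asserts that the JSON-RPC error contains `Invalid MN`, as the comparison above shows.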
00:11:26.839   04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp
00:11:27.095  [2024-12-09 04:01:55.538727] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:27.095   04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a
00:11:27.351   04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]]
00:11:27.351    04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo ''
00:11:27.351    04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1
00:11:27.351   04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=
00:11:27.351    04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421
00:11:27.608  [2024-12-09 04:01:56.092534] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2
00:11:27.608   04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request:
00:11:27.608  {
00:11:27.608    "nqn": "nqn.2016-06.io.spdk:cnode",
00:11:27.608    "listen_address": {
00:11:27.608      "trtype": "tcp",
00:11:27.608      "traddr": "",
00:11:27.608      "trsvcid": "4421"
00:11:27.608    },
00:11:27.608    "method": "nvmf_subsystem_remove_listener",
00:11:27.608    "req_id": 1
00:11:27.608  }
00:11:27.608  Got JSON-RPC error response
00:11:27.608  response:
00:11:27.608  {
00:11:27.608    "code": -32602,
00:11:27.608    "message": "Invalid parameters"
00:11:27.608  }'
00:11:27.608   04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request:
00:11:27.608  {
00:11:27.608    "nqn": "nqn.2016-06.io.spdk:cnode",
00:11:27.608    "listen_address": {
00:11:27.608      "trtype": "tcp",
00:11:27.608      "traddr": "",
00:11:27.608      "trsvcid": "4421"
00:11:27.608    },
00:11:27.608    "method": "nvmf_subsystem_remove_listener",
00:11:27.608    "req_id": 1
00:11:27.608  }
00:11:27.608  Got JSON-RPC error response
00:11:27.608  response:
00:11:27.608  {
00:11:27.608    "code": -32602,
00:11:27.608    "message": "Invalid parameters"
00:11:27.608  } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]]
00:11:27.608    04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31283 -i 0
00:11:27.866  [2024-12-09 04:01:56.373431] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31283: invalid cntlid range [0-65519]
00:11:27.866   04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request:
00:11:27.866  {
00:11:27.866    "nqn": "nqn.2016-06.io.spdk:cnode31283",
00:11:27.866    "min_cntlid": 0,
00:11:27.866    "method": "nvmf_create_subsystem",
00:11:27.866    "req_id": 1
00:11:27.866  }
00:11:27.866  Got JSON-RPC error response
00:11:27.866  response:
00:11:27.866  {
00:11:27.866    "code": -32602,
00:11:27.866    "message": "Invalid cntlid range [0-65519]"
00:11:27.866  }'
00:11:27.866   04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request:
00:11:27.866  {
00:11:27.866    "nqn": "nqn.2016-06.io.spdk:cnode31283",
00:11:27.866    "min_cntlid": 0,
00:11:27.866    "method": "nvmf_create_subsystem",
00:11:27.866    "req_id": 1
00:11:27.866  }
00:11:27.866  Got JSON-RPC error response
00:11:27.866  response:
00:11:27.866  {
00:11:27.866    "code": -32602,
00:11:27.866    "message": "Invalid cntlid range [0-65519]"
00:11:27.866  } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:11:27.866    04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8699 -i 65520
00:11:28.123  [2024-12-09 04:01:56.646405] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8699: invalid cntlid range [65520-65519]
00:11:28.123   04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request:
00:11:28.123  {
00:11:28.123    "nqn": "nqn.2016-06.io.spdk:cnode8699",
00:11:28.123    "min_cntlid": 65520,
00:11:28.123    "method": "nvmf_create_subsystem",
00:11:28.123    "req_id": 1
00:11:28.123  }
00:11:28.123  Got JSON-RPC error response
00:11:28.123  response:
00:11:28.123  {
00:11:28.123    "code": -32602,
00:11:28.123    "message": "Invalid cntlid range [65520-65519]"
00:11:28.123  }'
00:11:28.123   04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request:
00:11:28.123  {
00:11:28.123    "nqn": "nqn.2016-06.io.spdk:cnode8699",
00:11:28.123    "min_cntlid": 65520,
00:11:28.123    "method": "nvmf_create_subsystem",
00:11:28.123    "req_id": 1
00:11:28.123  }
00:11:28.123  Got JSON-RPC error response
00:11:28.123  response:
00:11:28.123  {
00:11:28.123    "code": -32602,
00:11:28.123    "message": "Invalid cntlid range [65520-65519]"
00:11:28.123  } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:11:28.123    04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26540 -I 0
00:11:28.380  [2024-12-09 04:01:56.923315] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26540: invalid cntlid range [1-0]
00:11:28.380   04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request:
00:11:28.380  {
00:11:28.380    "nqn": "nqn.2016-06.io.spdk:cnode26540",
00:11:28.380    "max_cntlid": 0,
00:11:28.380    "method": "nvmf_create_subsystem",
00:11:28.380    "req_id": 1
00:11:28.380  }
00:11:28.380  Got JSON-RPC error response
00:11:28.380  response:
00:11:28.380  {
00:11:28.380    "code": -32602,
00:11:28.380    "message": "Invalid cntlid range [1-0]"
00:11:28.380  }'
00:11:28.380   04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request:
00:11:28.380  {
00:11:28.380    "nqn": "nqn.2016-06.io.spdk:cnode26540",
00:11:28.380    "max_cntlid": 0,
00:11:28.380    "method": "nvmf_create_subsystem",
00:11:28.380    "req_id": 1
00:11:28.380  }
00:11:28.380  Got JSON-RPC error response
00:11:28.380  response:
00:11:28.380  {
00:11:28.380    "code": -32602,
00:11:28.380    "message": "Invalid cntlid range [1-0]"
00:11:28.380  } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:11:28.380    04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21269 -I 65520
00:11:28.638  [2024-12-09 04:01:57.184160] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21269: invalid cntlid range [1-65520]
00:11:28.638   04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request:
00:11:28.638  {
00:11:28.638    "nqn": "nqn.2016-06.io.spdk:cnode21269",
00:11:28.638    "max_cntlid": 65520,
00:11:28.638    "method": "nvmf_create_subsystem",
00:11:28.638    "req_id": 1
00:11:28.638  }
00:11:28.638  Got JSON-RPC error response
00:11:28.638  response:
00:11:28.638  {
00:11:28.638    "code": -32602,
00:11:28.638    "message": "Invalid cntlid range [1-65520]"
00:11:28.638  }'
00:11:28.638   04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request:
00:11:28.638  {
00:11:28.638    "nqn": "nqn.2016-06.io.spdk:cnode21269",
00:11:28.638    "max_cntlid": 65520,
00:11:28.638    "method": "nvmf_create_subsystem",
00:11:28.638    "req_id": 1
00:11:28.638  }
00:11:28.638  Got JSON-RPC error response
00:11:28.638  response:
00:11:28.638  {
00:11:28.638    "code": -32602,
00:11:28.638    "message": "Invalid cntlid range [1-65520]"
00:11:28.638  } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:11:28.638    04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5812 -i 6 -I 5
00:11:28.895  [2024-12-09 04:01:57.461126] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5812: invalid cntlid range [6-5]
00:11:29.152   04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request:
00:11:29.152  {
00:11:29.152    "nqn": "nqn.2016-06.io.spdk:cnode5812",
00:11:29.152    "min_cntlid": 6,
00:11:29.152    "max_cntlid": 5,
00:11:29.152    "method": "nvmf_create_subsystem",
00:11:29.152    "req_id": 1
00:11:29.152  }
00:11:29.152  Got JSON-RPC error response
00:11:29.152  response:
00:11:29.152  {
00:11:29.152    "code": -32602,
00:11:29.152    "message": "Invalid cntlid range [6-5]"
00:11:29.152  }'
00:11:29.152   04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request:
00:11:29.152  {
00:11:29.152    "nqn": "nqn.2016-06.io.spdk:cnode5812",
00:11:29.152    "min_cntlid": 6,
00:11:29.152    "max_cntlid": 5,
00:11:29.152    "method": "nvmf_create_subsystem",
00:11:29.152    "req_id": 1
00:11:29.152  }
00:11:29.152  Got JSON-RPC error response
00:11:29.152  response:
00:11:29.152  {
00:11:29.152    "code": -32602,
00:11:29.152    "message": "Invalid cntlid range [6-5]"
00:11:29.152  } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
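The five cntlid cases above probe the range validation in `rpc_nvmf_create_subsystem`: `min_cntlid` below 1, `max_cntlid` above 65519, and `min > max` are all rejected, with the failing pair echoed back as `Invalid cntlid range [min-max]` (unset values default to 1 and 65519, which is why `-i 0` reports `[0-65519]` and `-I 0` reports `[1-0]`). A hypothetical shell function reproducing that check, for illustration only (this is not SPDK's implementation):

```shell
# Illustrative re-statement of the cntlid bounds the log's errors imply:
# valid controller IDs for a subsystem are 1..65519, min <= max.
check_cntlid_range() {
    local min=$1 max=$2
    if (( min < 1 || max > 65519 || min > max )); then
        echo "Invalid cntlid range [$min-$max]"
        return 1
    fi
    echo "ok"
}
check_cntlid_range 6 5 || true    # matches the cnode5812 case above
check_cntlid_range 1 65519
```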
00:11:29.152    04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar
00:11:29.152   04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request:
00:11:29.152  {
00:11:29.152    "name": "foobar",
00:11:29.152    "method": "nvmf_delete_target",
00:11:29.152    "req_id": 1
00:11:29.152  }
00:11:29.152  Got JSON-RPC error response
00:11:29.152  response:
00:11:29.152  {
00:11:29.152    "code": -32602,
00:11:29.152    "message": "The specified target doesn'\''t exist, cannot delete it."
00:11:29.152  }'
00:11:29.152   04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request:
00:11:29.152  {
00:11:29.152    "name": "foobar",
00:11:29.152    "method": "nvmf_delete_target",
00:11:29.152    "req_id": 1
00:11:29.152  }
00:11:29.152  Got JSON-RPC error response
00:11:29.152  response:
00:11:29.152  {
00:11:29.152    "code": -32602,
00:11:29.152    "message": "The specified target doesn't exist, cannot delete it."
00:11:29.152  } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]]
00:11:29.152   04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT
00:11:29.152   04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini
00:11:29.152   04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:29.152   04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync
00:11:29.152   04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:29.152   04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e
00:11:29.152   04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:29.152   04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:29.152  rmmod nvme_tcp
00:11:29.152  rmmod nvme_fabrics
00:11:29.152  rmmod nvme_keyring
00:11:29.152   04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:29.152   04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e
00:11:29.152   04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0
00:11:29.152   04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 190290 ']'
00:11:29.152   04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 190290
00:11:29.152   04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 190290 ']'
00:11:29.152   04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 190290
00:11:29.152    04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname
00:11:29.152   04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:29.152    04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 190290
00:11:29.152   04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:29.152   04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:29.152   04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 190290'
00:11:29.152  killing process with pid 190290
00:11:29.152   04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 190290
00:11:29.152   04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 190290
00:11:29.409   04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:11:29.409   04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:11:29.409   04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:11:29.409   04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr
00:11:29.409   04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save
00:11:29.409   04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:11:29.410   04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore
00:11:29.410   04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:11:29.410   04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns
00:11:29.410   04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:29.410   04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:29.410    04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:31.948   04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:11:31.948  
00:11:31.948  real	0m9.200s
00:11:31.948  user	0m21.819s
00:11:31.948  sys	0m2.596s
00:11:31.948   04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:31.948   04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:11:31.948  ************************************
00:11:31.948  END TEST nvmf_invalid
00:11:31.948  ************************************
00:11:31.948   04:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:11:31.948   04:01:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:31.948   04:01:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:31.948   04:01:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:11:31.948  ************************************
00:11:31.948  START TEST nvmf_connect_stress
00:11:31.948  ************************************
00:11:31.948   04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:11:31.948  * Looking for test storage...
00:11:31.948  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:31.948     04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version
00:11:31.948     04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-:
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-:
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<'
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:31.948     04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1
00:11:31.948     04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1
00:11:31.948     04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:31.948     04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1
00:11:31.948     04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2
00:11:31.948     04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2
00:11:31.948     04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:31.948     04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:31.948  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:31.948  		--rc genhtml_branch_coverage=1
00:11:31.948  		--rc genhtml_function_coverage=1
00:11:31.948  		--rc genhtml_legend=1
00:11:31.948  		--rc geninfo_all_blocks=1
00:11:31.948  		--rc geninfo_unexecuted_blocks=1
00:11:31.948  		
00:11:31.948  		'
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:31.948  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:31.948  		--rc genhtml_branch_coverage=1
00:11:31.948  		--rc genhtml_function_coverage=1
00:11:31.948  		--rc genhtml_legend=1
00:11:31.948  		--rc geninfo_all_blocks=1
00:11:31.948  		--rc geninfo_unexecuted_blocks=1
00:11:31.948  		
00:11:31.948  		'
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:11:31.948  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:31.948  		--rc genhtml_branch_coverage=1
00:11:31.948  		--rc genhtml_function_coverage=1
00:11:31.948  		--rc genhtml_legend=1
00:11:31.948  		--rc geninfo_all_blocks=1
00:11:31.948  		--rc geninfo_unexecuted_blocks=1
00:11:31.948  		
00:11:31.948  		'
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:11:31.948  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:31.948  		--rc genhtml_branch_coverage=1
00:11:31.948  		--rc genhtml_function_coverage=1
00:11:31.948  		--rc genhtml_legend=1
00:11:31.948  		--rc geninfo_all_blocks=1
00:11:31.948  		--rc geninfo_unexecuted_blocks=1
00:11:31.948  		
00:11:31.948  		'
00:11:31.948   04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:11:31.948     04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:31.948     04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
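The variables traced above (`NVMF_PORT`, `NVME_HOSTNQN`, `NVME_HOST`, `NVME_CONNECT`, `NVME_SUBNQN`) exist so later test stages can compose an `nvme connect` invocation. A minimal sketch of that composition, using the values from this run; the command is only printed here, since no fabric target is assumed:

```shell
# Values mirror the log above; this builds the command string without running it.
NVMF_PORT=4420
NVME_SUBNQN="nqn.2016-06.io.spdk:testnqn"
NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55"
NVME_HOSTID="5b23e107-7094-e311-b1cb-001e67a97d55"
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
NVME_CONNECT='nvme connect'
# Compose (but do not execute) the connect command a later stage would issue.
cmd="$NVME_CONNECT -t tcp -a 10.0.0.2 -s $NVMF_PORT -n $NVME_SUBNQN ${NVME_HOST[*]}"
echo "$cmd"
```

The target address 10.0.0.2 is the one `nvmf_tcp_init` assigns later in this log.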
00:11:31.948    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:11:31.948     04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob
00:11:31.948     04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:31.948     04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:31.949     04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:31.949      04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:31.949      04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:31.949      04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:31.949      04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH
00:11:31.949      04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:31.949    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0
00:11:31.949    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:11:31.949    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:11:31.949    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:31.949    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:31.949    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:31.949    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:11:31.949  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:11:31.949    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:11:31.949    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:11:31.949    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0
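The `[: : integer expression expected` message a few lines above is a real (if harmless here) bash error: `common.sh@33` runs `'[' '' -eq 1 ']'`, and `-eq` requires both operands to be integers, so an empty variable trips the diagnostic and the test falls through to the `else` branch. A small reproduction plus the usual defensive pattern, defaulting the variable before the numeric comparison:

```shell
flag=''   # mirrors the empty variable at common.sh line 33
# [ "$flag" -eq 1 ] would print "integer expression expected", as in the log.
# Defensive variant: give the variable a numeric default first.
result=$(if [ "${flag:-0}" -eq 1 ]; then echo enabled; else echo disabled; fi)
echo "$result"   # prints: disabled
```

Because `[ ... ]` returns nonzero on the error anyway, the script's control flow is unaffected; the fix only silences the noise.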
00:11:31.949   04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit
00:11:31.949   04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:11:31.949   04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:31.949   04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs
00:11:31.949   04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no
00:11:31.949   04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns
00:11:31.949   04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:31.949   04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:31.949    04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:31.949   04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:11:31.949   04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:11:31.949   04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable
00:11:31.949   04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:33.854   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:11:33.854   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=()
00:11:33.854   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs
00:11:33.854   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=()
00:11:33.854   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:11:33.854   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=()
00:11:33.854   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers
00:11:33.854   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=()
00:11:33.854   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs
00:11:33.854   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=()
00:11:33.854   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810
00:11:33.854   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=()
00:11:33.854   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722
00:11:33.854   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=()
00:11:33.854   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx
00:11:33.854   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:11:33.854   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:11:33.854   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:11:33.854   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:11:33.854   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:11:33.854   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:11:33.854   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:11:33.854   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:11:33.854   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:11:33.854   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:11:33.854   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:11:33.854   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:11:33.854   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:11:33.854   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:11:33.854   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:11:33.855  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:11:33.855  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
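`gather_supported_nvmf_pci_devs` buckets NICs by `vendor:device` ID into the `e810`, `x722`, and `mlx` arrays (lines @325-@344 above), then picks one family as `pci_devs`. A hedged sketch of that classification as a standalone function, covering only the IDs visible in this trace:

```shell
# Classify a "vendor:device" pair the way the arrays above are keyed.
# Only the IDs seen in this log are handled; real common.sh lists more.
classify() {
    case "$1" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810
        0x8086:0x37d2)               echo x722 ;;    # Intel X722
        0x15b3:*)                    echo mlx ;;     # Mellanox family
        *)                           echo unknown ;;
    esac
}
classify 0x8086:0x159b   # the two devices found in this run → e810
```

This run found two `0x8086:0x159b` functions (`0000:0a:00.0` and `.1`), which is why the `[[ e810 == e810 ]]` branch selects the E810 list.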
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]]
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:11:33.855  Found net devices under 0000:0a:00.0: cvl_0_0
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]]
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:11:33.855  Found net devices under 0000:0a:00.1: cvl_0_1
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
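The lookup at `common.sh@411`/`@427` maps each PCI function to its kernel network interfaces by globbing `/sys/bus/pci/devices/<bdf>/net/*` and stripping the path. A sketch of the same two steps against a faked sysfs tree in a temp directory (so it runs without real hardware; the interface name matches this log):

```shell
# Fake the sysfs layout so the glob + strip can be demonstrated anywhere.
sysroot=$(mktemp -d)
mkdir -p "$sysroot/0000:0a:00.0/net/cvl_0_0"
pci_net_devs=("$sysroot/0000:0a:00.0/net/"*)      # glob, as at common.sh@411
pci_net_devs=("${pci_net_devs[@]##*/}")           # strip paths, as at @427
echo "${pci_net_devs[0]}"                         # prints: cvl_0_0
rm -rf "$sysroot"
```

On the real machine the same glob yields `cvl_0_0` and `cvl_0_1` for the two E810 functions, which is what the "Found net devices" lines report.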
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:11:33.855   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:34.143   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
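The `nvmf_tcp_init` sequence above isolates the target NIC in its own network namespace so initiator and target traffic traverse a real wire on one host. The same plumbing collected as a dry run (printed rather than executed, since it needs root and the actual NICs; interface and namespace names are this run's):

```shell
# Dry-run wrapper: swap `echo "+ $*"` for "$@" on a real system (root required).
run() { echo "+ $*"; }
ns=cvl_0_0_ns_spdk target_if=cvl_0_0 initiator_if=cvl_0_1
run ip netns add "$ns"
run ip link set "$target_if" netns "$ns"                       # target NIC into ns
run ip addr add 10.0.0.1/24 dev "$initiator_if"                # initiator side
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
run ip link set "$initiator_if" up
run ip netns exec "$ns" ip link set "$target_if" up
run ip netns exec "$ns" ip link set lo up
```

The two `ping -c 1` checks that follow in the log verify this plumbing in both directions before the target is started.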
00:11:34.143   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:34.143   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
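The `ipts` call above expands (at `common.sh@790`) into an `iptables` invocation tagged with an `SPDK_NVMF:` comment, so teardown can later find and delete exactly the rules the test added. A sketch of that tagging idea; this version echoes the command instead of executing it, unlike the real helper:

```shell
# Echo-only stand-in for the real ipts helper (which runs iptables as root):
# every rule is stamped with a comment recording its own arguments.
ipts() { echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

Cleanup can then do the reverse: list rules, grep for `SPDK_NVMF:`, and replay each match with `-D` instead of `-I`.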
00:11:34.143   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:11:34.143  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:34.143  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms
00:11:34.143  
00:11:34.143  --- 10.0.0.2 ping statistics ---
00:11:34.143  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:34.143  rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms
00:11:34.143   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:34.143  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:34.143  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms
00:11:34.143  
00:11:34.143  --- 10.0.0.1 ping statistics ---
00:11:34.143  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:34.143  rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms
00:11:34.143   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:34.143   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0
00:11:34.143   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:34.143   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:34.143   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:34.143   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:34.143   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:34.143   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:34.143   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:34.143   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:11:34.143   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:34.143   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:34.143   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:34.143   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=193049
00:11:34.143   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:11:34.143   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 193049
00:11:34.143   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 193049 ']'
00:11:34.143   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:34.143   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:34.143   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:34.143  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:34.143   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:34.143   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:34.143  [2024-12-09 04:02:02.526250] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:11:34.143  [2024-12-09 04:02:02.526340] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:34.143  [2024-12-09 04:02:02.595969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:11:34.143  [2024-12-09 04:02:02.650242] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:34.143  [2024-12-09 04:02:02.650306] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:34.143  [2024-12-09 04:02:02.650334] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:34.143  [2024-12-09 04:02:02.650345] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:34.143  [2024-12-09 04:02:02.650354] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:34.143  [2024-12-09 04:02:02.651856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:11:34.143  [2024-12-09 04:02:02.651918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:11:34.143  [2024-12-09 04:02:02.651922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
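The three reactors on cores 1, 2, and 3 follow directly from the `-m 0xE` core mask passed to `nvmf_tgt`: 0xE is binary 1110, so bits 1-3 are set and bit 0 (core 0) is left free. A quick decode of the mask:

```shell
mask=0xE      # the -m argument passed to nvmf_tgt above
cores=""
for core in $(seq 0 31); do
    # Test each bit of the mask; set bits are the cores SPDK will use.
    if (( (mask >> core) & 1 )); then cores="$cores$core "; fi
done
echo "${cores% }"   # prints: 1 2 3
```

"Total cores available: 3" in the app startup notice is this same count.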
00:11:34.403   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:34.403   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0
00:11:34.403   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:11:34.403   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:34.403   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:34.403   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:34.403   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:11:34.403   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:34.403   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:34.403  [2024-12-09 04:02:02.798410] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:34.403   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:34.403   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:11:34.403   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:34.403   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:34.403   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:34.403   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:34.403   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:34.403   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:34.403  [2024-12-09 04:02:02.815743] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:34.403   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:34.403   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:11:34.403   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:34.403   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:34.403  NULL1
00:11:34.403   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
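The `rpc_cmd` calls above perform the standard four-step target setup: create the TCP transport, create a subsystem, attach a listener, and back it with a null bdev. The same sequence as a standalone sketch, printed rather than sent, since `rpc.py` normally talks to a live target over `/var/tmp/spdk.sock` (the script path is this tree's convention, not re-derived):

```shell
# Echo-only stand-in: on a live system, replace `echo` with the real
# scripts/rpc.py from an SPDK checkout, pointed at the running nvmf_tgt.
rpc() { echo "rpc.py $*"; }
rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512   # 1000 MiB-scale null bdev, 512 B blocks
```

Once the listener is up (the "NVMe/TCP Target Listening" notice above), the `connect_stress` binary launched next can hammer the subsystem with connect/disconnect cycles.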
00:11:34.403   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=193074
00:11:34.403   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10
00:11:34.403   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:11:34.403   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:11:34.403    04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20
00:11:34.403   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:11:34.403   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:11:34.403   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:11:34.403   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:11:34.403   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:11:34.403   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:11:34.403   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:11:34.403   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:11:34.403   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:11:34.403   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:11:34.403   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074
00:11:34.403   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:34.403   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:34.403   04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:34.660   04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:34.660   04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074
00:11:34.660   04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:34.660   04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:34.660   04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:35.224   04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:35.224   04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074
00:11:35.224   04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:35.224   04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:35.224   04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:35.481   04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:35.481   04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074
00:11:35.481   04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:35.481   04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:35.481   04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:35.738   04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:35.738   04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074
00:11:35.738   04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:35.738   04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:35.738   04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:35.996   04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:35.996   04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074
00:11:35.996   04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:35.996   04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:35.996   04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:36.293   04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:36.293   04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074
00:11:36.293   04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:36.293   04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:36.293   04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:36.551   04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:36.551   04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074
00:11:36.551   04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:36.551   04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:36.551   04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:37.115   04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:37.115   04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074
00:11:37.115   04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:37.115   04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:37.115   04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:37.373   04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:37.373   04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074
00:11:37.373   04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:37.373   04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:37.373   04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:37.630   04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:37.630   04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074
00:11:37.630   04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:37.630   04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:37.630   04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:37.888   04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:37.888   04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074
00:11:37.888   04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:37.888   04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:37.888   04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:38.455   04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:38.455   04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074
00:11:38.455   04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:38.455   04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:38.455   04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:38.713   04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:38.713   04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074
00:11:38.713   04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:38.713   04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:38.713   04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:38.971   04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:38.971   04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074
00:11:38.971   04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:38.971   04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:38.971   04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:39.230   04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:39.230   04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074
00:11:39.230   04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:39.230   04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:39.230   04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:39.489   04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:39.489   04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074
00:11:39.489   04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:39.489   04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:39.489   04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:40.055   04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:40.055   04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074
00:11:40.055   04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:40.055   04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:40.055   04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:40.314   04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:40.314   04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074
00:11:40.314   04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:40.314   04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:40.314   04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:40.572   04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:40.572   04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074
00:11:40.572   04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:40.572   04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:40.572   04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:40.830   04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:40.830   04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074
00:11:40.830   04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:40.830   04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:40.830   04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:41.088   04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:41.088   04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074
00:11:41.088   04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:41.088   04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:41.088   04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:41.656   04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:41.656   04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074
00:11:41.656   04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:41.656   04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:41.656   04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:41.914   04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:41.914   04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074
00:11:41.914   04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:41.914   04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:41.914   04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:42.172   04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:42.172   04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074
00:11:42.172   04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:42.172   04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:42.172   04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:42.430   04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:42.430   04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074
00:11:42.430   04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:42.430   04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:42.430   04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:42.688   04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:42.688   04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074
00:11:42.688   04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:42.688   04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:42.688   04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:43.254   04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:43.254   04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074
00:11:43.254   04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:43.254   04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:43.254   04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:43.513   04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:43.513   04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074
00:11:43.513   04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:43.513   04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:43.513   04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:43.772   04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:43.772   04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074
00:11:43.772   04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:43.772   04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:43.772   04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:44.031   04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:44.031   04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074
00:11:44.031   04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:44.031   04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:44.031   04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:44.289   04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:44.289   04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074
00:11:44.289   04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:44.289   04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:44.289   04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
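The repeated `kill -0` / `rpc_cmd` groups above are iterations of a liveness poll: while the stress process is still running, the harness keeps issuing RPCs against the target. A minimal sketch of that pattern (hypothetical helper names, assuming bash; not the actual connect_stress.sh source):

```shell
#!/usr/bin/env bash
# Liveness poll: while the worker is still alive (kill -0 succeeds),
# keep doing periodic work against it; report how many polls ran.
poll_while_alive() {
    local pid=$1 polls=0
    while kill -0 "$pid" 2>/dev/null; do
        polls=$((polls + 1))                 # stand-in for the rpc_cmd call
        if (( polls > 100 )); then break; fi # safety cap for the sketch
        sleep 0.1
    done
    echo "$polls"
}

sleep 0.5 &                                  # stand-in for the stress process
poll_while_alive $!
```

`kill -0` sends no signal; it only checks that the PID still exists, which is why the trace ends with "No such process" once the stress run finishes.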
00:11:44.547  Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:44.806   04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:44.806   04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074
00:11:44.806  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (193074) - No such process
00:11:44.806   04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 193074
00:11:44.806   04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:11:44.806   04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:11:44.806   04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini
00:11:44.806   04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:44.806   04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync
00:11:44.806   04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:44.806   04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e
00:11:44.806   04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:44.806   04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:44.806  rmmod nvme_tcp
00:11:44.806  rmmod nvme_fabrics
00:11:44.806  rmmod nvme_keyring
00:11:44.806   04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:44.806   04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e
00:11:44.806   04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0
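The `set +e` / `for i in {1..20}` / `modprobe -v -r` / `set -e` sequence above is a bounded-retry pattern: errexit is suspended, a step that may fail while references drain is attempted a fixed number of times, then errexit is restored. A minimal sketch of the pattern (hypothetical helper name, assuming bash; the real nvmf/common.sh retries module unloads this way):

```shell
#!/usr/bin/env bash
# Bounded retry with errexit suspended: attempt the flaky step up to
# N times, then restore errexit and report the final status.
retry_step() {
    local attempts=$1; shift
    local i rc=1
    set +e
    for ((i = 1; i <= attempts; i++)); do
        if "$@"; then rc=0; break; fi
        sleep 0.1
    done
    set -e
    return "$rc"
}

retry_step 3 true && echo "step succeeded"
```

In the log the retried step is `modprobe -v -r nvme-tcp`, which needs root and a loaded module, so a neutral command stands in here.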
00:11:44.806   04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 193049 ']'
00:11:44.806   04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 193049
00:11:44.806   04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 193049 ']'
00:11:44.806   04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 193049
00:11:44.806    04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname
00:11:44.806   04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:44.806    04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 193049
00:11:44.806   04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:11:44.806   04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:11:44.806   04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 193049'
00:11:44.806  killing process with pid 193049
00:11:44.806   04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 193049
00:11:44.806   04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 193049
00:11:45.066   04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:11:45.066   04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:11:45.067   04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:11:45.067   04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr
00:11:45.067   04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save
00:11:45.067   04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:11:45.067   04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore
00:11:45.067   04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:11:45.067   04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:11:45.067   04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:45.067   04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:45.067    04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:46.977   04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:11:46.977  
00:11:46.977  real	0m15.508s
00:11:46.977  user	0m39.896s
00:11:46.977  sys	0m4.711s
00:11:46.977   04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:46.977   04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:46.977  ************************************
00:11:46.977  END TEST nvmf_connect_stress
00:11:46.977  ************************************
00:11:46.977   04:02:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:11:46.977   04:02:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:46.977   04:02:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:46.977   04:02:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:11:47.236  ************************************
00:11:47.236  START TEST nvmf_fused_ordering
00:11:47.236  ************************************
00:11:47.236   04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:11:47.236  * Looking for test storage...
00:11:47.236  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:11:47.236    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:47.236     04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version
00:11:47.236     04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:47.236    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:47.236    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:47.236    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:47.236    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:47.236    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-:
00:11:47.236    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1
00:11:47.236    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-:
00:11:47.236    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2
00:11:47.236    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<'
00:11:47.236    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2
00:11:47.236    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1
00:11:47.236    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:47.236    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in
00:11:47.236    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1
00:11:47.237    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:47.237    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:47.237     04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1
00:11:47.237     04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1
00:11:47.237     04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:47.237     04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1
00:11:47.237    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1
00:11:47.237     04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2
00:11:47.237     04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2
00:11:47.237     04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:47.237     04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2
00:11:47.237    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2
00:11:47.237    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:47.237    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:47.237    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0
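The `cmp_versions` trace above splits both version strings on `.`, `-`, or `:` with `IFS=.-: read -ra`, then walks the components numerically, padding the shorter array with zeros. A condensed sketch of the same idea (hypothetical function name; the real logic lives in scripts/common.sh):

```shell
#!/usr/bin/env bash
# Component-wise "less than" version test: split on '.', '-' or ':'
# and compare each numeric field, treating missing fields as 0.
version_lt() {
    local -a a b
    local i x y n
    IFS=.-: read -ra a <<< "$1"
    IFS=.-: read -ra b <<< "$2"
    n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
        x=${a[i]:-0} y=${b[i]:-0}
        if (( x < y )); then return 0; fi
        if (( x > y )); then return 1; fi
    done
    return 1   # versions are equal, so not strictly less-than
}

version_lt 1.15 2 && echo "1.15 is older than 2"
```

This is why the trace compares `1.15` against `2` field by field (1 vs 2) rather than lexically, where "1.15" < "2" would hold for the wrong reason.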
00:11:47.237    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:47.237    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:47.237  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:47.237  		--rc genhtml_branch_coverage=1
00:11:47.237  		--rc genhtml_function_coverage=1
00:11:47.237  		--rc genhtml_legend=1
00:11:47.237  		--rc geninfo_all_blocks=1
00:11:47.237  		--rc geninfo_unexecuted_blocks=1
00:11:47.237  		
00:11:47.237  		'
00:11:47.237    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:47.237  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:47.237  		--rc genhtml_branch_coverage=1
00:11:47.237  		--rc genhtml_function_coverage=1
00:11:47.237  		--rc genhtml_legend=1
00:11:47.237  		--rc geninfo_all_blocks=1
00:11:47.237  		--rc geninfo_unexecuted_blocks=1
00:11:47.237  		
00:11:47.237  		'
00:11:47.237    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:11:47.237  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:47.237  		--rc genhtml_branch_coverage=1
00:11:47.237  		--rc genhtml_function_coverage=1
00:11:47.237  		--rc genhtml_legend=1
00:11:47.237  		--rc geninfo_all_blocks=1
00:11:47.237  		--rc geninfo_unexecuted_blocks=1
00:11:47.237  		
00:11:47.237  		'
00:11:47.237    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:11:47.237  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:47.237  		--rc genhtml_branch_coverage=1
00:11:47.237  		--rc genhtml_function_coverage=1
00:11:47.237  		--rc genhtml_legend=1
00:11:47.237  		--rc geninfo_all_blocks=1
00:11:47.237  		--rc geninfo_unexecuted_blocks=1
00:11:47.237  		
00:11:47.237  		'
00:11:47.237   04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:11:47.237     04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s
00:11:47.237    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:47.237    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:47.237    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:47.237    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:47.237    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:47.237    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:47.237    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:47.237    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:47.237    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:47.237     04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:47.237    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:11:47.237    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:11:47.237    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:47.237    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:47.237    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:11:47.237    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:47.237    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:11:47.237     04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob
00:11:47.237     04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:47.237     04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:47.237     04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:47.237      04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:47.237      04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:47.237      04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:47.237      04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH
00:11:47.237      04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:47.237    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0
00:11:47.237    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:11:47.237    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:11:47.237    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:47.237    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:47.237    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:47.237    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:11:47.237  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:11:47.237    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:11:47.237    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:11:47.237    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0
00:11:47.237   04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit
00:11:47.237   04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:11:47.237   04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:47.237   04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs
00:11:47.237   04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no
00:11:47.237   04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns
00:11:47.237   04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:47.237   04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:47.237    04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:47.237   04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:11:47.237   04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:11:47.237   04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable
00:11:47.237   04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:11:49.772   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:11:49.772   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=()
00:11:49.772   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=()
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=()
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=()
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=()
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=()
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=()
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:11:49.773  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:11:49.773  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]]
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:11:49.773  Found net devices under 0000:0a:00.0: cvl_0_0
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]]
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:11:49.773  Found net devices under 0000:0a:00.1: cvl_0_1
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:11:49.773   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:11:49.774   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:11:49.774   04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:49.774   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:49.774   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:49.774   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:11:49.774   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:49.774   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:49.774   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:49.774   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:11:49.774   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:11:49.774  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:49.774  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms
00:11:49.774  
00:11:49.774  --- 10.0.0.2 ping statistics ---
00:11:49.774  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:49.774  rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms
00:11:49.774   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:49.774  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:49.774  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms
00:11:49.774  
00:11:49.774  --- 10.0.0.1 ping statistics ---
00:11:49.774  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:49.774  rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms
00:11:49.774   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:49.774   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0
00:11:49.774   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:49.774   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:49.774   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:49.774   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:49.774   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:49.774   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:49.774   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:49.774   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2
00:11:49.774   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:49.774   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:49.774   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:11:49.774   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=196235
00:11:49.774   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 196235
00:11:49.774   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 196235 ']'
00:11:49.774   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:49.774   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:49.774   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:49.774  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:49.774   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:11:49.774   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:49.774   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:11:49.774  [2024-12-09 04:02:18.249966] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:11:49.774  [2024-12-09 04:02:18.250049] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:49.774  [2024-12-09 04:02:18.323192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:50.033  [2024-12-09 04:02:18.383008] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:50.033  [2024-12-09 04:02:18.383056] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:50.033  [2024-12-09 04:02:18.383085] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:50.033  [2024-12-09 04:02:18.383096] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:50.033  [2024-12-09 04:02:18.383105] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:50.033  [2024-12-09 04:02:18.383770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:50.033   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:50.033   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0
00:11:50.033   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:11:50.033   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:50.033   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:11:50.033   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:50.033   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:11:50.033   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:50.033   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:11:50.033  [2024-12-09 04:02:18.520874] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:50.033   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:50.033   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:11:50.033   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:50.033   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:11:50.033   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:50.033   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:50.033   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:50.033   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:11:50.033  [2024-12-09 04:02:18.537099] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:50.033   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:50.033   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:11:50.033   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:50.033   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:11:50.033  NULL1
00:11:50.033   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:50.033   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:11:50.033   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:50.033   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:11:50.033   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:50.033   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:11:50.033   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:50.033   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:11:50.033   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:50.033   04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:11:50.033  [2024-12-09 04:02:18.580951] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:11:50.033  [2024-12-09 04:02:18.580985] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid196371 ]
00:11:50.599  Attached to nqn.2016-06.io.spdk:cnode1
00:11:50.599    Namespace ID: 1 size: 1GB
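The trace above can be reproduced by hand. This is a minimal sketch of the same setup steps the log shows, assuming a running SPDK nvmf target listening on 10.0.0.2:4420, `rpc.py` from the SPDK scripts directory on PATH, and the `fused_ordering` test binary built under `test/nvme/fused_ordering/`; the subsystem nqn.2016-06.io.spdk:cnode1 is assumed to already exist, as it does at this point in the test.

```shell
# Create a 1000 MiB null bdev with 512-byte blocks (matches fused_ordering.sh@18)
rpc.py bdev_null_create NULL1 1000 512

# Wait until bdev examination has settled before attaching the namespace
rpc.py bdev_wait_for_examine

# Expose NULL1 as namespace 1 of the existing subsystem
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# Run the fused-ordering exerciser against the TCP listener;
# it connects, reports the namespace, then issues numbered fused
# compare-and-write pairs (the fused_ordering(N) lines below)
./test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
```

These commands require a live SPDK target process, so they are shown as a command fragment rather than a standalone runnable script; paths and the `rpc.py` invocation style are assumptions based on standard SPDK layout.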
00:11:50.599  fused_ordering(0)
00:11:50.599  fused_ordering(1)
00:11:50.599  fused_ordering(2)
00:11:50.599  fused_ordering(3)
00:11:50.599  fused_ordering(4)
00:11:50.599  fused_ordering(5)
00:11:50.599  fused_ordering(6)
00:11:50.599  fused_ordering(7)
00:11:50.599  fused_ordering(8)
00:11:50.599  fused_ordering(9)
00:11:50.599  fused_ordering(10)
00:11:50.599  fused_ordering(11)
00:11:50.599  fused_ordering(12)
00:11:50.599  fused_ordering(13)
00:11:50.599  fused_ordering(14)
00:11:50.599  fused_ordering(15)
00:11:50.599  fused_ordering(16)
00:11:50.600  fused_ordering(17)
00:11:50.600  fused_ordering(18)
00:11:50.600  fused_ordering(19)
00:11:50.600  fused_ordering(20)
00:11:50.600  fused_ordering(21)
00:11:50.600  fused_ordering(22)
00:11:50.600  fused_ordering(23)
00:11:50.600  fused_ordering(24)
00:11:50.600  fused_ordering(25)
00:11:50.600  fused_ordering(26)
00:11:50.600  fused_ordering(27)
00:11:50.600  fused_ordering(28)
00:11:50.600  fused_ordering(29)
00:11:50.600  fused_ordering(30)
00:11:50.600  fused_ordering(31)
00:11:50.600  fused_ordering(32)
00:11:50.600  fused_ordering(33)
00:11:50.600  fused_ordering(34)
00:11:50.600  fused_ordering(35)
00:11:50.600  fused_ordering(36)
00:11:50.600  fused_ordering(37)
00:11:50.600  fused_ordering(38)
00:11:50.600  fused_ordering(39)
00:11:50.600  fused_ordering(40)
00:11:50.600  fused_ordering(41)
00:11:50.600  fused_ordering(42)
00:11:50.600  fused_ordering(43)
00:11:50.600  fused_ordering(44)
00:11:50.600  fused_ordering(45)
00:11:50.600  fused_ordering(46)
00:11:50.600  fused_ordering(47)
00:11:50.600  fused_ordering(48)
00:11:50.600  fused_ordering(49)
00:11:50.600  fused_ordering(50)
00:11:50.600  fused_ordering(51)
00:11:50.600  fused_ordering(52)
00:11:50.600  fused_ordering(53)
00:11:50.600  fused_ordering(54)
00:11:50.600  fused_ordering(55)
00:11:50.600  fused_ordering(56)
00:11:50.600  fused_ordering(57)
00:11:50.600  fused_ordering(58)
00:11:50.600  fused_ordering(59)
00:11:50.600  fused_ordering(60)
00:11:50.600  fused_ordering(61)
00:11:50.600  fused_ordering(62)
00:11:50.600  fused_ordering(63)
00:11:50.600  fused_ordering(64)
00:11:50.600  fused_ordering(65)
00:11:50.600  fused_ordering(66)
00:11:50.600  fused_ordering(67)
00:11:50.600  fused_ordering(68)
00:11:50.600  fused_ordering(69)
00:11:50.600  fused_ordering(70)
00:11:50.600  fused_ordering(71)
00:11:50.600  fused_ordering(72)
00:11:50.600  fused_ordering(73)
00:11:50.600  fused_ordering(74)
00:11:50.600  fused_ordering(75)
00:11:50.600  fused_ordering(76)
00:11:50.600  fused_ordering(77)
00:11:50.600  fused_ordering(78)
00:11:50.600  fused_ordering(79)
00:11:50.600  fused_ordering(80)
00:11:50.600  fused_ordering(81)
00:11:50.600  fused_ordering(82)
00:11:50.600  fused_ordering(83)
00:11:50.600  fused_ordering(84)
00:11:50.600  fused_ordering(85)
00:11:50.600  fused_ordering(86)
00:11:50.600  fused_ordering(87)
00:11:50.600  fused_ordering(88)
00:11:50.600  fused_ordering(89)
00:11:50.600  fused_ordering(90)
00:11:50.600  fused_ordering(91)
00:11:50.600  fused_ordering(92)
00:11:50.600  fused_ordering(93)
00:11:50.600  fused_ordering(94)
00:11:50.600  fused_ordering(95)
00:11:50.600  fused_ordering(96)
00:11:50.600  fused_ordering(97)
00:11:50.600  fused_ordering(98)
00:11:50.600  fused_ordering(99)
00:11:50.600  fused_ordering(100)
00:11:50.600  fused_ordering(101)
00:11:50.600  fused_ordering(102)
00:11:50.600  fused_ordering(103)
00:11:50.600  fused_ordering(104)
00:11:50.600  fused_ordering(105)
00:11:50.600  fused_ordering(106)
00:11:50.600  fused_ordering(107)
00:11:50.600  fused_ordering(108)
00:11:50.600  fused_ordering(109)
00:11:50.600  fused_ordering(110)
00:11:50.600  fused_ordering(111)
00:11:50.600  fused_ordering(112)
00:11:50.600  fused_ordering(113)
00:11:50.600  fused_ordering(114)
00:11:50.600  fused_ordering(115)
00:11:50.600  fused_ordering(116)
00:11:50.600  fused_ordering(117)
00:11:50.600  fused_ordering(118)
00:11:50.600  fused_ordering(119)
00:11:50.600  fused_ordering(120)
00:11:50.600  fused_ordering(121)
00:11:50.600  fused_ordering(122)
00:11:50.600  fused_ordering(123)
00:11:50.600  fused_ordering(124)
00:11:50.600  fused_ordering(125)
00:11:50.600  fused_ordering(126)
00:11:50.600  fused_ordering(127)
00:11:50.600  fused_ordering(128)
00:11:50.600  fused_ordering(129)
00:11:50.600  fused_ordering(130)
00:11:50.600  fused_ordering(131)
00:11:50.600  fused_ordering(132)
00:11:50.600  fused_ordering(133)
00:11:50.600  fused_ordering(134)
00:11:50.600  fused_ordering(135)
00:11:50.600  fused_ordering(136)
00:11:50.600  fused_ordering(137)
00:11:50.600  fused_ordering(138)
00:11:50.600  fused_ordering(139)
00:11:50.600  fused_ordering(140)
00:11:50.600  fused_ordering(141)
00:11:50.600  fused_ordering(142)
00:11:50.600  fused_ordering(143)
00:11:50.600  fused_ordering(144)
00:11:50.600  fused_ordering(145)
00:11:50.600  fused_ordering(146)
00:11:50.600  fused_ordering(147)
00:11:50.600  fused_ordering(148)
00:11:50.600  fused_ordering(149)
00:11:50.600  fused_ordering(150)
00:11:50.600  fused_ordering(151)
00:11:50.600  fused_ordering(152)
00:11:50.600  fused_ordering(153)
00:11:50.600  fused_ordering(154)
00:11:50.600  fused_ordering(155)
00:11:50.600  fused_ordering(156)
00:11:50.600  fused_ordering(157)
00:11:50.600  fused_ordering(158)
00:11:50.600  fused_ordering(159)
00:11:50.600  fused_ordering(160)
00:11:50.600  fused_ordering(161)
00:11:50.600  fused_ordering(162)
00:11:50.600  fused_ordering(163)
00:11:50.600  fused_ordering(164)
00:11:50.600  fused_ordering(165)
00:11:50.600  fused_ordering(166)
00:11:50.600  fused_ordering(167)
00:11:50.600  fused_ordering(168)
00:11:50.600  fused_ordering(169)
00:11:50.600  fused_ordering(170)
00:11:50.600  fused_ordering(171)
00:11:50.600  fused_ordering(172)
00:11:50.600  fused_ordering(173)
00:11:50.600  fused_ordering(174)
00:11:50.600  fused_ordering(175)
00:11:50.600  fused_ordering(176)
00:11:50.600  fused_ordering(177)
00:11:50.600  fused_ordering(178)
00:11:50.600  fused_ordering(179)
00:11:50.600  fused_ordering(180)
00:11:50.600  fused_ordering(181)
00:11:50.600  fused_ordering(182)
00:11:50.600  fused_ordering(183)
00:11:50.600  fused_ordering(184)
00:11:50.600  fused_ordering(185)
00:11:50.600  fused_ordering(186)
00:11:50.600  fused_ordering(187)
00:11:50.600  fused_ordering(188)
00:11:50.600  fused_ordering(189)
00:11:50.600  fused_ordering(190)
00:11:50.600  fused_ordering(191)
00:11:50.600  fused_ordering(192)
00:11:50.600  fused_ordering(193)
00:11:50.600  fused_ordering(194)
00:11:50.600  fused_ordering(195)
00:11:50.600  fused_ordering(196)
00:11:50.600  fused_ordering(197)
00:11:50.600  fused_ordering(198)
00:11:50.600  fused_ordering(199)
00:11:50.600  fused_ordering(200)
00:11:50.600  fused_ordering(201)
00:11:50.600  fused_ordering(202)
00:11:50.600  fused_ordering(203)
00:11:50.600  fused_ordering(204)
00:11:50.600  fused_ordering(205)
00:11:50.859  fused_ordering(206)
00:11:50.859  fused_ordering(207)
00:11:50.859  fused_ordering(208)
00:11:50.859  fused_ordering(209)
00:11:50.859  fused_ordering(210)
00:11:50.859  fused_ordering(211)
00:11:50.859  fused_ordering(212)
00:11:50.859  fused_ordering(213)
00:11:50.859  fused_ordering(214)
00:11:50.859  fused_ordering(215)
00:11:50.859  fused_ordering(216)
00:11:50.859  fused_ordering(217)
00:11:50.859  fused_ordering(218)
00:11:50.859  fused_ordering(219)
00:11:50.859  fused_ordering(220)
00:11:50.859  fused_ordering(221)
00:11:50.859  fused_ordering(222)
00:11:50.859  fused_ordering(223)
00:11:50.859  fused_ordering(224)
00:11:50.859  fused_ordering(225)
00:11:50.859  fused_ordering(226)
00:11:50.859  fused_ordering(227)
00:11:50.859  fused_ordering(228)
00:11:50.859  fused_ordering(229)
00:11:50.859  fused_ordering(230)
00:11:50.859  fused_ordering(231)
00:11:50.859  fused_ordering(232)
00:11:50.859  fused_ordering(233)
00:11:50.859  fused_ordering(234)
00:11:50.859  fused_ordering(235)
00:11:50.859  fused_ordering(236)
00:11:50.859  fused_ordering(237)
00:11:50.859  fused_ordering(238)
00:11:50.859  fused_ordering(239)
00:11:50.859  fused_ordering(240)
00:11:50.859  fused_ordering(241)
00:11:50.859  fused_ordering(242)
00:11:50.859  fused_ordering(243)
00:11:50.859  fused_ordering(244)
00:11:50.859  fused_ordering(245)
00:11:50.859  fused_ordering(246)
00:11:50.859  fused_ordering(247)
00:11:50.859  fused_ordering(248)
00:11:50.859  fused_ordering(249)
00:11:50.859  fused_ordering(250)
00:11:50.859  fused_ordering(251)
00:11:50.859  fused_ordering(252)
00:11:50.859  fused_ordering(253)
00:11:50.859  fused_ordering(254)
00:11:50.859  fused_ordering(255)
00:11:50.859  fused_ordering(256)
00:11:50.859  fused_ordering(257)
00:11:50.859  fused_ordering(258)
00:11:50.859  fused_ordering(259)
00:11:50.859  fused_ordering(260)
00:11:50.859  fused_ordering(261)
00:11:50.859  fused_ordering(262)
00:11:50.859  fused_ordering(263)
00:11:50.859  fused_ordering(264)
00:11:50.859  fused_ordering(265)
00:11:50.859  fused_ordering(266)
00:11:50.859  fused_ordering(267)
00:11:50.859  fused_ordering(268)
00:11:50.859  fused_ordering(269)
00:11:50.859  fused_ordering(270)
00:11:50.859  fused_ordering(271)
00:11:50.859  fused_ordering(272)
00:11:50.859  fused_ordering(273)
00:11:50.859  fused_ordering(274)
00:11:50.859  fused_ordering(275)
00:11:50.859  fused_ordering(276)
00:11:50.859  fused_ordering(277)
00:11:50.859  fused_ordering(278)
00:11:50.859  fused_ordering(279)
00:11:50.859  fused_ordering(280)
00:11:50.859  fused_ordering(281)
00:11:50.859  fused_ordering(282)
00:11:50.859  fused_ordering(283)
00:11:50.859  fused_ordering(284)
00:11:50.859  fused_ordering(285)
00:11:50.859  fused_ordering(286)
00:11:50.859  fused_ordering(287)
00:11:50.859  fused_ordering(288)
00:11:50.859  fused_ordering(289)
00:11:50.859  fused_ordering(290)
00:11:50.859  fused_ordering(291)
00:11:50.859  fused_ordering(292)
00:11:50.859  fused_ordering(293)
00:11:50.859  fused_ordering(294)
00:11:50.859  fused_ordering(295)
00:11:50.859  fused_ordering(296)
00:11:50.859  fused_ordering(297)
00:11:50.859  fused_ordering(298)
00:11:50.859  fused_ordering(299)
00:11:50.859  fused_ordering(300)
00:11:50.859  fused_ordering(301)
00:11:50.859  fused_ordering(302)
00:11:50.859  fused_ordering(303)
00:11:50.859  fused_ordering(304)
00:11:50.859  fused_ordering(305)
00:11:50.859  fused_ordering(306)
00:11:50.859  fused_ordering(307)
00:11:50.859  fused_ordering(308)
00:11:50.859  fused_ordering(309)
00:11:50.859  fused_ordering(310)
00:11:50.859  fused_ordering(311)
00:11:50.859  fused_ordering(312)
00:11:50.859  fused_ordering(313)
00:11:50.859  fused_ordering(314)
00:11:50.859  fused_ordering(315)
00:11:50.859  fused_ordering(316)
00:11:50.859  fused_ordering(317)
00:11:50.859  fused_ordering(318)
00:11:50.859  fused_ordering(319)
00:11:50.859  fused_ordering(320)
00:11:50.859  fused_ordering(321)
00:11:50.859  fused_ordering(322)
00:11:50.859  fused_ordering(323)
00:11:50.859  fused_ordering(324)
00:11:50.859  fused_ordering(325)
00:11:50.859  fused_ordering(326)
00:11:50.859  fused_ordering(327)
00:11:50.859  fused_ordering(328)
00:11:50.859  fused_ordering(329)
00:11:50.859  fused_ordering(330)
00:11:50.859  fused_ordering(331)
00:11:50.859  fused_ordering(332)
00:11:50.859  fused_ordering(333)
00:11:50.859  fused_ordering(334)
00:11:50.859  fused_ordering(335)
00:11:50.859  fused_ordering(336)
00:11:50.859  fused_ordering(337)
00:11:50.859  fused_ordering(338)
00:11:50.859  fused_ordering(339)
00:11:50.859  fused_ordering(340)
00:11:50.859  fused_ordering(341)
00:11:50.859  fused_ordering(342)
00:11:50.859  fused_ordering(343)
00:11:50.859  fused_ordering(344)
00:11:50.859  fused_ordering(345)
00:11:50.859  fused_ordering(346)
00:11:50.859  fused_ordering(347)
00:11:50.859  fused_ordering(348)
00:11:50.859  fused_ordering(349)
00:11:50.859  fused_ordering(350)
00:11:50.859  fused_ordering(351)
00:11:50.859  fused_ordering(352)
00:11:50.859  fused_ordering(353)
00:11:50.859  fused_ordering(354)
00:11:50.859  fused_ordering(355)
00:11:50.859  fused_ordering(356)
00:11:50.859  fused_ordering(357)
00:11:50.859  fused_ordering(358)
00:11:50.859  fused_ordering(359)
00:11:50.859  fused_ordering(360)
00:11:50.859  fused_ordering(361)
00:11:50.859  fused_ordering(362)
00:11:50.859  fused_ordering(363)
00:11:50.859  fused_ordering(364)
00:11:50.859  fused_ordering(365)
00:11:50.859  fused_ordering(366)
00:11:50.859  fused_ordering(367)
00:11:50.859  fused_ordering(368)
00:11:50.859  fused_ordering(369)
00:11:50.859  fused_ordering(370)
00:11:50.859  fused_ordering(371)
00:11:50.859  fused_ordering(372)
00:11:50.859  fused_ordering(373)
00:11:50.859  fused_ordering(374)
00:11:50.859  fused_ordering(375)
00:11:50.859  fused_ordering(376)
00:11:50.859  fused_ordering(377)
00:11:50.859  fused_ordering(378)
00:11:50.859  fused_ordering(379)
00:11:50.859  fused_ordering(380)
00:11:50.860  fused_ordering(381)
00:11:50.860  fused_ordering(382)
00:11:50.860  fused_ordering(383)
00:11:50.860  fused_ordering(384)
00:11:50.860  fused_ordering(385)
00:11:50.860  fused_ordering(386)
00:11:50.860  fused_ordering(387)
00:11:50.860  fused_ordering(388)
00:11:50.860  fused_ordering(389)
00:11:50.860  fused_ordering(390)
00:11:50.860  fused_ordering(391)
00:11:50.860  fused_ordering(392)
00:11:50.860  fused_ordering(393)
00:11:50.860  fused_ordering(394)
00:11:50.860  fused_ordering(395)
00:11:50.860  fused_ordering(396)
00:11:50.860  fused_ordering(397)
00:11:50.860  fused_ordering(398)
00:11:50.860  fused_ordering(399)
00:11:50.860  fused_ordering(400)
00:11:50.860  fused_ordering(401)
00:11:50.860  fused_ordering(402)
00:11:50.860  fused_ordering(403)
00:11:50.860  fused_ordering(404)
00:11:50.860  fused_ordering(405)
00:11:50.860  fused_ordering(406)
00:11:50.860  fused_ordering(407)
00:11:50.860  fused_ordering(408)
00:11:50.860  fused_ordering(409)
00:11:50.860  fused_ordering(410)
00:11:51.118  fused_ordering(411)
00:11:51.118  fused_ordering(412)
00:11:51.118  fused_ordering(413)
00:11:51.118  fused_ordering(414)
00:11:51.118  fused_ordering(415)
00:11:51.118  fused_ordering(416)
00:11:51.118  fused_ordering(417)
00:11:51.118  fused_ordering(418)
00:11:51.118  fused_ordering(419)
00:11:51.118  fused_ordering(420)
00:11:51.118  fused_ordering(421)
00:11:51.118  fused_ordering(422)
00:11:51.118  fused_ordering(423)
00:11:51.118  fused_ordering(424)
00:11:51.118  fused_ordering(425)
00:11:51.118  fused_ordering(426)
00:11:51.118  fused_ordering(427)
00:11:51.118  fused_ordering(428)
00:11:51.118  fused_ordering(429)
00:11:51.118  fused_ordering(430)
00:11:51.118  fused_ordering(431)
00:11:51.118  fused_ordering(432)
00:11:51.118  fused_ordering(433)
00:11:51.118  fused_ordering(434)
00:11:51.118  fused_ordering(435)
00:11:51.118  fused_ordering(436)
00:11:51.118  fused_ordering(437)
00:11:51.118  fused_ordering(438)
00:11:51.118  fused_ordering(439)
00:11:51.118  fused_ordering(440)
00:11:51.118  fused_ordering(441)
00:11:51.118  fused_ordering(442)
00:11:51.118  fused_ordering(443)
00:11:51.118  fused_ordering(444)
00:11:51.118  fused_ordering(445)
00:11:51.118  fused_ordering(446)
00:11:51.118  fused_ordering(447)
00:11:51.118  fused_ordering(448)
00:11:51.118  fused_ordering(449)
00:11:51.118  fused_ordering(450)
00:11:51.118  fused_ordering(451)
00:11:51.118  fused_ordering(452)
00:11:51.118  fused_ordering(453)
00:11:51.118  fused_ordering(454)
00:11:51.118  fused_ordering(455)
00:11:51.118  fused_ordering(456)
00:11:51.118  fused_ordering(457)
00:11:51.118  fused_ordering(458)
00:11:51.118  fused_ordering(459)
00:11:51.118  fused_ordering(460)
00:11:51.118  fused_ordering(461)
00:11:51.118  fused_ordering(462)
00:11:51.118  fused_ordering(463)
00:11:51.118  fused_ordering(464)
00:11:51.118  fused_ordering(465)
00:11:51.118  fused_ordering(466)
00:11:51.118  fused_ordering(467)
00:11:51.118  fused_ordering(468)
00:11:51.118  fused_ordering(469)
00:11:51.118  fused_ordering(470)
00:11:51.118  fused_ordering(471)
00:11:51.118  fused_ordering(472)
00:11:51.118  fused_ordering(473)
00:11:51.118  fused_ordering(474)
00:11:51.118  fused_ordering(475)
00:11:51.118  fused_ordering(476)
00:11:51.118  fused_ordering(477)
00:11:51.118  fused_ordering(478)
00:11:51.118  fused_ordering(479)
00:11:51.118  fused_ordering(480)
00:11:51.118  fused_ordering(481)
00:11:51.118  fused_ordering(482)
00:11:51.118  fused_ordering(483)
00:11:51.118  fused_ordering(484)
00:11:51.118  fused_ordering(485)
00:11:51.118  fused_ordering(486)
00:11:51.118  fused_ordering(487)
00:11:51.118  fused_ordering(488)
00:11:51.118  fused_ordering(489)
00:11:51.118  fused_ordering(490)
00:11:51.118  fused_ordering(491)
00:11:51.118  fused_ordering(492)
00:11:51.118  fused_ordering(493)
00:11:51.118  fused_ordering(494)
00:11:51.118  fused_ordering(495)
00:11:51.118  fused_ordering(496)
00:11:51.118  fused_ordering(497)
00:11:51.118  fused_ordering(498)
00:11:51.118  fused_ordering(499)
00:11:51.118  fused_ordering(500)
00:11:51.118  fused_ordering(501)
00:11:51.118  fused_ordering(502)
00:11:51.118  fused_ordering(503)
00:11:51.118  fused_ordering(504)
00:11:51.118  fused_ordering(505)
00:11:51.118  fused_ordering(506)
00:11:51.118  fused_ordering(507)
00:11:51.118  fused_ordering(508)
00:11:51.118  fused_ordering(509)
00:11:51.118  fused_ordering(510)
00:11:51.118  fused_ordering(511)
00:11:51.118  fused_ordering(512)
00:11:51.118  fused_ordering(513)
00:11:51.118  fused_ordering(514)
00:11:51.118  fused_ordering(515)
00:11:51.118  fused_ordering(516)
00:11:51.118  fused_ordering(517)
00:11:51.118  fused_ordering(518)
00:11:51.118  fused_ordering(519)
00:11:51.118  fused_ordering(520)
00:11:51.118  fused_ordering(521)
00:11:51.118  fused_ordering(522)
00:11:51.118  fused_ordering(523)
00:11:51.118  fused_ordering(524)
00:11:51.118  fused_ordering(525)
00:11:51.118  fused_ordering(526)
00:11:51.118  fused_ordering(527)
00:11:51.118  fused_ordering(528)
00:11:51.118  fused_ordering(529)
00:11:51.118  fused_ordering(530)
00:11:51.118  fused_ordering(531)
00:11:51.118  fused_ordering(532)
00:11:51.118  fused_ordering(533)
00:11:51.118  fused_ordering(534)
00:11:51.118  fused_ordering(535)
00:11:51.118  fused_ordering(536)
00:11:51.118  fused_ordering(537)
00:11:51.118  fused_ordering(538)
00:11:51.118  fused_ordering(539)
00:11:51.118  fused_ordering(540)
00:11:51.118  fused_ordering(541)
00:11:51.118  fused_ordering(542)
00:11:51.118  fused_ordering(543)
00:11:51.118  fused_ordering(544)
00:11:51.118  fused_ordering(545)
00:11:51.118  fused_ordering(546)
00:11:51.118  fused_ordering(547)
00:11:51.118  fused_ordering(548)
00:11:51.118  fused_ordering(549)
00:11:51.118  fused_ordering(550)
00:11:51.118  fused_ordering(551)
00:11:51.118  fused_ordering(552)
00:11:51.118  fused_ordering(553)
00:11:51.118  fused_ordering(554)
00:11:51.118  fused_ordering(555)
00:11:51.118  fused_ordering(556)
00:11:51.118  fused_ordering(557)
00:11:51.118  fused_ordering(558)
00:11:51.118  fused_ordering(559)
00:11:51.118  fused_ordering(560)
00:11:51.118  fused_ordering(561)
00:11:51.118  fused_ordering(562)
00:11:51.118  fused_ordering(563)
00:11:51.118  fused_ordering(564)
00:11:51.118  fused_ordering(565)
00:11:51.118  fused_ordering(566)
00:11:51.118  fused_ordering(567)
00:11:51.118  fused_ordering(568)
00:11:51.118  fused_ordering(569)
00:11:51.118  fused_ordering(570)
00:11:51.118  fused_ordering(571)
00:11:51.118  fused_ordering(572)
00:11:51.118  fused_ordering(573)
00:11:51.118  fused_ordering(574)
00:11:51.118  fused_ordering(575)
00:11:51.118  fused_ordering(576)
00:11:51.118  fused_ordering(577)
00:11:51.118  fused_ordering(578)
00:11:51.118  fused_ordering(579)
00:11:51.118  fused_ordering(580)
00:11:51.118  fused_ordering(581)
00:11:51.118  fused_ordering(582)
00:11:51.118  fused_ordering(583)
00:11:51.118  fused_ordering(584)
00:11:51.118  fused_ordering(585)
00:11:51.118  fused_ordering(586)
00:11:51.118  fused_ordering(587)
00:11:51.119  fused_ordering(588)
00:11:51.119  fused_ordering(589)
00:11:51.119  fused_ordering(590)
00:11:51.119  fused_ordering(591)
00:11:51.119  fused_ordering(592)
00:11:51.119  fused_ordering(593)
00:11:51.119  fused_ordering(594)
00:11:51.119  fused_ordering(595)
00:11:51.119  fused_ordering(596)
00:11:51.119  fused_ordering(597)
00:11:51.119  fused_ordering(598)
00:11:51.119  fused_ordering(599)
00:11:51.119  fused_ordering(600)
00:11:51.119  fused_ordering(601)
00:11:51.119  fused_ordering(602)
00:11:51.119  fused_ordering(603)
00:11:51.119  fused_ordering(604)
00:11:51.119  fused_ordering(605)
00:11:51.119  fused_ordering(606)
00:11:51.119  fused_ordering(607)
00:11:51.119  fused_ordering(608)
00:11:51.119  fused_ordering(609)
00:11:51.119  fused_ordering(610)
00:11:51.119  fused_ordering(611)
00:11:51.119  fused_ordering(612)
00:11:51.119  fused_ordering(613)
00:11:51.119  fused_ordering(614)
00:11:51.119  fused_ordering(615)
00:11:51.682  fused_ordering(616)
00:11:51.683  fused_ordering(617)
00:11:51.683  fused_ordering(618)
00:11:51.683  fused_ordering(619)
00:11:51.683  fused_ordering(620)
00:11:51.683  fused_ordering(621)
00:11:51.683  fused_ordering(622)
00:11:51.683  fused_ordering(623)
00:11:51.683  fused_ordering(624)
00:11:51.683  fused_ordering(625)
00:11:51.683  fused_ordering(626)
00:11:51.683  fused_ordering(627)
00:11:51.683  fused_ordering(628)
00:11:51.683  fused_ordering(629)
00:11:51.683  fused_ordering(630)
00:11:51.683  fused_ordering(631)
00:11:51.683  fused_ordering(632)
00:11:51.683  fused_ordering(633)
00:11:51.683  fused_ordering(634)
00:11:51.683  fused_ordering(635)
00:11:51.683  fused_ordering(636)
00:11:51.683  fused_ordering(637)
00:11:51.683  fused_ordering(638)
00:11:51.683  fused_ordering(639)
00:11:51.683  fused_ordering(640)
00:11:51.683  fused_ordering(641)
00:11:51.683  fused_ordering(642)
00:11:51.683  fused_ordering(643)
00:11:51.683  fused_ordering(644)
00:11:51.683  fused_ordering(645)
00:11:51.683  fused_ordering(646)
00:11:51.683  fused_ordering(647)
00:11:51.683  fused_ordering(648)
00:11:51.683  fused_ordering(649)
00:11:51.683  fused_ordering(650)
00:11:51.683  fused_ordering(651)
00:11:51.683  fused_ordering(652)
00:11:51.683  fused_ordering(653)
00:11:51.683  fused_ordering(654)
00:11:51.683  fused_ordering(655)
00:11:51.683  fused_ordering(656)
00:11:51.683  fused_ordering(657)
00:11:51.683  fused_ordering(658)
00:11:51.683  fused_ordering(659)
00:11:51.683  fused_ordering(660)
00:11:51.683  fused_ordering(661)
00:11:51.683  fused_ordering(662)
00:11:51.683  fused_ordering(663)
00:11:51.683  fused_ordering(664)
00:11:51.683  fused_ordering(665)
00:11:51.683  fused_ordering(666)
00:11:51.683  fused_ordering(667)
00:11:51.683  fused_ordering(668)
00:11:51.683  fused_ordering(669)
00:11:51.683  fused_ordering(670)
00:11:51.683  fused_ordering(671)
00:11:51.683  fused_ordering(672)
00:11:51.683  fused_ordering(673)
00:11:51.683  fused_ordering(674)
00:11:51.683  fused_ordering(675)
00:11:51.683  fused_ordering(676)
00:11:51.683  fused_ordering(677)
00:11:51.683  fused_ordering(678)
00:11:51.683  fused_ordering(679)
00:11:51.683  fused_ordering(680)
00:11:51.683  fused_ordering(681)
00:11:51.683  fused_ordering(682)
00:11:51.683  fused_ordering(683)
00:11:51.683  fused_ordering(684)
00:11:51.683  fused_ordering(685)
00:11:51.683  fused_ordering(686)
00:11:51.683  fused_ordering(687)
00:11:51.683  fused_ordering(688)
00:11:51.683  fused_ordering(689)
00:11:51.683  fused_ordering(690)
00:11:51.683  fused_ordering(691)
00:11:51.683  fused_ordering(692)
00:11:51.683  fused_ordering(693)
00:11:51.683  fused_ordering(694)
00:11:51.683  fused_ordering(695)
00:11:51.683  fused_ordering(696)
00:11:51.683  fused_ordering(697)
00:11:51.683  fused_ordering(698)
00:11:51.683  fused_ordering(699)
00:11:51.683  fused_ordering(700)
00:11:51.683  fused_ordering(701)
00:11:51.683  fused_ordering(702)
00:11:51.683  fused_ordering(703)
00:11:51.683  fused_ordering(704)
00:11:51.683  fused_ordering(705)
00:11:51.683  fused_ordering(706)
00:11:51.683  fused_ordering(707)
00:11:51.683  fused_ordering(708)
00:11:51.683  fused_ordering(709)
00:11:51.683  fused_ordering(710)
00:11:51.683  fused_ordering(711)
00:11:51.683  fused_ordering(712)
00:11:51.683  fused_ordering(713)
00:11:51.683  fused_ordering(714)
00:11:51.683  fused_ordering(715)
00:11:51.683  fused_ordering(716)
00:11:51.683  fused_ordering(717)
00:11:51.683  fused_ordering(718)
00:11:51.683  fused_ordering(719)
00:11:51.683  fused_ordering(720)
00:11:51.683  fused_ordering(721)
00:11:51.683  fused_ordering(722)
00:11:51.683  fused_ordering(723)
00:11:51.683  fused_ordering(724)
00:11:51.683  fused_ordering(725)
00:11:51.683  fused_ordering(726)
00:11:51.683  fused_ordering(727)
00:11:51.683  fused_ordering(728)
00:11:51.683  fused_ordering(729)
00:11:51.683  fused_ordering(730)
00:11:51.683  fused_ordering(731)
00:11:51.683  fused_ordering(732)
00:11:51.683  fused_ordering(733)
00:11:51.683  fused_ordering(734)
00:11:51.683  fused_ordering(735)
00:11:51.683  fused_ordering(736)
00:11:51.683  fused_ordering(737)
00:11:51.683  fused_ordering(738)
00:11:51.683  fused_ordering(739)
00:11:51.683  fused_ordering(740)
00:11:51.683  fused_ordering(741)
00:11:51.683  fused_ordering(742)
00:11:51.683  fused_ordering(743)
00:11:51.683  fused_ordering(744)
00:11:51.683  fused_ordering(745)
00:11:51.683  fused_ordering(746)
00:11:51.683  fused_ordering(747)
00:11:51.683  fused_ordering(748)
00:11:51.683  fused_ordering(749)
00:11:51.683  fused_ordering(750)
00:11:51.683  fused_ordering(751)
00:11:51.683  fused_ordering(752)
00:11:51.683  fused_ordering(753)
00:11:51.683  fused_ordering(754)
00:11:51.683  fused_ordering(755)
00:11:51.683  fused_ordering(756)
00:11:51.683  fused_ordering(757)
00:11:51.683  fused_ordering(758)
00:11:51.683  fused_ordering(759)
00:11:51.683  fused_ordering(760)
00:11:51.683  fused_ordering(761)
00:11:51.683  fused_ordering(762)
00:11:51.683  fused_ordering(763)
00:11:51.683  fused_ordering(764)
00:11:51.683  fused_ordering(765)
00:11:51.683  fused_ordering(766)
00:11:51.683  fused_ordering(767)
00:11:51.683  fused_ordering(768)
00:11:51.683  fused_ordering(769)
00:11:51.683  fused_ordering(770)
00:11:51.683  fused_ordering(771)
00:11:51.683  fused_ordering(772)
00:11:51.683  fused_ordering(773)
00:11:51.683  fused_ordering(774)
00:11:51.683  fused_ordering(775)
00:11:51.683  fused_ordering(776)
00:11:51.683  fused_ordering(777)
00:11:51.683  fused_ordering(778)
00:11:51.683  fused_ordering(779)
00:11:51.683  fused_ordering(780)
00:11:51.683  fused_ordering(781)
00:11:51.683  fused_ordering(782)
00:11:51.683  fused_ordering(783)
00:11:51.683  fused_ordering(784)
00:11:51.683  fused_ordering(785)
00:11:51.683  fused_ordering(786)
00:11:51.683  fused_ordering(787)
00:11:51.683  fused_ordering(788)
00:11:51.683  fused_ordering(789)
00:11:51.683  fused_ordering(790)
00:11:51.683  fused_ordering(791)
00:11:51.683  fused_ordering(792)
00:11:51.683  fused_ordering(793)
00:11:51.683  fused_ordering(794)
00:11:51.683  fused_ordering(795)
00:11:51.683  fused_ordering(796)
00:11:51.683  fused_ordering(797)
00:11:51.683  fused_ordering(798)
00:11:51.683  fused_ordering(799)
00:11:51.683  fused_ordering(800)
00:11:51.683  fused_ordering(801)
00:11:51.683  fused_ordering(802)
00:11:51.683  fused_ordering(803)
00:11:51.683  fused_ordering(804)
00:11:51.683  fused_ordering(805)
00:11:51.683  fused_ordering(806)
00:11:51.683  fused_ordering(807)
00:11:51.683  fused_ordering(808)
00:11:51.683  fused_ordering(809)
00:11:51.683  fused_ordering(810)
00:11:51.683  fused_ordering(811)
00:11:51.683  fused_ordering(812)
00:11:51.683  fused_ordering(813)
00:11:51.683  fused_ordering(814)
00:11:51.683  fused_ordering(815)
00:11:51.683  fused_ordering(816)
00:11:51.683  fused_ordering(817)
00:11:51.683  fused_ordering(818)
00:11:51.683  fused_ordering(819)
00:11:51.683  fused_ordering(820)
00:11:52.249  fused_ordering(821)
00:11:52.249  fused_ordering(822)
00:11:52.249  fused_ordering(823)
00:11:52.249  fused_ordering(824)
00:11:52.249  fused_ordering(825)
00:11:52.249  fused_ordering(826)
00:11:52.249  fused_ordering(827)
00:11:52.249  fused_ordering(828)
00:11:52.249  fused_ordering(829)
00:11:52.249  fused_ordering(830)
00:11:52.249  fused_ordering(831)
00:11:52.249  fused_ordering(832)
00:11:52.249  fused_ordering(833)
00:11:52.249  fused_ordering(834)
00:11:52.249  fused_ordering(835)
00:11:52.249  fused_ordering(836)
00:11:52.249  fused_ordering(837)
00:11:52.249  fused_ordering(838)
00:11:52.249  fused_ordering(839)
00:11:52.249  fused_ordering(840)
00:11:52.249  fused_ordering(841)
00:11:52.249  fused_ordering(842)
00:11:52.249  fused_ordering(843)
00:11:52.249  fused_ordering(844)
00:11:52.249  fused_ordering(845)
00:11:52.249  fused_ordering(846)
00:11:52.249  fused_ordering(847)
00:11:52.249  fused_ordering(848)
00:11:52.249  fused_ordering(849)
00:11:52.249  fused_ordering(850)
00:11:52.249  fused_ordering(851)
00:11:52.249  fused_ordering(852)
00:11:52.249  fused_ordering(853)
00:11:52.249  fused_ordering(854)
00:11:52.249  fused_ordering(855)
00:11:52.249  fused_ordering(856)
00:11:52.249  fused_ordering(857)
00:11:52.249  fused_ordering(858)
00:11:52.249  fused_ordering(859)
00:11:52.249  fused_ordering(860)
00:11:52.249  fused_ordering(861)
00:11:52.249  fused_ordering(862)
00:11:52.249  fused_ordering(863)
00:11:52.249  fused_ordering(864)
00:11:52.249  fused_ordering(865)
00:11:52.249  fused_ordering(866)
00:11:52.249  fused_ordering(867)
00:11:52.249  fused_ordering(868)
00:11:52.249  fused_ordering(869)
00:11:52.249  fused_ordering(870)
00:11:52.249  fused_ordering(871)
00:11:52.249  fused_ordering(872)
00:11:52.249  fused_ordering(873)
00:11:52.249  fused_ordering(874)
00:11:52.249  fused_ordering(875)
00:11:52.249  fused_ordering(876)
00:11:52.249  fused_ordering(877)
00:11:52.249  fused_ordering(878)
00:11:52.249  fused_ordering(879)
00:11:52.249  fused_ordering(880)
00:11:52.249  fused_ordering(881)
00:11:52.249  fused_ordering(882)
00:11:52.249  fused_ordering(883)
00:11:52.249  fused_ordering(884)
00:11:52.249  fused_ordering(885)
00:11:52.249  fused_ordering(886)
00:11:52.249  fused_ordering(887)
00:11:52.249  fused_ordering(888)
00:11:52.249  fused_ordering(889)
00:11:52.249  fused_ordering(890)
00:11:52.249  fused_ordering(891)
00:11:52.249  fused_ordering(892)
00:11:52.249  fused_ordering(893)
00:11:52.249  fused_ordering(894)
00:11:52.249  fused_ordering(895)
00:11:52.249  fused_ordering(896)
00:11:52.249  fused_ordering(897)
00:11:52.249  fused_ordering(898)
00:11:52.249  fused_ordering(899)
00:11:52.249  fused_ordering(900)
00:11:52.249  fused_ordering(901)
00:11:52.249  fused_ordering(902)
00:11:52.249  fused_ordering(903)
00:11:52.249  fused_ordering(904)
00:11:52.249  fused_ordering(905)
00:11:52.249  fused_ordering(906)
00:11:52.249  fused_ordering(907)
00:11:52.249  fused_ordering(908)
00:11:52.249  fused_ordering(909)
00:11:52.249  fused_ordering(910)
00:11:52.249  fused_ordering(911)
00:11:52.249  fused_ordering(912)
00:11:52.249  fused_ordering(913)
00:11:52.249  fused_ordering(914)
00:11:52.249  fused_ordering(915)
00:11:52.249  fused_ordering(916)
00:11:52.249  fused_ordering(917)
00:11:52.249  fused_ordering(918)
00:11:52.249  fused_ordering(919)
00:11:52.249  fused_ordering(920)
00:11:52.249  fused_ordering(921)
00:11:52.249  fused_ordering(922)
00:11:52.249  fused_ordering(923)
00:11:52.249  fused_ordering(924)
00:11:52.250  fused_ordering(925)
00:11:52.250  fused_ordering(926)
00:11:52.250  fused_ordering(927)
00:11:52.250  fused_ordering(928)
00:11:52.250  fused_ordering(929)
00:11:52.250  fused_ordering(930)
00:11:52.250  fused_ordering(931)
00:11:52.250  fused_ordering(932)
00:11:52.250  fused_ordering(933)
00:11:52.250  fused_ordering(934)
00:11:52.250  fused_ordering(935)
00:11:52.250  fused_ordering(936)
00:11:52.250  fused_ordering(937)
00:11:52.250  fused_ordering(938)
00:11:52.250  fused_ordering(939)
00:11:52.250  fused_ordering(940)
00:11:52.250  fused_ordering(941)
00:11:52.250  fused_ordering(942)
00:11:52.250  fused_ordering(943)
00:11:52.250  fused_ordering(944)
00:11:52.250  fused_ordering(945)
00:11:52.250  fused_ordering(946)
00:11:52.250  fused_ordering(947)
00:11:52.250  fused_ordering(948)
00:11:52.250  fused_ordering(949)
00:11:52.250  fused_ordering(950)
00:11:52.250  fused_ordering(951)
00:11:52.250  fused_ordering(952)
00:11:52.250  fused_ordering(953)
00:11:52.250  fused_ordering(954)
00:11:52.250  fused_ordering(955)
00:11:52.250  fused_ordering(956)
00:11:52.250  fused_ordering(957)
00:11:52.250  fused_ordering(958)
00:11:52.250  fused_ordering(959)
00:11:52.250  fused_ordering(960)
00:11:52.250  fused_ordering(961)
00:11:52.250  fused_ordering(962)
00:11:52.250  fused_ordering(963)
00:11:52.250  fused_ordering(964)
00:11:52.250  fused_ordering(965)
00:11:52.250  fused_ordering(966)
00:11:52.250  fused_ordering(967)
00:11:52.250  fused_ordering(968)
00:11:52.250  fused_ordering(969)
00:11:52.250  fused_ordering(970)
00:11:52.250  fused_ordering(971)
00:11:52.250  fused_ordering(972)
00:11:52.250  fused_ordering(973)
00:11:52.250  fused_ordering(974)
00:11:52.250  fused_ordering(975)
00:11:52.250  fused_ordering(976)
00:11:52.250  fused_ordering(977)
00:11:52.250  fused_ordering(978)
00:11:52.250  fused_ordering(979)
00:11:52.250  fused_ordering(980)
00:11:52.250  fused_ordering(981)
00:11:52.250  fused_ordering(982)
00:11:52.250  fused_ordering(983)
00:11:52.250  fused_ordering(984)
00:11:52.250  fused_ordering(985)
00:11:52.250  fused_ordering(986)
00:11:52.250  fused_ordering(987)
00:11:52.250  fused_ordering(988)
00:11:52.250  fused_ordering(989)
00:11:52.250  fused_ordering(990)
00:11:52.250  fused_ordering(991)
00:11:52.250  fused_ordering(992)
00:11:52.250  fused_ordering(993)
00:11:52.250  fused_ordering(994)
00:11:52.250  fused_ordering(995)
00:11:52.250  fused_ordering(996)
00:11:52.250  fused_ordering(997)
00:11:52.250  fused_ordering(998)
00:11:52.250  fused_ordering(999)
00:11:52.250  fused_ordering(1000)
00:11:52.250  fused_ordering(1001)
00:11:52.250  fused_ordering(1002)
00:11:52.250  fused_ordering(1003)
00:11:52.250  fused_ordering(1004)
00:11:52.250  fused_ordering(1005)
00:11:52.250  fused_ordering(1006)
00:11:52.250  fused_ordering(1007)
00:11:52.250  fused_ordering(1008)
00:11:52.250  fused_ordering(1009)
00:11:52.250  fused_ordering(1010)
00:11:52.250  fused_ordering(1011)
00:11:52.250  fused_ordering(1012)
00:11:52.250  fused_ordering(1013)
00:11:52.250  fused_ordering(1014)
00:11:52.250  fused_ordering(1015)
00:11:52.250  fused_ordering(1016)
00:11:52.250  fused_ordering(1017)
00:11:52.250  fused_ordering(1018)
00:11:52.250  fused_ordering(1019)
00:11:52.250  fused_ordering(1020)
00:11:52.250  fused_ordering(1021)
00:11:52.250  fused_ordering(1022)
00:11:52.250  fused_ordering(1023)
00:11:52.250   04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:11:52.250   04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:11:52.250   04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:52.250   04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:11:52.250   04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:52.250   04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:11:52.250   04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:52.250   04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:52.250  rmmod nvme_tcp
00:11:52.250  rmmod nvme_fabrics
00:11:52.250  rmmod nvme_keyring
00:11:52.250   04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:52.250   04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
00:11:52.250   04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0
00:11:52.250   04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 196235 ']'
00:11:52.250   04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 196235
00:11:52.250   04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 196235 ']'
00:11:52.250   04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 196235
00:11:52.250    04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname
00:11:52.250   04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:52.250    04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 196235
00:11:52.250   04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:11:52.250   04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:11:52.250   04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 196235'
00:11:52.250  killing process with pid 196235
00:11:52.250   04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 196235
00:11:52.250   04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 196235
00:11:52.510   04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:11:52.510   04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:11:52.510   04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:11:52.510   04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr
00:11:52.510   04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save
00:11:52.510   04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:11:52.510   04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore
00:11:52.510   04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:11:52.510   04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns
00:11:52.511   04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:52.511   04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:52.511    04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:54.418   04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:11:54.418  
00:11:54.418  real	0m7.362s
00:11:54.418  user	0m4.854s
00:11:54.418  sys	0m2.789s
00:11:54.418   04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:54.418   04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:11:54.418  ************************************
00:11:54.418  END TEST nvmf_fused_ordering
00:11:54.418  ************************************
00:11:54.418   04:02:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:11:54.418   04:02:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:54.418   04:02:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:54.418   04:02:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:11:54.418  ************************************
00:11:54.418  START TEST nvmf_ns_masking
00:11:54.418  ************************************
00:11:54.418   04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp
00:11:54.678  * Looking for test storage...
00:11:54.678  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:11:54.678    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:54.678     04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version
00:11:54.678     04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:54.678    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:54.678    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:54.678    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:54.678    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:54.678    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-:
00:11:54.678    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1
00:11:54.678    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-:
00:11:54.678    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2
00:11:54.678    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<'
00:11:54.678    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2
00:11:54.678    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1
00:11:54.678    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:54.678    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in
00:11:54.678    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1
00:11:54.678    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:54.678    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:54.678     04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1
00:11:54.678     04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1
00:11:54.678     04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:54.678     04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1
00:11:54.678    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1
00:11:54.678     04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2
00:11:54.678     04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2
00:11:54.678     04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:54.678     04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2
00:11:54.678    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2
00:11:54.678    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:54.678    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:54.678    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0
00:11:54.678    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:54.678    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:54.678  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:54.678  		--rc genhtml_branch_coverage=1
00:11:54.678  		--rc genhtml_function_coverage=1
00:11:54.678  		--rc genhtml_legend=1
00:11:54.678  		--rc geninfo_all_blocks=1
00:11:54.678  		--rc geninfo_unexecuted_blocks=1
00:11:54.678  		
00:11:54.678  		'
00:11:54.679    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:54.679  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:54.679  		--rc genhtml_branch_coverage=1
00:11:54.679  		--rc genhtml_function_coverage=1
00:11:54.679  		--rc genhtml_legend=1
00:11:54.679  		--rc geninfo_all_blocks=1
00:11:54.679  		--rc geninfo_unexecuted_blocks=1
00:11:54.679  		
00:11:54.679  		'
00:11:54.679    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:11:54.679  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:54.679  		--rc genhtml_branch_coverage=1
00:11:54.679  		--rc genhtml_function_coverage=1
00:11:54.679  		--rc genhtml_legend=1
00:11:54.679  		--rc geninfo_all_blocks=1
00:11:54.679  		--rc geninfo_unexecuted_blocks=1
00:11:54.679  		
00:11:54.679  		'
00:11:54.679    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:11:54.679  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:54.679  		--rc genhtml_branch_coverage=1
00:11:54.679  		--rc genhtml_function_coverage=1
00:11:54.679  		--rc genhtml_legend=1
00:11:54.679  		--rc geninfo_all_blocks=1
00:11:54.679  		--rc geninfo_unexecuted_blocks=1
00:11:54.679  		
00:11:54.679  		'
00:11:54.679   04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:11:54.679     04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s
00:11:54.679    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:54.679    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:54.679    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:54.679    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:54.679    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:54.679    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:54.679    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:54.679    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:54.679    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:54.679     04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:54.679    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:11:54.679    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:11:54.679    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:54.679    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:54.679    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:11:54.679    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:54.679    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:11:54.679     04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob
00:11:54.679     04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:54.679     04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:54.679     04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:54.679      04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:54.679      04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:54.679      04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:54.679      04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH
00:11:54.679      04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:54.679    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0
00:11:54.679    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:11:54.679    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:11:54.679    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:54.679    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:54.679    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:54.679    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:11:54.679  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:11:54.679    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:11:54.679    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:11:54.679    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0
00:11:54.679   04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:11:54.679   04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock
00:11:54.679   04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5
00:11:54.679    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen
00:11:54.679   04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=97367abf-d3ae-4d4b-a0dc-ed6f6b57c97c
00:11:54.679    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen
00:11:54.679   04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=39eb07b0-72b5-4607-bbc7-0ec9610a4911
00:11:54.679   04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
00:11:54.679   04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1
00:11:54.679   04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2
00:11:54.679    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen
00:11:54.679   04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=9d1a632a-deb6-4410-b8f1-1f27a6f4c3d9
00:11:54.679   04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit
00:11:54.679   04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:11:54.679   04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:54.679   04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs
00:11:54.679   04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no
00:11:54.679   04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns
00:11:54.679   04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:54.679   04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:54.679    04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:54.679   04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:11:54.679   04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:11:54.679   04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable
00:11:54.679   04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=()
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=()
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=()
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=()
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=()
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=()
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=()
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:11:57.218  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:11:57.218  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]]
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:11:57.218  Found net devices under 0000:0a:00.0: cvl_0_0
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]]
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:11:57.218  Found net devices under 0000:0a:00.1: cvl_0_1
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:11:57.218   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:11:57.219   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:11:57.219   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:11:57.219   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:57.219   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:57.219   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:11:57.219   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:11:57.219   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:11:57.219   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:57.219   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:57.219   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:57.219   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:11:57.219   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:57.219   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:57.219   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:57.219   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:11:57.219   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:11:57.219  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:57.219  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms
00:11:57.219  
00:11:57.219  --- 10.0.0.2 ping statistics ---
00:11:57.219  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:57.219  rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms
00:11:57.219   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:57.219  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:57.219  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms
00:11:57.219  
00:11:57.219  --- 10.0.0.1 ping statistics ---
00:11:57.219  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:57.219  rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms
00:11:57.219   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:57.219   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0
00:11:57.219   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:57.219   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:57.219   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:57.219   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:57.219   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:57.219   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:57.219   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:57.219   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart
00:11:57.219   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:57.219   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:57.219   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:11:57.219   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=198577
00:11:57.219   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 198577
00:11:57.219   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 198577 ']'
00:11:57.219   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:57.219   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:11:57.219   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:57.219   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:57.219  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:57.219   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:57.219   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:11:57.219  [2024-12-09 04:02:25.597903] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:11:57.219  [2024-12-09 04:02:25.597993] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:57.219  [2024-12-09 04:02:25.671489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:57.219  [2024-12-09 04:02:25.729424] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:57.219  [2024-12-09 04:02:25.729496] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:57.219  [2024-12-09 04:02:25.729526] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:57.219  [2024-12-09 04:02:25.729537] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:57.219  [2024-12-09 04:02:25.729547] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:57.219  [2024-12-09 04:02:25.730216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:57.477   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:57.477   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0
00:11:57.477   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:11:57.477   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:57.477   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:11:57.477   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:57.477   04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:11:57.735  [2024-12-09 04:02:26.128138] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:57.735   04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64
00:11:57.735   04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512
00:11:57.735   04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:11:57.993  Malloc1
00:11:57.993   04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:11:58.251  Malloc2
00:11:58.251   04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:11:58.508   04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
00:11:58.767   04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:59.024  [2024-12-09 04:02:27.515788] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:59.024   04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect
00:11:59.024   04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9d1a632a-deb6-4410-b8f1-1f27a6f4c3d9 -a 10.0.0.2 -s 4420 -i 4
00:11:59.280   04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME
00:11:59.280   04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0
00:11:59.280   04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:11:59.280   04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:11:59.280   04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:12:01.178   04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:12:01.178    04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:12:01.178    04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:12:01.178   04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:12:01.178   04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:12:01.178   04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0
00:12:01.178    04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:12:01.178    04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:12:01.435   04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:12:01.435   04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:12:01.435   04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1
00:12:01.435   04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:12:01.435   04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:12:01.435  [   0]:0x1
00:12:01.435    04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:12:01.435    04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:12:01.435   04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=12a5eea3d770425b91715f1a6e30d712
00:12:01.435   04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 12a5eea3d770425b91715f1a6e30d712 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:12:01.435   04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2
00:12:01.693   04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1
00:12:01.693   04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:12:01.693   04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:12:01.693  [   0]:0x1
00:12:01.693    04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:12:01.693    04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:12:01.693   04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=12a5eea3d770425b91715f1a6e30d712
00:12:01.693   04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 12a5eea3d770425b91715f1a6e30d712 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:12:01.693   04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2
00:12:01.693   04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:12:01.693   04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:12:01.693  [   1]:0x2
00:12:01.693    04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:12:01.693    04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:12:01.952   04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=89090dffd8074f1cb34471b04ecc30a7
00:12:01.952   04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 89090dffd8074f1cb34471b04ecc30a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:12:01.952   04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect
00:12:01.952   04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:01.952  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:01.952   04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:02.210   04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
00:12:02.468   04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1
00:12:02.468   04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9d1a632a-deb6-4410-b8f1-1f27a6f4c3d9 -a 10.0.0.2 -s 4420 -i 4
00:12:02.732   04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1
00:12:02.732   04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0
00:12:02.732   04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:12:02.732   04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]]
00:12:02.732   04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1
00:12:02.732   04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:12:04.633   04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:12:04.633    04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:12:04.633    04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:12:04.633   04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:12:04.633   04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:12:04.633   04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0
00:12:04.633    04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:12:04.633    04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:12:04.633   04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:12:04.633   04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:12:04.633   04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1
00:12:04.633   04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:12:04.633   04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:12:04.633   04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:12:04.633   04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:04.633    04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:12:04.633   04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:04.633   04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:12:04.633   04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:12:04.633   04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:12:04.892    04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:12:04.892    04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:12:04.892   04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:12:04.892   04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:12:04.892   04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:12:04.892   04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:04.892   04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:12:04.892   04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:12:04.892   04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2
00:12:04.892   04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:12:04.892   04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:12:04.892  [   0]:0x2
00:12:04.892    04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:12:04.892    04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:12:04.892   04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=89090dffd8074f1cb34471b04ecc30a7
00:12:04.892   04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 89090dffd8074f1cb34471b04ecc30a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:12:04.892   04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:12:05.151   04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1
00:12:05.151   04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:12:05.151   04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:12:05.151  [   0]:0x1
00:12:05.151    04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:12:05.151    04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:12:05.409   04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=12a5eea3d770425b91715f1a6e30d712
00:12:05.410   04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 12a5eea3d770425b91715f1a6e30d712 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:12:05.410   04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2
00:12:05.410   04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:12:05.410   04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:12:05.410  [   1]:0x2
00:12:05.410    04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:12:05.410    04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:12:05.410   04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=89090dffd8074f1cb34471b04ecc30a7
00:12:05.410   04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 89090dffd8074f1cb34471b04ecc30a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:12:05.410   04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:12:05.668   04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1
00:12:05.668   04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:12:05.668   04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:12:05.668   04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:12:05.668   04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:05.668    04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:12:05.668   04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:05.668   04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:12:05.668   04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:12:05.668   04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:12:05.668    04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:12:05.668    04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:12:05.668   04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:12:05.668   04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:12:05.668   04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:12:05.668   04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:05.668   04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:12:05.668   04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:12:05.668   04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2
00:12:05.668   04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:12:05.668   04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:12:05.668  [   0]:0x2
00:12:05.668    04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:12:05.668    04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:12:05.668   04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=89090dffd8074f1cb34471b04ecc30a7
00:12:05.668   04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 89090dffd8074f1cb34471b04ecc30a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:12:05.668   04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect
00:12:05.668   04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:05.668  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:05.668   04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:12:05.926   04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2
00:12:05.926   04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9d1a632a-deb6-4410-b8f1-1f27a6f4c3d9 -a 10.0.0.2 -s 4420 -i 4
00:12:06.185   04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2
00:12:06.185   04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0
00:12:06.185   04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:12:06.185   04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]]
00:12:06.185   04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2
00:12:06.185   04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:12:08.090   04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:12:08.090    04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:12:08.090    04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:12:08.090   04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2
00:12:08.090   04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:12:08.090   04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0
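The `waitforserial` helper traced above polls `lsblk` until the expected number of devices with the given serial shows up, giving up after 16 attempts. A minimal stand-alone sketch of that polling shape (the `demo_probe` function here is a stand-in for the real `lsblk ... | grep -c` pipeline, not part of autotest_common.sh):

```shell
# Generic shape of the waitforserial loop: retry a probe up to 16 times
# until it reports the expected device count, then succeed.
wait_for_count() {
  local expected=$1 probe=$2 i=0 n=0
  while (( i++ <= 15 )); do
    n=$($probe)
    (( n == expected )) && return 0
    sleep 0.1
  done
  return 1
}

# Stand-in probe: the real helper counts lsblk rows matching the serial.
demo_probe() { echo 2; }
```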
00:12:08.090    04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:12:08.090    04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:12:08.347   04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:12:08.347   04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:12:08.347   04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1
00:12:08.347   04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:12:08.347   04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:12:08.347  [   0]:0x1
00:12:08.347    04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:12:08.347    04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:12:08.347   04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=12a5eea3d770425b91715f1a6e30d712
00:12:08.347   04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 12a5eea3d770425b91715f1a6e30d712 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:12:08.347   04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2
00:12:08.347   04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:12:08.347   04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:12:08.347  [   1]:0x2
00:12:08.347    04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:12:08.347    04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:12:08.604   04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=89090dffd8074f1cb34471b04ecc30a7
00:12:08.604   04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 89090dffd8074f1cb34471b04ecc30a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:12:08.604   04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:12:08.862   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1
00:12:08.862   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:12:08.862   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:12:08.862   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:12:08.863   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:08.863    04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:12:08.863   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:08.863   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:12:08.863   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:12:08.863   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:12:08.863    04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:12:08.863    04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:12:08.863   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:12:08.863   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:12:08.863   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:12:08.863   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:08.863   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:12:08.863   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:12:08.863   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2
00:12:08.863   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:12:08.863   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:12:08.863  [   0]:0x2
00:12:08.863    04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:12:08.863    04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:12:08.863   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=89090dffd8074f1cb34471b04ecc30a7
00:12:08.863   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 89090dffd8074f1cb34471b04ecc30a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
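The `ns_is_visible` checks traced above boil down to one comparison: a masked namespace reports an all-zero NGUID from `nvme id-ns`, while a visible one reports a real value. A hypothetical condensation of just that comparison (the function name and constant are illustrative, not taken from ns_masking.sh):

```shell
# A namespace counts as visible only when the NGUID returned by
# `nvme id-ns ... | jq -r .nguid` is not the all-zero placeholder.
ALL_ZERO_NGUID=00000000000000000000000000000000

nguid_is_visible() {
  [ "$1" != "$ALL_ZERO_NGUID" ]
}
```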
00:12:08.863   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:12:08.863   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:12:08.863   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:12:08.863   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:12:08.863   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:08.863    04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:12:08.863   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:08.863    04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:12:08.863   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:08.863   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:12:08.863   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:12:08.863   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:12:09.120  [2024-12-09 04:02:37.597882] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2
00:12:09.120  request:
00:12:09.120  {
00:12:09.120    "nqn": "nqn.2016-06.io.spdk:cnode1",
00:12:09.120    "nsid": 2,
00:12:09.120    "host": "nqn.2016-06.io.spdk:host1",
00:12:09.120    "method": "nvmf_ns_remove_host",
00:12:09.120    "req_id": 1
00:12:09.120  }
00:12:09.120  Got JSON-RPC error response
00:12:09.120  response:
00:12:09.120  {
00:12:09.120    "code": -32602,
00:12:09.120    "message": "Invalid parameters"
00:12:09.120  }
00:12:09.120   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:12:09.120   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:09.120   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:12:09.120   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
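The `NOT` wrapper exercised above runs a command and inverts its result, so the test can assert that an operation (here, removing a host from a namespace it was never added to) is expected to fail. A simplified sketch of that pattern; the real helper in common/autotest_common.sh additionally tracks the exit status in `es` and handles signals, which is elided here:

```shell
# Simplified inversion wrapper: succeed only when the wrapped command fails.
NOT() {
  if "$@"; then
    return 1
  fi
  return 0
}
```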
00:12:09.120   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1
00:12:09.120   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:12:09.120   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:12:09.120   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:12:09.120   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:09.120    04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:12:09.120   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:09.120   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:12:09.120   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:12:09.120   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:12:09.120    04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:12:09.120    04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:12:09.121   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:12:09.121   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:12:09.121   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:12:09.121   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:09.121   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:12:09.121   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:12:09.121   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2
00:12:09.121   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:12:09.121   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:12:09.121  [   0]:0x2
00:12:09.121    04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:12:09.121    04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:12:09.378   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=89090dffd8074f1cb34471b04ecc30a7
00:12:09.378   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 89090dffd8074f1cb34471b04ecc30a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:12:09.378   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect
00:12:09.378   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:09.378  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:09.378   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=200183
00:12:09.378   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2
00:12:09.378   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT
00:12:09.378   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 200183 /var/tmp/host.sock
00:12:09.378   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 200183 ']'
00:12:09.378   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock
00:12:09.378   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:09.378   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...'
00:12:09.378  Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...
00:12:09.378   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:09.378   04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:12:09.378  [2024-12-09 04:02:37.812829] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:12:09.378  [2024-12-09 04:02:37.812908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid200183 ]
00:12:09.378  [2024-12-09 04:02:37.878354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:09.378  [2024-12-09 04:02:37.935405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:09.637   04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:09.637   04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0
00:12:09.637   04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:10.203   04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:10.203    04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 97367abf-d3ae-4d4b-a0dc-ed6f6b57c97c
00:12:10.203    04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:12:10.203   04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 97367ABFD3AE4D4BA0DCED6F6B57C97C -i
00:12:10.768    04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 39eb07b0-72b5-4607-bbc7-0ec9610a4911
00:12:10.768    04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:12:10.768   04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 39EB07B072B54607BBC70EC9610A4911 -i
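The `uuid2nguid` calls traced above turn a UUID like `97367abf-d3ae-4d4b-a0dc-ed6f6b57c97c` into the 32-character NGUID `97367ABFD3AE4D4BA0DCED6F6B57C97C` passed to `nvmf_subsystem_add_ns -g`. Judging from the trace (`tr -d -` plus the uppercased output), the helper strips the dashes and uppercases the hex digits; a hypothetical re-implementation under that assumption:

```shell
# Sketch of the uuid2nguid conversion seen in nvmf/common.sh:
# drop the dashes, uppercase the hex, yielding a 32-char NGUID.
uuid2nguid() {
  echo "$1" | tr -d '-' | tr '[:lower:]' '[:upper:]'
}
```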
00:12:10.768   04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:12:11.334   04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2
00:12:11.334   04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
00:12:11.334   04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
00:12:11.900  nvme0n1
00:12:11.900   04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
00:12:11.901   04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
00:12:12.159  nvme1n2
00:12:12.159    04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs
00:12:12.159    04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs
00:12:12.159    04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name'
00:12:12.159    04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort
00:12:12.159    04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs
00:12:12.417   04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]]
00:12:12.675    04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1
00:12:12.675    04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid'
00:12:12.675    04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1
00:12:12.933   04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 97367abf-d3ae-4d4b-a0dc-ed6f6b57c97c == \9\7\3\6\7\a\b\f\-\d\3\a\e\-\4\d\4\b\-\a\0\d\c\-\e\d\6\f\6\b\5\7\c\9\7\c ]]
00:12:12.933    04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2
00:12:12.933    04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid'
00:12:12.933    04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2
00:12:13.192   04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 39eb07b0-72b5-4607-bbc7-0ec9610a4911 == \3\9\e\b\0\7\b\0\-\7\2\b\5\-\4\6\0\7\-\b\b\c\7\-\0\e\c\9\6\1\0\a\4\9\1\1 ]]
00:12:13.192   04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:13.450   04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:13.708    04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 97367abf-d3ae-4d4b-a0dc-ed6f6b57c97c
00:12:13.708    04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:12:13.708   04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 97367ABFD3AE4D4BA0DCED6F6B57C97C
00:12:13.708   04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:12:13.708   04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 97367ABFD3AE4D4BA0DCED6F6B57C97C
00:12:13.708   04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:12:13.708   04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:13.708    04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:12:13.708   04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:13.708    04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:12:13.708   04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:13.708   04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:12:13.708   04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:12:13.708   04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 97367ABFD3AE4D4BA0DCED6F6B57C97C
00:12:13.966  [2024-12-09 04:02:42.355791] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid
00:12:13.966  [2024-12-09 04:02:42.355833] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19
00:12:13.966  [2024-12-09 04:02:42.355863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:13.966  request:
00:12:13.966  {
00:12:13.966    "nqn": "nqn.2016-06.io.spdk:cnode1",
00:12:13.966    "namespace": {
00:12:13.966      "bdev_name": "invalid",
00:12:13.966      "nsid": 1,
00:12:13.966      "nguid": "97367ABFD3AE4D4BA0DCED6F6B57C97C",
00:12:13.966      "no_auto_visible": false,
00:12:13.966      "hide_metadata": false
00:12:13.966    },
00:12:13.966    "method": "nvmf_subsystem_add_ns",
00:12:13.966    "req_id": 1
00:12:13.966  }
00:12:13.966  Got JSON-RPC error response
00:12:13.966  response:
00:12:13.966  {
00:12:13.966    "code": -32602,
00:12:13.966    "message": "Invalid parameters"
00:12:13.966  }
00:12:13.966   04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:12:13.966   04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:13.966   04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:12:13.966   04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:12:13.966    04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 97367abf-d3ae-4d4b-a0dc-ed6f6b57c97c
00:12:13.966    04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:12:13.966   04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 97367ABFD3AE4D4BA0DCED6F6B57C97C -i
00:12:14.224   04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s
00:12:16.127    04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs
00:12:16.127    04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length
00:12:16.127    04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs
00:12:16.385   04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 ))
00:12:16.385   04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 200183
00:12:16.385   04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 200183 ']'
00:12:16.385   04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 200183
00:12:16.385    04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname
00:12:16.385   04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:16.385    04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 200183
00:12:16.646   04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:12:16.646   04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:12:16.646   04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 200183'
00:12:16.646  killing process with pid 200183
00:12:16.646   04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 200183
00:12:16.646   04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 200183
00:12:16.905   04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:17.162   04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:12:17.162   04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini
00:12:17.162   04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup
00:12:17.162   04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync
00:12:17.162   04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:12:17.162   04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e
00:12:17.162   04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20}
00:12:17.162   04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:12:17.162  rmmod nvme_tcp
00:12:17.162  rmmod nvme_fabrics
00:12:17.162  rmmod nvme_keyring
00:12:17.420   04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:12:17.420   04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e
00:12:17.420   04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0
00:12:17.420   04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 198577 ']'
00:12:17.420   04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 198577
00:12:17.420   04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 198577 ']'
00:12:17.420   04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 198577
00:12:17.420    04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname
00:12:17.420   04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:17.420    04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 198577
00:12:17.420   04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:17.420   04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:17.420   04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 198577'
00:12:17.420  killing process with pid 198577
00:12:17.420   04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 198577
00:12:17.420   04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 198577
00:12:17.680   04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:12:17.680   04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:12:17.680   04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:12:17.680   04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr
00:12:17.680   04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save
00:12:17.680   04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:12:17.680   04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore
00:12:17.680   04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:12:17.680   04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns
00:12:17.680   04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:17.680   04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:17.680    04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:19.586   04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:12:19.586  
00:12:19.586  real	0m25.120s
00:12:19.586  user	0m36.274s
00:12:19.586  sys	0m4.824s
00:12:19.586   04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:19.586   04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:12:19.586  ************************************
00:12:19.586  END TEST nvmf_ns_masking
00:12:19.586  ************************************
00:12:19.586   04:02:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]]
00:12:19.586   04:02:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp
00:12:19.586   04:02:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:12:19.586   04:02:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:19.586   04:02:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:12:19.845  ************************************
00:12:19.845  START TEST nvmf_nvme_cli
00:12:19.845  ************************************
00:12:19.845   04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp
00:12:19.845  * Looking for test storage...
00:12:19.845  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:12:19.845    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:12:19.845     04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version
00:12:19.845     04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:12:19.845    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:12:19.845    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:19.845    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:19.845    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:19.845    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-:
00:12:19.845    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1
00:12:19.845    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-:
00:12:19.845    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2
00:12:19.845    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<'
00:12:19.845    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2
00:12:19.845    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1
00:12:19.845    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:19.845    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in
00:12:19.845    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1
00:12:19.845    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:19.845    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:19.845     04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1
00:12:19.845     04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1
00:12:19.845     04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:19.845     04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1
00:12:19.845    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1
00:12:19.845     04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2
00:12:19.845     04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2
00:12:19.845     04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:19.845     04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2
00:12:19.845    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2
00:12:19.845    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:19.845    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:19.845    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0
00:12:19.845    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:19.845    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:12:19.845  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:19.845  		--rc genhtml_branch_coverage=1
00:12:19.845  		--rc genhtml_function_coverage=1
00:12:19.845  		--rc genhtml_legend=1
00:12:19.845  		--rc geninfo_all_blocks=1
00:12:19.845  		--rc geninfo_unexecuted_blocks=1
00:12:19.845  		
00:12:19.845  		'
00:12:19.845    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:12:19.845  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:19.845  		--rc genhtml_branch_coverage=1
00:12:19.845  		--rc genhtml_function_coverage=1
00:12:19.845  		--rc genhtml_legend=1
00:12:19.845  		--rc geninfo_all_blocks=1
00:12:19.846  		--rc geninfo_unexecuted_blocks=1
00:12:19.846  		
00:12:19.846  		'
00:12:19.846    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:12:19.846  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:19.846  		--rc genhtml_branch_coverage=1
00:12:19.846  		--rc genhtml_function_coverage=1
00:12:19.846  		--rc genhtml_legend=1
00:12:19.846  		--rc geninfo_all_blocks=1
00:12:19.846  		--rc geninfo_unexecuted_blocks=1
00:12:19.846  		
00:12:19.846  		'
00:12:19.846    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:12:19.846  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:19.846  		--rc genhtml_branch_coverage=1
00:12:19.846  		--rc genhtml_function_coverage=1
00:12:19.846  		--rc genhtml_legend=1
00:12:19.846  		--rc geninfo_all_blocks=1
00:12:19.846  		--rc geninfo_unexecuted_blocks=1
00:12:19.846  		
00:12:19.846  		'
00:12:19.846   04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:12:19.846     04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s
00:12:19.846    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:12:19.846    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:12:19.846    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:12:19.846    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:12:19.846    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:12:19.846    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:12:19.846    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:12:19.846    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:12:19.846    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:12:19.846     04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:12:19.846    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:12:19.846    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:12:19.846    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:12:19.846    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:12:19.846    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:12:19.846    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:12:19.846    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:12:19.846     04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob
00:12:19.846     04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:19.846     04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:19.846     04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:19.846      04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:19.846      04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:19.846      04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:19.846      04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH
00:12:19.846      04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:19.846    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0
00:12:19.846    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:12:19.846    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:12:19.846    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:12:19.846    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:19.846    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:19.846    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:12:19.846  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:12:19.846    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:12:19.846    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:12:19.846    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0
00:12:19.846   04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64
00:12:19.846   04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:12:19.846   04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=()
00:12:19.846   04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit
00:12:19.846   04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:12:19.846   04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:12:19.846   04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs
00:12:19.846   04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no
00:12:19.846   04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns
00:12:19.846   04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:19.846   04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:19.846    04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:19.846   04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:12:19.846   04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:12:19.846   04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable
00:12:19.846   04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:12:22.382   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:12:22.382   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=()
00:12:22.382   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs
00:12:22.382   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=()
00:12:22.382   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:12:22.382   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=()
00:12:22.382   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers
00:12:22.382   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=()
00:12:22.382   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs
00:12:22.382   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=()
00:12:22.382   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810
00:12:22.382   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=()
00:12:22.382   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722
00:12:22.382   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=()
00:12:22.382   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx
00:12:22.382   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:12:22.382   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:12:22.382   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:12:22.382   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:12:22.382   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:12:22.382   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:12:22.382   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:12:22.382   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:12:22.382   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:12:22.382   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:12:22.382   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:12:22.383  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:12:22.383  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]]
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:12:22.383  Found net devices under 0000:0a:00.0: cvl_0_0
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]]
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:12:22.383  Found net devices under 0000:0a:00.1: cvl_0_1
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:12:22.383  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:22.383  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms
00:12:22.383  
00:12:22.383  --- 10.0.0.2 ping statistics ---
00:12:22.383  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:22.383  rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:22.383  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:22.383  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms
00:12:22.383  
00:12:22.383  --- 10.0.0.1 ping statistics ---
00:12:22.383  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:22.383  rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=203110
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 203110
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 203110 ']'
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:22.383  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:22.383   04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:12:22.383  [2024-12-09 04:02:50.764901] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:12:22.384  [2024-12-09 04:02:50.764977] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:22.384  [2024-12-09 04:02:50.835050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:12:22.384  [2024-12-09 04:02:50.890089] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:22.384  [2024-12-09 04:02:50.890148] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:22.384  [2024-12-09 04:02:50.890176] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:22.384  [2024-12-09 04:02:50.890186] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:22.384  [2024-12-09 04:02:50.890195] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:22.384  [2024-12-09 04:02:50.891863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:22.384  [2024-12-09 04:02:50.891945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:12:22.384  [2024-12-09 04:02:50.892053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:12:22.384  [2024-12-09 04:02:50.892056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:22.642   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:22.642   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0
00:12:22.642   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:12:22.642   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable
00:12:22.642   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:12:22.642   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:22.642   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:12:22.642   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:22.642   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:12:22.642  [2024-12-09 04:02:51.032969] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:12:22.642   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:22.642   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:12:22.642   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:22.642   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:12:22.642  Malloc0
00:12:22.642   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:22.642   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:12:22.642   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:22.642   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:12:22.642  Malloc1
00:12:22.642   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:22.642   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
00:12:22.642   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:22.643   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:12:22.643   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:22.643   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:12:22.643   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:22.643   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:12:22.643   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:22.643   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:12:22.643   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:22.643   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:12:22.643   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:22.643   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:22.643   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:22.643   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:12:22.643  [2024-12-09 04:02:51.132716] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:22.643   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:22.643   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:12:22.643   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:22.643   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:12:22.643   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:22.643   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420
00:12:22.901  
00:12:22.901  Discovery Log Number of Records 2, Generation counter 2
00:12:22.901  =====Discovery Log Entry 0======
00:12:22.901  trtype:  tcp
00:12:22.901  adrfam:  ipv4
00:12:22.901  subtype: current discovery subsystem
00:12:22.901  treq:    not required
00:12:22.901  portid:  0
00:12:22.901  trsvcid: 4420
00:12:22.901  subnqn:  nqn.2014-08.org.nvmexpress.discovery
00:12:22.901  traddr:  10.0.0.2
00:12:22.901  eflags:  explicit discovery connections, duplicate discovery information
00:12:22.901  sectype: none
00:12:22.901  =====Discovery Log Entry 1======
00:12:22.901  trtype:  tcp
00:12:22.901  adrfam:  ipv4
00:12:22.901  subtype: nvme subsystem
00:12:22.901  treq:    not required
00:12:22.901  portid:  0
00:12:22.901  trsvcid: 4420
00:12:22.901  subnqn:  nqn.2016-06.io.spdk:cnode1
00:12:22.901  traddr:  10.0.0.2
00:12:22.901  eflags:  none
00:12:22.901  sectype: none
00:12:22.901   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs))
00:12:22.901    04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs
00:12:22.901    04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _
00:12:22.901    04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:12:22.901     04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list
00:12:22.901    04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]]
00:12:22.901    04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:12:22.901    04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]]
00:12:22.901    04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:12:22.901   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0
00:12:22.901   04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:23.469   04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2
00:12:23.469   04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0
00:12:23.469   04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:12:23.469   04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]]
00:12:23.469   04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2
00:12:23.469   04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:12:25.999    04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:12:25.999    04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0
00:12:25.999    04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs
00:12:25.999    04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _
00:12:25.999    04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:12:25.999     04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list
00:12:25.999    04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]]
00:12:25.999    04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:12:25.999    04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]]
00:12:25.999    04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:12:25.999    04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]]
00:12:25.999    04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1
00:12:25.999    04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:12:25.999    04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]]
00:12:25.999    04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2
00:12:25.999    04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 /dev/nvme0n2 ]]
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs))
00:12:25.999    04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs
00:12:25.999    04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _
00:12:25.999    04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:12:25.999     04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list
00:12:25.999    04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]]
00:12:25.999    04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:12:25.999    04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]]
00:12:25.999    04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:12:25.999    04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]]
00:12:25.999    04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1
00:12:25.999    04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:12:25.999    04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]]
00:12:25.999    04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2
00:12:25.999    04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:25.999  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection ))
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20}
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:12:25.999  rmmod nvme_tcp
00:12:25.999  rmmod nvme_fabrics
00:12:25.999  rmmod nvme_keyring
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 203110 ']'
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 203110
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 203110 ']'
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 203110
00:12:25.999    04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:25.999    04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 203110
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 203110'
00:12:25.999  killing process with pid 203110
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 203110
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 203110
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:25.999   04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:25.999    04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:28.543   04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:12:28.543  
00:12:28.543  real	0m8.447s
00:12:28.543  user	0m15.206s
00:12:28.543  sys	0m2.427s
00:12:28.543   04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:28.543   04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:12:28.543  ************************************
00:12:28.543  END TEST nvmf_nvme_cli
00:12:28.543  ************************************
00:12:28.543   04:02:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]]
00:12:28.543   04:02:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp
00:12:28.543   04:02:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:12:28.543   04:02:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:28.543   04:02:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:12:28.543  ************************************
00:12:28.543  START TEST nvmf_vfio_user
00:12:28.543  ************************************
00:12:28.543   04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp
00:12:28.543  * Looking for test storage...
00:12:28.543  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:12:28.543    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:12:28.543     04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version
00:12:28.543     04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:12:28.543    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:12:28.543    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:28.543    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:28.543    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:28.543    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-:
00:12:28.543    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1
00:12:28.543    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-:
00:12:28.543    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2
00:12:28.543    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<'
00:12:28.543    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2
00:12:28.543    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1
00:12:28.543    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:28.543    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in
00:12:28.543    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1
00:12:28.543    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:28.543    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:28.543     04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1
00:12:28.543     04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1
00:12:28.543     04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:28.543     04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1
00:12:28.543    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1
00:12:28.543     04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2
00:12:28.543     04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2
00:12:28.543     04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:28.543     04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2
00:12:28.543    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2
00:12:28.543    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:28.544    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:28.544    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0
00:12:28.544    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:28.544    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:12:28.544  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:28.544  		--rc genhtml_branch_coverage=1
00:12:28.544  		--rc genhtml_function_coverage=1
00:12:28.544  		--rc genhtml_legend=1
00:12:28.544  		--rc geninfo_all_blocks=1
00:12:28.544  		--rc geninfo_unexecuted_blocks=1
00:12:28.544  		
00:12:28.544  		'
00:12:28.544    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:12:28.544  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:28.544  		--rc genhtml_branch_coverage=1
00:12:28.544  		--rc genhtml_function_coverage=1
00:12:28.544  		--rc genhtml_legend=1
00:12:28.544  		--rc geninfo_all_blocks=1
00:12:28.544  		--rc geninfo_unexecuted_blocks=1
00:12:28.544  		
00:12:28.544  		'
00:12:28.544    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:12:28.544  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:28.544  		--rc genhtml_branch_coverage=1
00:12:28.544  		--rc genhtml_function_coverage=1
00:12:28.544  		--rc genhtml_legend=1
00:12:28.544  		--rc geninfo_all_blocks=1
00:12:28.544  		--rc geninfo_unexecuted_blocks=1
00:12:28.544  		
00:12:28.544  		'
00:12:28.544    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:12:28.544  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:28.544  		--rc genhtml_branch_coverage=1
00:12:28.544  		--rc genhtml_function_coverage=1
00:12:28.544  		--rc genhtml_legend=1
00:12:28.544  		--rc geninfo_all_blocks=1
00:12:28.544  		--rc geninfo_unexecuted_blocks=1
00:12:28.544  		
00:12:28.544  		'
00:12:28.544   04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:12:28.544     04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s
00:12:28.544    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:12:28.544    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:12:28.544    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:12:28.544    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:12:28.544    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:12:28.544    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:12:28.544    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:12:28.544    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:12:28.544    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:12:28.544     04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:12:28.544    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:12:28.544    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:12:28.544    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:12:28.544    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:12:28.544    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:12:28.544    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:12:28.544    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:12:28.544     04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob
00:12:28.544     04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:28.544     04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:28.544     04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:28.544      04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:28.544      04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:28.544      04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:28.544      04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH
00:12:28.544      04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:28.544    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0
00:12:28.544    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:12:28.544    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:12:28.544    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:12:28.544    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:28.544    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:28.544    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:12:28.544  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:12:28.544    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:12:28.544    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:12:28.544    04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0
00:12:28.544   04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64
00:12:28.544   04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:12:28.545   04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2
00:12:28.545   04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:12:28.545   04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER
00:12:28.545   04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER
00:12:28.545   04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user
00:12:28.545   04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' ''
00:12:28.545   04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=
00:12:28.545   04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args=
00:12:28.545   04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=203925
00:12:28.545   04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]'
00:12:28.545   04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 203925'
00:12:28.545  Process pid: 203925
00:12:28.545   04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:12:28.545   04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 203925
00:12:28.545   04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 203925 ']'
00:12:28.545   04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:28.545   04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:28.545   04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:28.545  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:28.545   04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:28.545   04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x
00:12:28.545  [2024-12-09 04:02:56.872529] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:12:28.545  [2024-12-09 04:02:56.872638] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:28.545  [2024-12-09 04:02:56.940581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:12:28.545  [2024-12-09 04:02:57.002640] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:28.545  [2024-12-09 04:02:57.002695] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:28.545  [2024-12-09 04:02:57.002709] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:28.545  [2024-12-09 04:02:57.002720] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:28.545  [2024-12-09 04:02:57.002730] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:28.545  [2024-12-09 04:02:57.004343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:28.545  [2024-12-09 04:02:57.004403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:12:28.545  [2024-12-09 04:02:57.004463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:12:28.545  [2024-12-09 04:02:57.004467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:28.803   04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:28.803   04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0
00:12:28.803   04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1
00:12:29.733   04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER
00:12:29.989   04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user
00:12:29.989    04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2
00:12:29.989   04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:12:29.989   04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1
00:12:29.989   04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:12:30.247  Malloc1
00:12:30.247   04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
00:12:30.504   04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
00:12:30.761   04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
00:12:31.017   04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:12:31.017   04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2
00:12:31.017   04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:12:31.275  Malloc2
00:12:31.275   04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2
00:12:31.534   04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2
00:12:32.100   04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0
00:12:32.100   04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user
00:12:32.100    04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2
00:12:32.100   04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES)
00:12:32.100   04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1
00:12:32.100   04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1
00:12:32.100   04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci
00:12:32.361  [2024-12-09 04:03:00.680428] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:12:32.361  [2024-12-09 04:03:00.680471] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid204512 ]
00:12:32.361  [2024-12-09 04:03:00.730762] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1
00:12:32.361  [2024-12-09 04:03:00.739761] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32
00:12:32.361  [2024-12-09 04:03:00.739794] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f39a97ae000
00:12:32.361  [2024-12-09 04:03:00.740751] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:12:32.361  [2024-12-09 04:03:00.741750] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:12:32.361  [2024-12-09 04:03:00.742756] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:12:32.361  [2024-12-09 04:03:00.743762] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0
00:12:32.361  [2024-12-09 04:03:00.744762] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:12:32.361  [2024-12-09 04:03:00.745765] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:12:32.361  [2024-12-09 04:03:00.746771] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:12:32.361  [2024-12-09 04:03:00.747777] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:12:32.361  [2024-12-09 04:03:00.748781] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32
00:12:32.361  [2024-12-09 04:03:00.748802] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f39a97a3000
00:12:32.361  [2024-12-09 04:03:00.749921] vfio_user_pci.c:  65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:12:32.361  [2024-12-09 04:03:00.764962] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully
00:12:32.361  [2024-12-09 04:03:00.765006] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout)
00:12:32.361  [2024-12-09 04:03:00.769911] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff
00:12:32.361  [2024-12-09 04:03:00.769965] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192
00:12:32.361  [2024-12-09 04:03:00.770055] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout)
00:12:32.361  [2024-12-09 04:03:00.770081] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout)
00:12:32.361  [2024-12-09 04:03:00.770093] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout)
00:12:32.361  [2024-12-09 04:03:00.770906] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300
00:12:32.361  [2024-12-09 04:03:00.770930] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout)
00:12:32.361  [2024-12-09 04:03:00.770945] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout)
00:12:32.361  [2024-12-09 04:03:00.771907] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff
00:12:32.361  [2024-12-09 04:03:00.771925] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout)
00:12:32.361  [2024-12-09 04:03:00.771938] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms)
00:12:32.361  [2024-12-09 04:03:00.772914] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0
00:12:32.361  [2024-12-09 04:03:00.772932] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:12:32.361  [2024-12-09 04:03:00.773918] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0
00:12:32.361  [2024-12-09 04:03:00.773937] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0
00:12:32.361  [2024-12-09 04:03:00.773946] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms)
00:12:32.361  [2024-12-09 04:03:00.773957] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:12:32.361  [2024-12-09 04:03:00.774067] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1
00:12:32.361  [2024-12-09 04:03:00.774075] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:12:32.361  [2024-12-09 04:03:00.774083] nvme_vfio_user.c:  61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000
00:12:32.361  [2024-12-09 04:03:00.775297] nvme_vfio_user.c:  61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000
00:12:32.361  [2024-12-09 04:03:00.775926] nvme_vfio_user.c:  49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff
00:12:32.361  [2024-12-09 04:03:00.776934] nvme_vfio_user.c:  49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001
00:12:32.361  [2024-12-09 04:03:00.777929] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:12:32.361  [2024-12-09 04:03:00.778040] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:12:32.361  [2024-12-09 04:03:00.778946] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1
00:12:32.362  [2024-12-09 04:03:00.778964] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:12:32.362  [2024-12-09 04:03:00.778973] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms)
00:12:32.362  [2024-12-09 04:03:00.778996] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout)
00:12:32.362  [2024-12-09 04:03:00.779010] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms)
00:12:32.362  [2024-12-09 04:03:00.779039] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:12:32.362  [2024-12-09 04:03:00.779049] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:12:32.362  [2024-12-09 04:03:00.779055] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:12:32.362  [2024-12-09 04:03:00.779071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:12:32.362  [2024-12-09 04:03:00.779136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0
00:12:32.362  [2024-12-09 04:03:00.779155] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072
00:12:32.362  [2024-12-09 04:03:00.779165] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072
00:12:32.362  [2024-12-09 04:03:00.779172] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001
00:12:32.362  [2024-12-09 04:03:00.779183] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000
00:12:32.362  [2024-12-09 04:03:00.779191] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1
00:12:32.362  [2024-12-09 04:03:00.779199] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1
00:12:32.362  [2024-12-09 04:03:00.779206] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms)
00:12:32.362  [2024-12-09 04:03:00.779218] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms)
00:12:32.362  [2024-12-09 04:03:00.779232] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0
00:12:32.362  [2024-12-09 04:03:00.779247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0
00:12:32.362  [2024-12-09 04:03:00.779291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:12:32.362  [2024-12-09 04:03:00.779305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:12:32.362  [2024-12-09 04:03:00.779317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:12:32.362  [2024-12-09 04:03:00.779329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:12:32.362  [2024-12-09 04:03:00.779338] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms)
00:12:32.362  [2024-12-09 04:03:00.779354] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:12:32.362  [2024-12-09 04:03:00.779370] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0
00:12:32.362  [2024-12-09 04:03:00.779382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0
00:12:32.362  [2024-12-09 04:03:00.779393] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms
00:12:32.362  [2024-12-09 04:03:00.779401] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms)
00:12:32.362  [2024-12-09 04:03:00.779413] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms)
00:12:32.362  [2024-12-09 04:03:00.779423] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms)
00:12:32.362  [2024-12-09 04:03:00.779436] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:12:32.362  [2024-12-09 04:03:00.779455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0
00:12:32.362  [2024-12-09 04:03:00.779524] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms)
00:12:32.362  [2024-12-09 04:03:00.779541] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms)
00:12:32.362  [2024-12-09 04:03:00.779570] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096
00:12:32.362  [2024-12-09 04:03:00.779580] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000
00:12:32.362  [2024-12-09 04:03:00.779591] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:12:32.362  [2024-12-09 04:03:00.779601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0
00:12:32.362  [2024-12-09 04:03:00.779638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0
00:12:32.362  [2024-12-09 04:03:00.779655] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added
00:12:32.362  [2024-12-09 04:03:00.779676] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms)
00:12:32.362  [2024-12-09 04:03:00.779691] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms)
00:12:32.362  [2024-12-09 04:03:00.779703] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:12:32.362  [2024-12-09 04:03:00.779711] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:12:32.362  [2024-12-09 04:03:00.779717] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:12:32.362  [2024-12-09 04:03:00.779726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:12:32.362  [2024-12-09 04:03:00.779759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0
00:12:32.362  [2024-12-09 04:03:00.779780] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms)
00:12:32.362  [2024-12-09 04:03:00.779795] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:12:32.362  [2024-12-09 04:03:00.779807] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:12:32.362  [2024-12-09 04:03:00.779815] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:12:32.362  [2024-12-09 04:03:00.779821] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:12:32.362  [2024-12-09 04:03:00.779830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:12:32.362  [2024-12-09 04:03:00.779846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0
00:12:32.362  [2024-12-09 04:03:00.779859] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms)
00:12:32.362  [2024-12-09 04:03:00.779871] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms)
00:12:32.362  [2024-12-09 04:03:00.779884] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms)
00:12:32.362  [2024-12-09 04:03:00.779898] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms)
00:12:32.362  [2024-12-09 04:03:00.779907] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms)
00:12:32.362  [2024-12-09 04:03:00.779915] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms)
00:12:32.362  [2024-12-09 04:03:00.779923] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID
00:12:32.362  [2024-12-09 04:03:00.779934] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms)
00:12:32.362  [2024-12-09 04:03:00.779942] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout)
00:12:32.362  [2024-12-09 04:03:00.779967] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0
00:12:32.362  [2024-12-09 04:03:00.779985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0
00:12:32.362  [2024-12-09 04:03:00.780004] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0
00:12:32.362  [2024-12-09 04:03:00.780016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0
00:12:32.362  [2024-12-09 04:03:00.780031] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0
00:12:32.362  [2024-12-09 04:03:00.780042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0
00:12:32.362  [2024-12-09 04:03:00.780057] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:12:32.362  [2024-12-09 04:03:00.780068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0
00:12:32.362  [2024-12-09 04:03:00.780090] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192
00:12:32.362  [2024-12-09 04:03:00.780100] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000
00:12:32.362  [2024-12-09 04:03:00.780106] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000
00:12:32.362  [2024-12-09 04:03:00.780112] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000
00:12:32.362  [2024-12-09 04:03:00.780118] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2
00:12:32.362  [2024-12-09 04:03:00.780127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000
00:12:32.362  [2024-12-09 04:03:00.780138] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512
00:12:32.362  [2024-12-09 04:03:00.780146] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000
00:12:32.362  [2024-12-09 04:03:00.780152] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:12:32.363  [2024-12-09 04:03:00.780161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0
00:12:32.363  [2024-12-09 04:03:00.780171] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512
00:12:32.363  [2024-12-09 04:03:00.780179] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:12:32.363  [2024-12-09 04:03:00.780185] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:12:32.363  [2024-12-09 04:03:00.780193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:12:32.363  [2024-12-09 04:03:00.780205] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096
00:12:32.363  [2024-12-09 04:03:00.780212] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000
00:12:32.363  [2024-12-09 04:03:00.780218] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:12:32.363  [2024-12-09 04:03:00.780226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0
00:12:32.363  [2024-12-09 04:03:00.780237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0
00:12:32.363  [2024-12-09 04:03:00.780285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0
00:12:32.363  [2024-12-09 04:03:00.780307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0
00:12:32.363  [2024-12-09 04:03:00.780320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0
00:12:32.363  =====================================================
00:12:32.363  NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:12:32.363  =====================================================
00:12:32.363  Controller Capabilities/Features
00:12:32.363  ================================
00:12:32.363  Vendor ID:                             4e58
00:12:32.363  Subsystem Vendor ID:                   4e58
00:12:32.363  Serial Number:                         SPDK1
00:12:32.363  Model Number:                          SPDK bdev Controller
00:12:32.363  Firmware Version:                      25.01
00:12:32.363  Recommended Arb Burst:                 6
00:12:32.363  IEEE OUI Identifier:                   8d 6b 50
00:12:32.363  Multi-path I/O
00:12:32.363    May have multiple subsystem ports:   Yes
00:12:32.363    May have multiple controllers:       Yes
00:12:32.363    Associated with SR-IOV VF:           No
00:12:32.363  Max Data Transfer Size:                131072
00:12:32.363  Max Number of Namespaces:              32
00:12:32.363  Max Number of I/O Queues:              127
00:12:32.363  NVMe Specification Version (VS):       1.3
00:12:32.363  NVMe Specification Version (Identify): 1.3
00:12:32.363  Maximum Queue Entries:                 256
00:12:32.363  Contiguous Queues Required:            Yes
00:12:32.363  Arbitration Mechanisms Supported
00:12:32.363    Weighted Round Robin:                Not Supported
00:12:32.363    Vendor Specific:                     Not Supported
00:12:32.363  Reset Timeout:                         15000 ms
00:12:32.363  Doorbell Stride:                       4 bytes
00:12:32.363  NVM Subsystem Reset:                   Not Supported
00:12:32.363  Command Sets Supported
00:12:32.363    NVM Command Set:                     Supported
00:12:32.363  Boot Partition:                        Not Supported
00:12:32.363  Memory Page Size Minimum:              4096 bytes
00:12:32.363  Memory Page Size Maximum:              4096 bytes
00:12:32.363  Persistent Memory Region:              Not Supported
00:12:32.363  Optional Asynchronous Events Supported
00:12:32.363    Namespace Attribute Notices:         Supported
00:12:32.363    Firmware Activation Notices:         Not Supported
00:12:32.363    ANA Change Notices:                  Not Supported
00:12:32.363    PLE Aggregate Log Change Notices:    Not Supported
00:12:32.363    LBA Status Info Alert Notices:       Not Supported
00:12:32.363    EGE Aggregate Log Change Notices:    Not Supported
00:12:32.363    Normal NVM Subsystem Shutdown event: Not Supported
00:12:32.363    Zone Descriptor Change Notices:      Not Supported
00:12:32.363    Discovery Log Change Notices:        Not Supported
00:12:32.363  Controller Attributes
00:12:32.363    128-bit Host Identifier:             Supported
00:12:32.363    Non-Operational Permissive Mode:     Not Supported
00:12:32.363    NVM Sets:                            Not Supported
00:12:32.363    Read Recovery Levels:                Not Supported
00:12:32.363    Endurance Groups:                    Not Supported
00:12:32.363    Predictable Latency Mode:            Not Supported
00:12:32.363    Traffic Based Keep Alive:            Not Supported
00:12:32.363    Namespace Granularity:               Not Supported
00:12:32.363    SQ Associations:                     Not Supported
00:12:32.363    UUID List:                           Not Supported
00:12:32.363    Multi-Domain Subsystem:              Not Supported
00:12:32.363    Fixed Capacity Management:           Not Supported
00:12:32.363    Variable Capacity Management:        Not Supported
00:12:32.363    Delete Endurance Group:              Not Supported
00:12:32.363    Delete NVM Set:                      Not Supported
00:12:32.363    Extended LBA Formats Supported:      Not Supported
00:12:32.363    Flexible Data Placement Supported:   Not Supported
00:12:32.363  
00:12:32.363  Controller Memory Buffer Support
00:12:32.363  ================================
00:12:32.363  Supported:                             No
00:12:32.363  
00:12:32.363  Persistent Memory Region Support
00:12:32.363  ================================
00:12:32.363  Supported:                             No
00:12:32.363  
00:12:32.363  Admin Command Set Attributes
00:12:32.363  ============================
00:12:32.363  Security Send/Receive:                 Not Supported
00:12:32.363  Format NVM:                            Not Supported
00:12:32.363  Firmware Activate/Download:            Not Supported
00:12:32.363  Namespace Management:                  Not Supported
00:12:32.363  Device Self-Test:                      Not Supported
00:12:32.363  Directives:                            Not Supported
00:12:32.363  NVMe-MI:                               Not Supported
00:12:32.363  Virtualization Management:             Not Supported
00:12:32.363  Doorbell Buffer Config:                Not Supported
00:12:32.363  Get LBA Status Capability:             Not Supported
00:12:32.363  Command & Feature Lockdown Capability: Not Supported
00:12:32.363  Abort Command Limit:                   4
00:12:32.363  Async Event Request Limit:             4
00:12:32.363  Number of Firmware Slots:              N/A
00:12:32.363  Firmware Slot 1 Read-Only:             N/A
00:12:32.363  Firmware Activation Without Reset:     N/A
00:12:32.363  Multiple Update Detection Support:     N/A
00:12:32.363  Firmware Update Granularity:           No Information Provided
00:12:32.363  Per-Namespace SMART Log:               No
00:12:32.363  Asymmetric Namespace Access Log Page:  Not Supported
00:12:32.363  Subsystem NQN:                         nqn.2019-07.io.spdk:cnode1
00:12:32.363  Command Effects Log Page:              Supported
00:12:32.363  Get Log Page Extended Data:            Supported
00:12:32.363  Telemetry Log Pages:                   Not Supported
00:12:32.363  Persistent Event Log Pages:            Not Supported
00:12:32.363  Supported Log Pages Log Page:          May Support
00:12:32.363  Commands Supported & Effects Log Page: Not Supported
00:12:32.363  Feature Identifiers & Effects Log Page: May Support
00:12:32.363  NVMe-MI Commands & Effects Log Page:   May Support
00:12:32.363  Data Area 4 for Telemetry Log:         Not Supported
00:12:32.363  Error Log Page Entries Supported:      128
00:12:32.363  Keep Alive:                            Supported
00:12:32.363  Keep Alive Granularity:                10000 ms
00:12:32.363  
00:12:32.363  NVM Command Set Attributes
00:12:32.363  ==========================
00:12:32.363  Submission Queue Entry Size
00:12:32.363    Max:                       64
00:12:32.363    Min:                       64
00:12:32.363  Completion Queue Entry Size
00:12:32.363    Max:                       16
00:12:32.363    Min:                       16
00:12:32.363  Number of Namespaces:        32
00:12:32.363  Compare Command:             Supported
00:12:32.363  Write Uncorrectable Command: Not Supported
00:12:32.363  Dataset Management Command:  Supported
00:12:32.363  Write Zeroes Command:        Supported
00:12:32.363  Set Features Save Field:     Not Supported
00:12:32.363  Reservations:                Not Supported
00:12:32.363  Timestamp:                   Not Supported
00:12:32.363  Copy:                        Supported
00:12:32.363  Volatile Write Cache:        Present
00:12:32.363  Atomic Write Unit (Normal):  1
00:12:32.363  Atomic Write Unit (PFail):   1
00:12:32.363  Atomic Compare & Write Unit: 1
00:12:32.363  Fused Compare & Write:       Supported
00:12:32.363  Scatter-Gather List
00:12:32.363    SGL Command Set:           Supported (Dword aligned)
00:12:32.363    SGL Keyed:                 Not Supported
00:12:32.363    SGL Bit Bucket Descriptor: Not Supported
00:12:32.363    SGL Metadata Pointer:      Not Supported
00:12:32.363    Oversized SGL:             Not Supported
00:12:32.363    SGL Metadata Address:      Not Supported
00:12:32.363    SGL Offset:                Not Supported
00:12:32.363    Transport SGL Data Block:  Not Supported
00:12:32.363  Replay Protected Memory Block:  Not Supported
00:12:32.363  
00:12:32.363  Firmware Slot Information
00:12:32.363  =========================
00:12:32.363  Active slot:                 1
00:12:32.363  Slot 1 Firmware Revision:    25.01
00:12:32.363  
00:12:32.363  
00:12:32.363  Commands Supported and Effects
00:12:32.363  ==============================
00:12:32.363  Admin Commands
00:12:32.363  --------------
00:12:32.363                    Get Log Page (02h): Supported 
00:12:32.363                        Identify (06h): Supported 
00:12:32.363                           Abort (08h): Supported 
00:12:32.363                    Set Features (09h): Supported 
00:12:32.363                    Get Features (0Ah): Supported 
00:12:32.363      Asynchronous Event Request (0Ch): Supported 
00:12:32.363                      Keep Alive (18h): Supported 
00:12:32.363  I/O Commands
00:12:32.363  ------------
00:12:32.363                           Flush (00h): Supported LBA-Change 
00:12:32.363                           Write (01h): Supported LBA-Change 
00:12:32.363                            Read (02h): Supported 
00:12:32.363                         Compare (05h): Supported 
00:12:32.363                    Write Zeroes (08h): Supported LBA-Change 
00:12:32.363              Dataset Management (09h): Supported LBA-Change 
00:12:32.364                            Copy (19h): Supported LBA-Change 
00:12:32.364  
00:12:32.364  Error Log
00:12:32.364  =========
00:12:32.364  
00:12:32.364  Arbitration
00:12:32.364  ===========
00:12:32.364  Arbitration Burst:           1
00:12:32.364  
00:12:32.364  Power Management
00:12:32.364  ================
00:12:32.364  Number of Power States:          1
00:12:32.364  Current Power State:             Power State #0
00:12:32.364  Power State #0:
00:12:32.364    Max Power:                      0.00 W
00:12:32.364    Non-Operational State:         Operational
00:12:32.364    Entry Latency:                 Not Reported
00:12:32.364    Exit Latency:                  Not Reported
00:12:32.364    Relative Read Throughput:      0
00:12:32.364    Relative Read Latency:         0
00:12:32.364    Relative Write Throughput:     0
00:12:32.364    Relative Write Latency:        0
00:12:32.364    Idle Power:                     Not Reported
00:12:32.364    Active Power:                   Not Reported
00:12:32.364  Non-Operational Permissive Mode: Not Supported
00:12:32.364  
00:12:32.364  Health Information
00:12:32.364  ==================
00:12:32.364  Critical Warnings:
00:12:32.364    Available Spare Space:     OK
00:12:32.364    Temperature:               OK
00:12:32.364    Device Reliability:        OK
00:12:32.364    Read Only:                 No
00:12:32.364    Volatile Memory Backup:    OK
00:12:32.364  Current Temperature:         0 Kelvin (-273 Celsius)
00:12:32.364  Temperature Threshold:       0 Kelvin (-273 Celsius)
00:12:32.364  Available Spare:             0%
00:12:32.364  [2024-12-09 04:03:00.780444] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
00:12:32.364  [2024-12-09 04:03:00.780462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
00:12:32.364  [2024-12-09 04:03:00.780506] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD
00:12:32.364  [2024-12-09 04:03:00.780525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:32.364  [2024-12-09 04:03:00.780537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:32.364  [2024-12-09 04:03:00.780547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:32.364  [2024-12-09 04:03:00.780557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:32.364  [2024-12-09 04:03:00.784284] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001
00:12:32.364  [2024-12-09 04:03:00.784306] nvme_vfio_user.c:  49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001
00:12:32.364  [2024-12-09 04:03:00.784973] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:12:32.364  [2024-12-09 04:03:00.785061] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us
00:12:32.364  [2024-12-09 04:03:00.785075] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms
00:12:32.364  [2024-12-09 04:03:00.785978] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9
00:12:32.364  [2024-12-09 04:03:00.786002] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds
00:12:32.364  [2024-12-09 04:03:00.786056] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl
00:12:32.364  [2024-12-09 04:03:00.788021] vfio_user_pci.c:  96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:12:32.364  Available Spare Threshold:   0%
00:12:32.364  Life Percentage Used:        0%
00:12:32.364  Data Units Read:             0
00:12:32.364  Data Units Written:          0
00:12:32.364  Host Read Commands:          0
00:12:32.364  Host Write Commands:         0
00:12:32.364  Controller Busy Time:        0 minutes
00:12:32.364  Power Cycles:                0
00:12:32.364  Power On Hours:              0 hours
00:12:32.364  Unsafe Shutdowns:            0
00:12:32.364  Unrecoverable Media Errors:  0
00:12:32.364  Lifetime Error Log Entries:  0
00:12:32.364  Warning Temperature Time:    0 minutes
00:12:32.364  Critical Temperature Time:   0 minutes
00:12:32.364  
00:12:32.364  Number of Queues
00:12:32.364  ================
00:12:32.364  Number of I/O Submission Queues:      127
00:12:32.364  Number of I/O Completion Queues:      127
00:12:32.364  
00:12:32.364  Active Namespaces
00:12:32.364  =================
00:12:32.364  Namespace ID:1
00:12:32.364  Error Recovery Timeout:                Unlimited
00:12:32.364  Command Set Identifier:                NVM (00h)
00:12:32.364  Deallocate:                            Supported
00:12:32.364  Deallocated/Unwritten Error:           Not Supported
00:12:32.364  Deallocated Read Value:                Unknown
00:12:32.364  Deallocate in Write Zeroes:            Not Supported
00:12:32.364  Deallocated Guard Field:               0xFFFF
00:12:32.364  Flush:                                 Supported
00:12:32.364  Reservation:                           Supported
00:12:32.364  Namespace Sharing Capabilities:        Multiple Controllers
00:12:32.364  Size (in LBAs):                        131072 (0GiB)
00:12:32.364  Capacity (in LBAs):                    131072 (0GiB)
00:12:32.364  Utilization (in LBAs):                 131072 (0GiB)
00:12:32.364  NGUID:                                 07D9A539FF234D2C94FF04FF7F2B2437
00:12:32.364  UUID:                                  07d9a539-ff23-4d2c-94ff-04ff7f2b2437
00:12:32.364  Thin Provisioning:                     Not Supported
00:12:32.364  Per-NS Atomic Units:                   Yes
00:12:32.364    Atomic Boundary Size (Normal):       0
00:12:32.364    Atomic Boundary Size (PFail):        0
00:12:32.364    Atomic Boundary Offset:              0
00:12:32.364  Maximum Single Source Range Length:    65535
00:12:32.364  Maximum Copy Length:                   65535
00:12:32.364  Maximum Source Range Count:            1
00:12:32.364  NGUID/EUI64 Never Reused:              No
00:12:32.364  Namespace Write Protected:             No
00:12:32.364  Number of LBA Formats:                 1
00:12:32.364  Current LBA Format:                    LBA Format #00
00:12:32.364  LBA Format #00: Data Size:   512  Metadata Size:     0
00:12:32.364  
00:12:32.364   04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
00:12:32.622  [2024-12-09 04:03:01.042346] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:12:37.888  Initializing NVMe Controllers
00:12:37.888  Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:12:37.888  Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1
00:12:37.888  Initialization complete. Launching workers.
00:12:37.888  ========================================================
00:12:37.888                                                                                                           Latency(us)
00:12:37.888  Device Information                                                   :       IOPS      MiB/s    Average        min        max
00:12:37.888  VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core  1:   29639.85     115.78    4317.78    1249.22   11377.03
00:12:37.888  ========================================================
00:12:37.888  Total                                                                :   29639.85     115.78    4317.78    1249.22   11377.03
00:12:37.888  
00:12:37.888  [2024-12-09 04:03:06.063419] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:12:37.888   04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
00:12:37.888  [2024-12-09 04:03:06.330724] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:12:43.152  Initializing NVMe Controllers
00:12:43.152  Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:12:43.152  Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1
00:12:43.152  Initialization complete. Launching workers.
00:12:43.152  ========================================================
00:12:43.152                                                                                                           Latency(us)
00:12:43.152  Device Information                                                   :       IOPS      MiB/s    Average        min        max
00:12:43.152  VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core  1:   16042.50      62.67    7978.09    6984.49    8117.26
00:12:43.152  ========================================================
00:12:43.152  Total                                                                :   16042.50      62.67    7978.09    6984.49    8117.26
00:12:43.152  
00:12:43.152  [2024-12-09 04:03:11.368374] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:12:43.152   04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
00:12:43.153  [2024-12-09 04:03:11.606577] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:12:48.418  [2024-12-09 04:03:16.720860] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:12:48.418  Initializing NVMe Controllers
00:12:48.418  Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:12:48.418  Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:12:48.418  Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1
00:12:48.418  Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2
00:12:48.418  Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3
00:12:48.418  Initialization complete. Launching workers.
00:12:48.418  Starting thread on core 2
00:12:48.418  Starting thread on core 3
00:12:48.418  Starting thread on core 1
00:12:48.418   04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g
00:12:48.675  [2024-12-09 04:03:17.054807] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:12:51.951  [2024-12-09 04:03:20.117691] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:12:51.951  Initializing NVMe Controllers
00:12:51.951  Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:12:51.951  Attached to /var/run/vfio-user/domain/vfio-user1/1
00:12:51.951  Associating SPDK bdev Controller (SPDK1               ) with lcore 0
00:12:51.951  Associating SPDK bdev Controller (SPDK1               ) with lcore 1
00:12:51.951  Associating SPDK bdev Controller (SPDK1               ) with lcore 2
00:12:51.951  Associating SPDK bdev Controller (SPDK1               ) with lcore 3
00:12:51.951  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration:
00:12:51.951  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1
00:12:51.951  Initialization complete. Launching workers.
00:12:51.951  Starting thread on core 1 with urgent priority queue
00:12:51.951  Starting thread on core 2 with urgent priority queue
00:12:51.951  Starting thread on core 3 with urgent priority queue
00:12:51.951  Starting thread on core 0 with urgent priority queue
00:12:51.951  SPDK bdev Controller (SPDK1               ) core 0:  4473.67 IO/s    22.35 secs/100000 ios
00:12:51.951  SPDK bdev Controller (SPDK1               ) core 1:  5294.00 IO/s    18.89 secs/100000 ios
00:12:51.951  SPDK bdev Controller (SPDK1               ) core 2:  5774.00 IO/s    17.32 secs/100000 ios
00:12:51.951  SPDK bdev Controller (SPDK1               ) core 3:  5778.33 IO/s    17.31 secs/100000 ios
00:12:51.951  ========================================================
00:12:51.951  
00:12:51.951   04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
00:12:51.951  [2024-12-09 04:03:20.430860] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:12:51.951  Initializing NVMe Controllers
00:12:51.951  Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:12:51.951  Attached to /var/run/vfio-user/domain/vfio-user1/1
00:12:51.951    Namespace ID: 1 size: 0GB
00:12:51.951  Initialization complete.
00:12:51.951  INFO: using host memory buffer for IO
00:12:51.951  Hello world!
00:12:51.951  [2024-12-09 04:03:20.464475] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:12:51.951   04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
00:12:52.207  [2024-12-09 04:03:20.770161] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:12:53.580  Initializing NVMe Controllers
00:12:53.580  Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:12:53.580  Attached to /var/run/vfio-user/domain/vfio-user1/1
00:12:53.580  Initialization complete. Launching workers.
00:12:53.580  submit (in ns)   avg, min, max =   8524.7,   3516.7, 4016075.6
00:12:53.580  complete (in ns) avg, min, max =  26766.8,   2062.2, 4014678.9
00:12:53.580  
00:12:53.580  Submit histogram
00:12:53.580  ================
00:12:53.580         Range in us     Cumulative     Count
00:12:53.580      3.508 -     3.532:    0.1937%  (       24)
00:12:53.580      3.532 -     3.556:    0.9766%  (       97)
00:12:53.580      3.556 -     3.579:    2.8571%  (      233)
00:12:53.580      3.579 -     3.603:    6.5779%  (      461)
00:12:53.580      3.603 -     3.627:   12.7119%  (      760)
00:12:53.580      3.627 -     3.650:   20.3713%  (      949)
00:12:53.580      3.650 -     3.674:   28.7732%  (     1041)
00:12:53.580      3.674 -     3.698:   36.8200%  (      997)
00:12:53.580      3.698 -     3.721:   44.1566%  (      909)
00:12:53.580      3.721 -     3.745:   49.8951%  (      711)
00:12:53.580      3.745 -     3.769:   54.3745%  (      555)
00:12:53.580      3.769 -     3.793:   58.3293%  (      490)
00:12:53.580      3.793 -     3.816:   61.8160%  (      432)
00:12:53.580      3.816 -     3.840:   65.4237%  (      447)
00:12:53.580      3.840 -     3.864:   69.4108%  (      494)
00:12:53.580      3.864 -     3.887:   73.7369%  (      536)
00:12:53.580      3.887 -     3.911:   77.9742%  (      525)
00:12:53.580      3.911 -     3.935:   81.5981%  (      449)
00:12:53.580      3.935 -     3.959:   84.3180%  (      337)
00:12:53.580      3.959 -     3.982:   86.4891%  (      269)
00:12:53.580      3.982 -     4.006:   88.1598%  (      207)
00:12:53.580      4.006 -     4.030:   89.4108%  (      155)
00:12:53.580      4.030 -     4.053:   90.5327%  (      139)
00:12:53.580      4.053 -     4.077:   91.6303%  (      136)
00:12:53.581      4.077 -     4.101:   92.6796%  (      130)
00:12:53.581      4.101 -     4.124:   93.5432%  (      107)
00:12:53.581      4.124 -     4.148:   94.3584%  (      101)
00:12:53.581      4.148 -     4.172:   94.9475%  (       73)
00:12:53.581      4.172 -     4.196:   95.5367%  (       73)
00:12:53.581      4.196 -     4.219:   95.8918%  (       44)
00:12:53.581      4.219 -     4.243:   96.1985%  (       38)
00:12:53.581      4.243 -     4.267:   96.3519%  (       19)
00:12:53.581      4.267 -     4.290:   96.5295%  (       22)
00:12:53.581      4.290 -     4.314:   96.6263%  (       12)
00:12:53.581      4.314 -     4.338:   96.7554%  (       16)
00:12:53.581      4.338 -     4.361:   96.8604%  (       13)
00:12:53.581      4.361 -     4.385:   96.9572%  (       12)
00:12:53.581      4.385 -     4.409:   97.0621%  (       13)
00:12:53.581      4.409 -     4.433:   97.1186%  (        7)
00:12:53.581      4.433 -     4.456:   97.1994%  (       10)
00:12:53.581      4.456 -     4.480:   97.2236%  (        3)
00:12:53.581      4.480 -     4.504:   97.2397%  (        2)
00:12:53.581      4.504 -     4.527:   97.2639%  (        3)
00:12:53.581      4.527 -     4.551:   97.2962%  (        4)
00:12:53.581      4.551 -     4.575:   97.3043%  (        1)
00:12:53.581      4.622 -     4.646:   97.3285%  (        3)
00:12:53.581      4.646 -     4.670:   97.3366%  (        1)
00:12:53.581      4.693 -     4.717:   97.3527%  (        2)
00:12:53.581      4.717 -     4.741:   97.3608%  (        1)
00:12:53.581      4.741 -     4.764:   97.3769%  (        2)
00:12:53.581      4.764 -     4.788:   97.4092%  (        4)
00:12:53.581      4.788 -     4.812:   97.4334%  (        3)
00:12:53.581      4.812 -     4.836:   97.4496%  (        2)
00:12:53.581      4.836 -     4.859:   97.4899%  (        5)
00:12:53.581      4.859 -     4.883:   97.5061%  (        2)
00:12:53.581      4.883 -     4.907:   97.5545%  (        6)
00:12:53.581      4.907 -     4.930:   97.5787%  (        3)
00:12:53.581      4.930 -     4.954:   97.6190%  (        5)
00:12:53.581      4.954 -     4.978:   97.6352%  (        2)
00:12:53.581      4.978 -     5.001:   97.6755%  (        5)
00:12:53.581      5.001 -     5.025:   97.7240%  (        6)
00:12:53.581      5.025 -     5.049:   97.7401%  (        2)
00:12:53.581      5.049 -     5.073:   97.7643%  (        3)
00:12:53.581      5.073 -     5.096:   97.8047%  (        5)
00:12:53.581      5.096 -     5.120:   97.8289%  (        3)
00:12:53.581      5.120 -     5.144:   97.8531%  (        3)
00:12:53.581      5.144 -     5.167:   97.8692%  (        2)
00:12:53.581      5.167 -     5.191:   97.8854%  (        2)
00:12:53.581      5.191 -     5.215:   97.8935%  (        1)
00:12:53.581      5.215 -     5.239:   97.9338%  (        5)
00:12:53.581      5.239 -     5.262:   97.9500%  (        2)
00:12:53.581      5.262 -     5.286:   97.9742%  (        3)
00:12:53.581      5.310 -     5.333:   98.0065%  (        4)
00:12:53.581      5.333 -     5.357:   98.0145%  (        1)
00:12:53.581      5.357 -     5.381:   98.0307%  (        2)
00:12:53.581      5.381 -     5.404:   98.0630%  (        4)
00:12:53.581      5.404 -     5.428:   98.0710%  (        1)
00:12:53.581      5.452 -     5.476:   98.0872%  (        2)
00:12:53.581      5.476 -     5.499:   98.0952%  (        1)
00:12:53.581      5.523 -     5.547:   98.1033%  (        1)
00:12:53.581      5.760 -     5.784:   98.1114%  (        1)
00:12:53.581      6.021 -     6.044:   98.1195%  (        1)
00:12:53.581      6.044 -     6.068:   98.1275%  (        1)
00:12:53.581      6.210 -     6.258:   98.1437%  (        2)
00:12:53.581      6.258 -     6.305:   98.1517%  (        1)
00:12:53.581      6.921 -     6.969:   98.1598%  (        1)
00:12:53.581      7.159 -     7.206:   98.1679%  (        1)
00:12:53.581      7.206 -     7.253:   98.1840%  (        2)
00:12:53.581      7.253 -     7.301:   98.1921%  (        1)
00:12:53.581      7.301 -     7.348:   98.2002%  (        1)
00:12:53.581      7.348 -     7.396:   98.2163%  (        2)
00:12:53.581      7.396 -     7.443:   98.2244%  (        1)
00:12:53.581      7.443 -     7.490:   98.2324%  (        1)
00:12:53.581      7.538 -     7.585:   98.2486%  (        2)
00:12:53.581      7.585 -     7.633:   98.2567%  (        1)
00:12:53.581      7.633 -     7.680:   98.2647%  (        1)
00:12:53.581      7.680 -     7.727:   98.2728%  (        1)
00:12:53.581      7.727 -     7.775:   98.2809%  (        1)
00:12:53.581      7.775 -     7.822:   98.2889%  (        1)
00:12:53.581      7.870 -     7.917:   98.3051%  (        2)
00:12:53.581      7.917 -     7.964:   98.3132%  (        1)
00:12:53.581      7.964 -     8.012:   98.3212%  (        1)
00:12:53.581      8.012 -     8.059:   98.3535%  (        4)
00:12:53.581      8.107 -     8.154:   98.3616%  (        1)
00:12:53.581      8.154 -     8.201:   98.3858%  (        3)
00:12:53.581      8.296 -     8.344:   98.4019%  (        2)
00:12:53.581      8.344 -     8.391:   98.4100%  (        1)
00:12:53.581      8.391 -     8.439:   98.4342%  (        3)
00:12:53.581      8.533 -     8.581:   98.4423%  (        1)
00:12:53.581      8.581 -     8.628:   98.4504%  (        1)
00:12:53.581      8.723 -     8.770:   98.4665%  (        2)
00:12:53.581      8.865 -     8.913:   98.4746%  (        1)
00:12:53.581      9.292 -     9.339:   98.4907%  (        2)
00:12:53.581     10.003 -    10.050:   98.4988%  (        1)
00:12:53.581     10.050 -    10.098:   98.5069%  (        1)
00:12:53.581     10.098 -    10.145:   98.5149%  (        1)
00:12:53.581     10.145 -    10.193:   98.5230%  (        1)
00:12:53.581     10.477 -    10.524:   98.5311%  (        1)
00:12:53.581     10.572 -    10.619:   98.5391%  (        1)
00:12:53.581     10.619 -    10.667:   98.5472%  (        1)
00:12:53.581     10.714 -    10.761:   98.5553%  (        1)
00:12:53.581     10.904 -    10.951:   98.5634%  (        1)
00:12:53.581     11.093 -    11.141:   98.5714%  (        1)
00:12:53.581     11.710 -    11.757:   98.5795%  (        1)
00:12:53.581     11.852 -    11.899:   98.5876%  (        1)
00:12:53.581     12.136 -    12.231:   98.6037%  (        2)
00:12:53.581     12.326 -    12.421:   98.6118%  (        1)
00:12:53.581     12.421 -    12.516:   98.6199%  (        1)
00:12:53.581     12.610 -    12.705:   98.6279%  (        1)
00:12:53.581     12.705 -    12.800:   98.6360%  (        1)
00:12:53.581     12.990 -    13.084:   98.6441%  (        1)
00:12:53.581     13.369 -    13.464:   98.6521%  (        1)
00:12:53.581     13.843 -    13.938:   98.6602%  (        1)
00:12:53.581     14.033 -    14.127:   98.6764%  (        2)
00:12:53.581     14.412 -    14.507:   98.6844%  (        1)
00:12:53.581     14.696 -    14.791:   98.7006%  (        2)
00:12:53.581     14.791 -    14.886:   98.7086%  (        1)
00:12:53.581     14.886 -    14.981:   98.7167%  (        1)
00:12:53.581     16.308 -    16.403:   98.7248%  (        1)
00:12:53.581     16.972 -    17.067:   98.7328%  (        1)
00:12:53.581     17.161 -    17.256:   98.7409%  (        1)
00:12:53.581     17.256 -    17.351:   98.7571%  (        2)
00:12:53.581     17.351 -    17.446:   98.7893%  (        4)
00:12:53.581     17.541 -    17.636:   98.8378%  (        6)
00:12:53.581     17.636 -    17.730:   98.8701%  (        4)
00:12:53.581     17.730 -    17.825:   98.9104%  (        5)
00:12:53.581     17.825 -    17.920:   98.9750%  (        8)
00:12:53.581     17.920 -    18.015:   99.0476%  (        9)
00:12:53.581     18.015 -    18.110:   99.1283%  (       10)
00:12:53.581     18.110 -    18.204:   99.2010%  (        9)
00:12:53.581     18.204 -    18.299:   99.2413%  (        5)
00:12:53.581     18.299 -    18.394:   99.3220%  (       10)
00:12:53.581     18.394 -    18.489:   99.3462%  (        3)
00:12:53.581     18.489 -    18.584:   99.3785%  (        4)
00:12:53.581     18.584 -    18.679:   99.4431%  (        8)
00:12:53.581     18.679 -    18.773:   99.5077%  (        8)
00:12:53.581     18.773 -    18.868:   99.5722%  (        8)
00:12:53.581     18.868 -    18.963:   99.6126%  (        5)
00:12:53.581     18.963 -    19.058:   99.6368%  (        3)
00:12:53.581     19.058 -    19.153:   99.6610%  (        3)
00:12:53.581     19.153 -    19.247:   99.6772%  (        2)
00:12:53.581     19.247 -    19.342:   99.6852%  (        1)
00:12:53.581     19.342 -    19.437:   99.6933%  (        1)
00:12:53.581     19.437 -    19.532:   99.7175%  (        3)
00:12:53.581     19.532 -    19.627:   99.7417%  (        3)
00:12:53.581     19.627 -    19.721:   99.7821%  (        5)
00:12:53.581     19.721 -    19.816:   99.7982%  (        2)
00:12:53.581     19.816 -    19.911:   99.8063%  (        1)
00:12:53.581     20.006 -    20.101:   99.8224%  (        2)
00:12:53.581     20.290 -    20.385:   99.8305%  (        1)
00:12:53.581     20.859 -    20.954:   99.8386%  (        1)
00:12:53.581     22.945 -    23.040:   99.8467%  (        1)
00:12:53.581     23.893 -    23.988:   99.8547%  (        1)
00:12:53.581     25.790 -    25.979:   99.8628%  (        1)
00:12:53.581     27.876 -    28.065:   99.8789%  (        2)
00:12:53.581     29.393 -    29.582:   99.8870%  (        1)
00:12:53.581   3980.705 -  4004.978:   99.9677%  (       10)
00:12:53.581   4004.978 -  4029.250:  100.0000%  (        4)
00:12:53.581  
00:12:53.581  Complete histogram
00:12:53.581  ==================
00:12:53.581         Range in us     Cumulative     Count
00:12:53.581      2.062 -     2.074:   10.7748%  (     1335)
00:12:53.581      2.074 -     2.086:   45.7546%  (     4334)
00:12:53.581      2.086 -     2.098:   48.1195%  (      293)
00:12:53.581      2.098 -     2.110:   52.7684%  (      576)
00:12:53.581      2.110 -     2.121:   58.2002%  (      673)
00:12:53.581      2.121 -     2.133:   59.4108%  (      150)
00:12:53.581      2.133 -     2.145:   68.1114%  (     1078)
00:12:53.581      2.145 -     2.157:   74.7538%  (      823)
00:12:53.581      2.157 -     2.169:   75.6336%  (      109)
00:12:53.581      2.169 -     2.181:   78.3616%  (      338)
00:12:53.581      2.181 -     2.193:   79.7417%  (      171)
00:12:53.581      2.193 -     2.204:   80.3471%  (       75)
00:12:53.581      2.204 -     2.216:   83.7853%  (      426)
00:12:53.581      2.216 -     2.228:   87.9822%  (      520)
00:12:53.581      2.228 -     2.240:   90.1130%  (      264)
00:12:53.581      2.240 -     2.252:   91.6788%  (      194)
00:12:53.581      2.252 -     2.264:   92.5182%  (      104)
00:12:53.581      2.264 -     2.276:   92.7119%  (       24)
00:12:53.581      2.276 -     2.287:   93.1719%  (       57)
00:12:53.582      2.287 -     2.299:   93.9467%  (       96)
00:12:53.582      2.299 -     2.311:   94.7700%  (      102)
00:12:53.582      2.311 -     2.323:   94.9879%  (       27)
00:12:53.582      2.323 -     2.335:   95.0363%  (        6)
00:12:53.582      2.335 -     2.347:   95.0605%  (        3)
00:12:53.582      2.347 -     2.359:   95.1332%  (        9)
00:12:53.582      2.359 -     2.370:   95.3914%  (       32)
00:12:53.582      2.370 -     2.382:   95.9241%  (       66)
00:12:53.582      2.382 -     2.394:   96.4891%  (       70)
00:12:53.582      2.394 -     2.406:   96.8281%  (       42)
00:12:53.582      2.406 -     2.418:   97.0299%  (       25)
00:12:53.582      2.418 -     2.430:   97.1994%  (       21)
00:12:53.582      2.430 -     2.441:   97.3850%  (       23)
00:12:53.582      2.441 -     2.453:   97.5706%  (       23)
00:12:53.582      2.453 -     2.465:   97.6917%  (       15)
00:12:53.582      2.465 -     2.477:   97.7966%  (       13)
00:12:53.582      2.477 -     2.489:   97.8854%  (       11)
00:12:53.582      2.489 -     2.501:   97.9903%  (       13)
00:12:53.582      2.501 -     2.513:   98.0630%  (        9)
00:12:53.582      2.513 -     2.524:   98.0952%  (        4)
00:12:53.582      2.524 -     2.536:   98.1517%  (        7)
00:12:53.582      2.536 -     2.548:   98.2002%  (        6)
00:12:53.582      2.548 -     2.560:   98.2163%  (        2)
00:12:53.582      2.560 -     2.572:   98.2405%  (        3)
00:12:53.582      2.572 -     2.584:   98.2486%  (        1)
00:12:53.582      2.607 -     2.619:   98.2567%  (        1)
00:12:53.582      2.631 -     2.643:   98.2647%  (        1)
00:12:53.582      2.667 -     2.679:   98.2728%  (        1)
00:12:53.582      2.690 -     2.702:   98.2889%  (        2)
00:12:53.582      2.773 -     2.785:   98.2970%  (        1)
00:12:53.582      2.785 -     2.797:   98.3051%  (        1)
00:12:53.582      2.939 -     2.951:   98.3132%  (        1)
00:12:53.582      3.129 -     3.153:   98.3212%  (        1)
00:12:53.582      3.247 -     3.271:   98.3293%  (        1)
00:12:53.582      3.271 -     3.295:   98.3374%  (        1)
00:12:53.582      3.295 -     3.319:   98.3454%  (        1)
00:12:53.582      3.319 -     3.342:   98.3535%  (        1)
00:12:53.582      3.366 -     3.390:   98.3616%  (        1)
00:12:53.582      3.390 -     3.413:   98.3697%  (        1)
00:12:53.582      3.413 -     3.437:   98.3939%  (        3)
00:12:53.582      3.437 -     3.461:   98.4181%  (        3)
00:12:53.582      3.461 -     3.484:   98.4423%  (        3)
00:12:53.582      3.484 -     3.508:   98.4504%  (        1)
00:12:53.582      3.579 -     3.603:   98.4584%  (        1)
00:12:53.582      3.650 -     3.674:   98.4665%  (        1)
00:12:53.582      3.698 -     3.721:   98.4746%  (        1)
00:12:53.582      3.769 -     3.793:   98.4826%  (        1)
00:12:53.582      3.793 -     3.816:   98.4907%  (        1)
00:12:53.582      3.840 -     3.864:   98.4988%  (        1)
00:12:53.582      3.935 -     3.959:   98.5069%  (        1)
00:12:53.582      4.006 -     4.030:   98.5149%  (        1)
00:12:53.582      4.243 -     4.267:   98.5230%  (        1)
00:12:53.582      5.096 -     5.120:   98.5311%  (        1)
00:12:53.582      5.404 -     5.428:   98.5391%  (        1)
00:12:53.582      5.452 -     5.476:   98.5472%  (        1)
00:12:53.582  [2024-12-09 04:03:21.797447] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:12:53.582      5.499 -     5.523:   98.5553%  (        1)
00:12:53.582      5.547 -     5.570:   98.5634%  (        1)
00:12:53.582      5.641 -     5.665:   98.5714%  (        1)
00:12:53.582      5.760 -     5.784:   98.5795%  (        1)
00:12:53.582      5.879 -     5.902:   98.5876%  (        1)
00:12:53.582      5.973 -     5.997:   98.5956%  (        1)
00:12:53.582      6.021 -     6.044:   98.6037%  (        1)
00:12:53.582      6.163 -     6.210:   98.6118%  (        1)
00:12:53.582      6.400 -     6.447:   98.6199%  (        1)
00:12:53.582      6.447 -     6.495:   98.6279%  (        1)
00:12:53.582      6.684 -     6.732:   98.6360%  (        1)
00:12:53.582      6.827 -     6.874:   98.6441%  (        1)
00:12:53.582      7.064 -     7.111:   98.6521%  (        1)
00:12:53.582      7.159 -     7.206:   98.6602%  (        1)
00:12:53.582      7.206 -     7.253:   98.6683%  (        1)
00:12:53.582      7.348 -     7.396:   98.6764%  (        1)
00:12:53.582      7.490 -     7.538:   98.6844%  (        1)
00:12:53.582      7.822 -     7.870:   98.6925%  (        1)
00:12:53.582      7.917 -     7.964:   98.7006%  (        1)
00:12:53.582      8.439 -     8.486:   98.7086%  (        1)
00:12:53.582     11.899 -    11.947:   98.7167%  (        1)
00:12:53.582     15.360 -    15.455:   98.7248%  (        1)
00:12:53.582     15.455 -    15.550:   98.7328%  (        1)
00:12:53.582     15.550 -    15.644:   98.7409%  (        1)
00:12:53.582     15.644 -    15.739:   98.7490%  (        1)
00:12:53.582     15.739 -    15.834:   98.7732%  (        3)
00:12:53.582     15.834 -    15.929:   98.7974%  (        3)
00:12:53.582     15.929 -    16.024:   98.8136%  (        2)
00:12:53.582     16.024 -    16.119:   98.8539%  (        5)
00:12:53.582     16.119 -    16.213:   98.8862%  (        4)
00:12:53.582     16.213 -    16.308:   98.9104%  (        3)
00:12:53.582     16.308 -    16.403:   98.9346%  (        3)
00:12:53.582     16.403 -    16.498:   98.9588%  (        3)
00:12:53.582     16.498 -    16.593:   99.0153%  (        7)
00:12:53.582     16.593 -    16.687:   99.0880%  (        9)
00:12:53.582     16.687 -    16.782:   99.1445%  (        7)
00:12:53.582     16.782 -    16.877:   99.1687%  (        3)
00:12:53.582     16.877 -    16.972:   99.2090%  (        5)
00:12:53.582     16.972 -    17.067:   99.2252%  (        2)
00:12:53.582     17.067 -    17.161:   99.2333%  (        1)
00:12:53.582     17.161 -    17.256:   99.2494%  (        2)
00:12:53.582     17.256 -    17.351:   99.2655%  (        2)
00:12:53.582     17.351 -    17.446:   99.2978%  (        4)
00:12:53.582     17.446 -    17.541:   99.3140%  (        2)
00:12:53.582     17.636 -    17.730:   99.3220%  (        1)
00:12:53.582     17.730 -    17.825:   99.3301%  (        1)
00:12:53.582     17.920 -    18.015:   99.3382%  (        1)
00:12:53.582     18.204 -    18.299:   99.3462%  (        1)
00:12:53.582     18.963 -    19.058:   99.3543%  (        1)
00:12:53.582     20.385 -    20.480:   99.3624%  (        1)
00:12:53.582     25.600 -    25.790:   99.3705%  (        1)
00:12:53.582     40.770 -    40.960:   99.3785%  (        1)
00:12:53.582    163.840 -   164.599:   99.3866%  (        1)
00:12:53.582   3592.344 -  3616.616:   99.3947%  (        1)
00:12:53.582   3980.705 -  4004.978:   99.8305%  (       54)
00:12:53.582   4004.978 -  4029.250:  100.0000%  (       21)
00:12:53.582  
00:12:53.582   04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1
00:12:53.582   04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1
00:12:53.582   04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1
00:12:53.582   04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3
00:12:53.582   04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
00:12:53.582  [
00:12:53.582    {
00:12:53.582      "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:12:53.582      "subtype": "Discovery",
00:12:53.582      "listen_addresses": [],
00:12:53.582      "allow_any_host": true,
00:12:53.582      "hosts": []
00:12:53.582    },
00:12:53.582    {
00:12:53.582      "nqn": "nqn.2019-07.io.spdk:cnode1",
00:12:53.582      "subtype": "NVMe",
00:12:53.582      "listen_addresses": [
00:12:53.582        {
00:12:53.582          "trtype": "VFIOUSER",
00:12:53.582          "adrfam": "IPv4",
00:12:53.582          "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:12:53.582          "trsvcid": "0"
00:12:53.582        }
00:12:53.582      ],
00:12:53.582      "allow_any_host": true,
00:12:53.582      "hosts": [],
00:12:53.582      "serial_number": "SPDK1",
00:12:53.582      "model_number": "SPDK bdev Controller",
00:12:53.582      "max_namespaces": 32,
00:12:53.582      "min_cntlid": 1,
00:12:53.582      "max_cntlid": 65519,
00:12:53.582      "namespaces": [
00:12:53.582        {
00:12:53.582          "nsid": 1,
00:12:53.582          "bdev_name": "Malloc1",
00:12:53.582          "name": "Malloc1",
00:12:53.582          "nguid": "07D9A539FF234D2C94FF04FF7F2B2437",
00:12:53.582          "uuid": "07d9a539-ff23-4d2c-94ff-04ff7f2b2437"
00:12:53.582        }
00:12:53.582      ]
00:12:53.582    },
00:12:53.582    {
00:12:53.582      "nqn": "nqn.2019-07.io.spdk:cnode2",
00:12:53.582      "subtype": "NVMe",
00:12:53.582      "listen_addresses": [
00:12:53.582        {
00:12:53.582          "trtype": "VFIOUSER",
00:12:53.582          "adrfam": "IPv4",
00:12:53.582          "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:12:53.582          "trsvcid": "0"
00:12:53.582        }
00:12:53.582      ],
00:12:53.582      "allow_any_host": true,
00:12:53.582      "hosts": [],
00:12:53.582      "serial_number": "SPDK2",
00:12:53.582      "model_number": "SPDK bdev Controller",
00:12:53.582      "max_namespaces": 32,
00:12:53.582      "min_cntlid": 1,
00:12:53.582      "max_cntlid": 65519,
00:12:53.582      "namespaces": [
00:12:53.582        {
00:12:53.582          "nsid": 1,
00:12:53.582          "bdev_name": "Malloc2",
00:12:53.582          "name": "Malloc2",
00:12:53.582          "nguid": "1F14A502DA0A41F2920C11B007901159",
00:12:53.582          "uuid": "1f14a502-da0a-41f2-920c-11b007901159"
00:12:53.582        }
00:12:53.582      ]
00:12:53.582    }
00:12:53.582  ]
00:12:53.582   04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:12:53.582   04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=207599
00:12:53.583   04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r '		trtype:VFIOUSER 		traddr:/var/run/vfio-user/domain/vfio-user1/1 		subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file
00:12:53.583   04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file
00:12:53.583   04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0
00:12:53.583   04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:12:53.583   04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']'
00:12:53.583   04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1
00:12:53.583   04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1
00:12:53.840   04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:12:53.840   04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']'
00:12:53.840   04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2
00:12:53.840   04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1
00:12:53.840  [2024-12-09 04:03:22.298771] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:12:53.840   04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:12:53.840   04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:12:53.840   04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0
00:12:53.840   04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file
00:12:53.840   04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
00:12:54.098  Malloc3
00:12:54.098   04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
00:12:54.663  [2024-12-09 04:03:22.954932] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:12:54.663   04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
00:12:54.663  Asynchronous Event Request test
00:12:54.663  Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:12:54.663  Attached to /var/run/vfio-user/domain/vfio-user1/1
00:12:54.663  Registering asynchronous event callbacks...
00:12:54.663  Starting namespace attribute notice tests for all controllers...
00:12:54.663  /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:12:54.663  aer_cb - Changed Namespace
00:12:54.663  Cleaning up...
00:12:54.663  [
00:12:54.663    {
00:12:54.663      "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:12:54.663      "subtype": "Discovery",
00:12:54.663      "listen_addresses": [],
00:12:54.663      "allow_any_host": true,
00:12:54.663      "hosts": []
00:12:54.663    },
00:12:54.663    {
00:12:54.663      "nqn": "nqn.2019-07.io.spdk:cnode1",
00:12:54.664      "subtype": "NVMe",
00:12:54.664      "listen_addresses": [
00:12:54.664        {
00:12:54.664          "trtype": "VFIOUSER",
00:12:54.664          "adrfam": "IPv4",
00:12:54.664          "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:12:54.664          "trsvcid": "0"
00:12:54.664        }
00:12:54.664      ],
00:12:54.664      "allow_any_host": true,
00:12:54.664      "hosts": [],
00:12:54.664      "serial_number": "SPDK1",
00:12:54.664      "model_number": "SPDK bdev Controller",
00:12:54.664      "max_namespaces": 32,
00:12:54.664      "min_cntlid": 1,
00:12:54.664      "max_cntlid": 65519,
00:12:54.664      "namespaces": [
00:12:54.664        {
00:12:54.664          "nsid": 1,
00:12:54.664          "bdev_name": "Malloc1",
00:12:54.664          "name": "Malloc1",
00:12:54.664          "nguid": "07D9A539FF234D2C94FF04FF7F2B2437",
00:12:54.664          "uuid": "07d9a539-ff23-4d2c-94ff-04ff7f2b2437"
00:12:54.664        },
00:12:54.664        {
00:12:54.664          "nsid": 2,
00:12:54.664          "bdev_name": "Malloc3",
00:12:54.664          "name": "Malloc3",
00:12:54.664          "nguid": "FE3543652DAC4D0FB8FA008A85669FA7",
00:12:54.664          "uuid": "fe354365-2dac-4d0f-b8fa-008a85669fa7"
00:12:54.664        }
00:12:54.664      ]
00:12:54.664    },
00:12:54.664    {
00:12:54.664      "nqn": "nqn.2019-07.io.spdk:cnode2",
00:12:54.664      "subtype": "NVMe",
00:12:54.664      "listen_addresses": [
00:12:54.664        {
00:12:54.664          "trtype": "VFIOUSER",
00:12:54.664          "adrfam": "IPv4",
00:12:54.664          "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:12:54.664          "trsvcid": "0"
00:12:54.664        }
00:12:54.664      ],
00:12:54.664      "allow_any_host": true,
00:12:54.664      "hosts": [],
00:12:54.664      "serial_number": "SPDK2",
00:12:54.664      "model_number": "SPDK bdev Controller",
00:12:54.664      "max_namespaces": 32,
00:12:54.664      "min_cntlid": 1,
00:12:54.664      "max_cntlid": 65519,
00:12:54.664      "namespaces": [
00:12:54.664        {
00:12:54.664          "nsid": 1,
00:12:54.664          "bdev_name": "Malloc2",
00:12:54.664          "name": "Malloc2",
00:12:54.664          "nguid": "1F14A502DA0A41F2920C11B007901159",
00:12:54.664          "uuid": "1f14a502-da0a-41f2-920c-11b007901159"
00:12:54.664        }
00:12:54.664      ]
00:12:54.664    }
00:12:54.664  ]
00:12:54.923   04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 207599
00:12:54.923   04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES)
00:12:54.923   04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2
00:12:54.923   04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2
00:12:54.923   04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci
00:12:54.923  [2024-12-09 04:03:23.263765] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:12:54.923  [2024-12-09 04:03:23.263803] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid207739 ]
00:12:54.923  [2024-12-09 04:03:23.312100] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2
00:12:54.923  [2024-12-09 04:03:23.320556] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32
00:12:54.923  [2024-12-09 04:03:23.320605] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f3af1c4f000
00:12:54.923  [2024-12-09 04:03:23.321558] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:12:54.923  [2024-12-09 04:03:23.322562] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:12:54.923  [2024-12-09 04:03:23.323568] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:12:54.923  [2024-12-09 04:03:23.324594] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0
00:12:54.923  [2024-12-09 04:03:23.325596] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:12:54.923  [2024-12-09 04:03:23.326607] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:12:54.923  [2024-12-09 04:03:23.327607] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:12:54.923  [2024-12-09 04:03:23.328602] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:12:54.923  [2024-12-09 04:03:23.329618] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32
00:12:54.924  [2024-12-09 04:03:23.329655] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f3af1c44000
00:12:54.924  [2024-12-09 04:03:23.330772] vfio_user_pci.c:  65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:12:54.924  [2024-12-09 04:03:23.345909] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully
00:12:54.924  [2024-12-09 04:03:23.345946] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout)
00:12:54.924  [2024-12-09 04:03:23.351061] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff
00:12:54.924  [2024-12-09 04:03:23.351114] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192
00:12:54.924  [2024-12-09 04:03:23.351203] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout)
00:12:54.924  [2024-12-09 04:03:23.351225] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout)
00:12:54.924  [2024-12-09 04:03:23.351236] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout)
00:12:54.924  [2024-12-09 04:03:23.352066] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300
00:12:54.924  [2024-12-09 04:03:23.352090] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout)
00:12:54.924  [2024-12-09 04:03:23.352105] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout)
00:12:54.924  [2024-12-09 04:03:23.353069] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff
00:12:54.924  [2024-12-09 04:03:23.353090] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout)
00:12:54.924  [2024-12-09 04:03:23.353104] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms)
00:12:54.924  [2024-12-09 04:03:23.354076] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0
00:12:54.924  [2024-12-09 04:03:23.354096] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:12:54.924  [2024-12-09 04:03:23.355084] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0
00:12:54.924  [2024-12-09 04:03:23.355104] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0
00:12:54.924  [2024-12-09 04:03:23.355113] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms)
00:12:54.924  [2024-12-09 04:03:23.355125] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:12:54.924  [2024-12-09 04:03:23.355238] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1
00:12:54.924  [2024-12-09 04:03:23.355247] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:12:54.924  [2024-12-09 04:03:23.355277] nvme_vfio_user.c:  61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000
00:12:54.924  [2024-12-09 04:03:23.356095] nvme_vfio_user.c:  61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000
00:12:54.924  [2024-12-09 04:03:23.357096] nvme_vfio_user.c:  49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff
00:12:54.924  [2024-12-09 04:03:23.358104] nvme_vfio_user.c:  49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001
00:12:54.924  [2024-12-09 04:03:23.359102] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:12:54.924  [2024-12-09 04:03:23.359181] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:12:54.924  [2024-12-09 04:03:23.360116] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1
00:12:54.924  [2024-12-09 04:03:23.360136] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:12:54.924  [2024-12-09 04:03:23.360146] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms)
00:12:54.924  [2024-12-09 04:03:23.360172] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout)
00:12:54.924  [2024-12-09 04:03:23.360185] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms)
00:12:54.924  [2024-12-09 04:03:23.360208] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:12:54.924  [2024-12-09 04:03:23.360218] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:12:54.924  [2024-12-09 04:03:23.360224] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:12:54.924  [2024-12-09 04:03:23.360240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:12:54.924  [2024-12-09 04:03:23.368303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0
00:12:54.924  [2024-12-09 04:03:23.368341] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072
00:12:54.924  [2024-12-09 04:03:23.368351] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072
00:12:54.924  [2024-12-09 04:03:23.368359] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001
00:12:54.924  [2024-12-09 04:03:23.368367] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000
00:12:54.924  [2024-12-09 04:03:23.368374] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1
00:12:54.924  [2024-12-09 04:03:23.368382] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1
00:12:54.924  [2024-12-09 04:03:23.368390] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms)
00:12:54.924  [2024-12-09 04:03:23.368407] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms)
00:12:54.924  [2024-12-09 04:03:23.368423] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0
00:12:54.924  [2024-12-09 04:03:23.376300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0
00:12:54.924  [2024-12-09 04:03:23.376324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:12:54.924  [2024-12-09 04:03:23.376338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:12:54.924  [2024-12-09 04:03:23.376351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:12:54.924  [2024-12-09 04:03:23.376363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:12:54.924  [2024-12-09 04:03:23.376372] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms)
00:12:54.924  [2024-12-09 04:03:23.376389] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:12:54.924  [2024-12-09 04:03:23.376404] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0
00:12:54.924  [2024-12-09 04:03:23.384384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0
00:12:54.924  [2024-12-09 04:03:23.384403] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms
00:12:54.924  [2024-12-09 04:03:23.384412] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms)
00:12:54.924  [2024-12-09 04:03:23.384424] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms)
00:12:54.924  [2024-12-09 04:03:23.384434] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms)
00:12:54.924  [2024-12-09 04:03:23.384449] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:12:54.924  [2024-12-09 04:03:23.392297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0
00:12:54.924  [2024-12-09 04:03:23.392374] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms)
00:12:54.924  [2024-12-09 04:03:23.392392] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms)
00:12:54.924  [2024-12-09 04:03:23.392406] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096
00:12:54.924  [2024-12-09 04:03:23.392415] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000
00:12:54.924  [2024-12-09 04:03:23.392422] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:12:54.924  [2024-12-09 04:03:23.392432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0
00:12:54.924  [2024-12-09 04:03:23.400282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0
00:12:54.924  [2024-12-09 04:03:23.400305] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added
00:12:54.924  [2024-12-09 04:03:23.400334] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms)
00:12:54.924  [2024-12-09 04:03:23.400350] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms)
00:12:54.924  [2024-12-09 04:03:23.400364] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:12:54.924  [2024-12-09 04:03:23.400373] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:12:54.924  [2024-12-09 04:03:23.400379] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:12:54.924  [2024-12-09 04:03:23.400389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:12:54.924  [2024-12-09 04:03:23.408282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0
00:12:54.925  [2024-12-09 04:03:23.408320] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms)
00:12:54.925  [2024-12-09 04:03:23.408338] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:12:54.925  [2024-12-09 04:03:23.408353] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:12:54.925  [2024-12-09 04:03:23.408362] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:12:54.925  [2024-12-09 04:03:23.408368] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:12:54.925  [2024-12-09 04:03:23.408378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:12:54.925  [2024-12-09 04:03:23.416294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0
00:12:54.925  [2024-12-09 04:03:23.416316] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms)
00:12:54.925  [2024-12-09 04:03:23.416330] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms)
00:12:54.925  [2024-12-09 04:03:23.416344] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms)
00:12:54.925  [2024-12-09 04:03:23.416358] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms)
00:12:54.925  [2024-12-09 04:03:23.416367] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms)
00:12:54.925  [2024-12-09 04:03:23.416376] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms)
00:12:54.925  [2024-12-09 04:03:23.416384] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID
00:12:54.925  [2024-12-09 04:03:23.416392] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms)
00:12:54.925  [2024-12-09 04:03:23.416401] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout)
00:12:54.925  [2024-12-09 04:03:23.416425] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0
00:12:54.925  [2024-12-09 04:03:23.424284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0
00:12:54.925  [2024-12-09 04:03:23.424316] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0
00:12:54.925  [2024-12-09 04:03:23.432299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0
00:12:54.925  [2024-12-09 04:03:23.432325] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0
00:12:54.925  [2024-12-09 04:03:23.440285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0
00:12:54.925  [2024-12-09 04:03:23.440309] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:12:54.925  [2024-12-09 04:03:23.447319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0
00:12:54.925  [2024-12-09 04:03:23.447352] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192
00:12:54.925  [2024-12-09 04:03:23.447364] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000
00:12:54.925  [2024-12-09 04:03:23.447371] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000
00:12:54.925  [2024-12-09 04:03:23.447377] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000
00:12:54.925  [2024-12-09 04:03:23.447383] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2
00:12:54.925  [2024-12-09 04:03:23.447393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000
00:12:54.925  [2024-12-09 04:03:23.447407] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512
00:12:54.925  [2024-12-09 04:03:23.447416] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000
00:12:54.925  [2024-12-09 04:03:23.447422] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:12:54.925  [2024-12-09 04:03:23.447431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0
00:12:54.925  [2024-12-09 04:03:23.447443] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512
00:12:54.925  [2024-12-09 04:03:23.447452] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:12:54.925  [2024-12-09 04:03:23.447458] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:12:54.925  [2024-12-09 04:03:23.447466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:12:54.925  [2024-12-09 04:03:23.447479] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096
00:12:54.925  [2024-12-09 04:03:23.447488] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000
00:12:54.925  [2024-12-09 04:03:23.447494] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:12:54.925  [2024-12-09 04:03:23.447503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0
00:12:54.925  [2024-12-09 04:03:23.456300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0
00:12:54.925  [2024-12-09 04:03:23.456328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0
00:12:54.925  [2024-12-09 04:03:23.456347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0
00:12:54.925  [2024-12-09 04:03:23.456360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0
00:12:54.925  =====================================================
00:12:54.925  NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2: nqn.2019-07.io.spdk:cnode2
00:12:54.925  =====================================================
00:12:54.925  Controller Capabilities/Features
00:12:54.925  ================================
00:12:54.925  Vendor ID:                             4e58
00:12:54.925  Subsystem Vendor ID:                   4e58
00:12:54.925  Serial Number:                         SPDK2
00:12:54.925  Model Number:                          SPDK bdev Controller
00:12:54.925  Firmware Version:                      25.01
00:12:54.925  Recommended Arb Burst:                 6
00:12:54.925  IEEE OUI Identifier:                   8d 6b 50
00:12:54.925  Multi-path I/O
00:12:54.925    May have multiple subsystem ports:   Yes
00:12:54.925    May have multiple controllers:       Yes
00:12:54.925    Associated with SR-IOV VF:           No
00:12:54.925  Max Data Transfer Size:                131072
00:12:54.925  Max Number of Namespaces:              32
00:12:54.925  Max Number of I/O Queues:              127
00:12:54.925  NVMe Specification Version (VS):       1.3
00:12:54.925  NVMe Specification Version (Identify): 1.3
00:12:54.925  Maximum Queue Entries:                 256
00:12:54.925  Contiguous Queues Required:            Yes
00:12:54.925  Arbitration Mechanisms Supported
00:12:54.925    Weighted Round Robin:                Not Supported
00:12:54.925    Vendor Specific:                     Not Supported
00:12:54.925  Reset Timeout:                         15000 ms
00:12:54.925  Doorbell Stride:                       4 bytes
00:12:54.925  NVM Subsystem Reset:                   Not Supported
00:12:54.925  Command Sets Supported
00:12:54.925    NVM Command Set:                     Supported
00:12:54.925  Boot Partition:                        Not Supported
00:12:54.925  Memory Page Size Minimum:              4096 bytes
00:12:54.925  Memory Page Size Maximum:              4096 bytes
00:12:54.925  Persistent Memory Region:              Not Supported
00:12:54.925  Optional Asynchronous Events Supported
00:12:54.925    Namespace Attribute Notices:         Supported
00:12:54.925    Firmware Activation Notices:         Not Supported
00:12:54.925    ANA Change Notices:                  Not Supported
00:12:54.925    PLE Aggregate Log Change Notices:    Not Supported
00:12:54.925    LBA Status Info Alert Notices:       Not Supported
00:12:54.925    EGE Aggregate Log Change Notices:    Not Supported
00:12:54.925    Normal NVM Subsystem Shutdown event: Not Supported
00:12:54.925    Zone Descriptor Change Notices:      Not Supported
00:12:54.925    Discovery Log Change Notices:        Not Supported
00:12:54.925  Controller Attributes
00:12:54.925    128-bit Host Identifier:             Supported
00:12:54.925    Non-Operational Permissive Mode:     Not Supported
00:12:54.925    NVM Sets:                            Not Supported
00:12:54.925    Read Recovery Levels:                Not Supported
00:12:54.925    Endurance Groups:                    Not Supported
00:12:54.925    Predictable Latency Mode:            Not Supported
00:12:54.925    Traffic Based Keep Alive:            Not Supported
00:12:54.925    Namespace Granularity:               Not Supported
00:12:54.925    SQ Associations:                     Not Supported
00:12:54.925    UUID List:                           Not Supported
00:12:54.925    Multi-Domain Subsystem:              Not Supported
00:12:54.925    Fixed Capacity Management:           Not Supported
00:12:54.925    Variable Capacity Management:        Not Supported
00:12:54.925    Delete Endurance Group:              Not Supported
00:12:54.925    Delete NVM Set:                      Not Supported
00:12:54.925    Extended LBA Formats Supported:      Not Supported
00:12:54.925    Flexible Data Placement Supported:   Not Supported
00:12:54.925  
00:12:54.925  Controller Memory Buffer Support
00:12:54.925  ================================
00:12:54.925  Supported:                             No
00:12:54.925  
00:12:54.925  Persistent Memory Region Support
00:12:54.925  ================================
00:12:54.925  Supported:                             No
00:12:54.925  
00:12:54.925  Admin Command Set Attributes
00:12:54.925  ============================
00:12:54.925  Security Send/Receive:                 Not Supported
00:12:54.925  Format NVM:                            Not Supported
00:12:54.925  Firmware Activate/Download:            Not Supported
00:12:54.925  Namespace Management:                  Not Supported
00:12:54.925  Device Self-Test:                      Not Supported
00:12:54.925  Directives:                            Not Supported
00:12:54.925  NVMe-MI:                               Not Supported
00:12:54.925  Virtualization Management:             Not Supported
00:12:54.925  Doorbell Buffer Config:                Not Supported
00:12:54.925  Get LBA Status Capability:             Not Supported
00:12:54.925  Command & Feature Lockdown Capability: Not Supported
00:12:54.926  Abort Command Limit:                   4
00:12:54.926  Async Event Request Limit:             4
00:12:54.926  Number of Firmware Slots:              N/A
00:12:54.926  Firmware Slot 1 Read-Only:             N/A
00:12:54.926  Firmware Activation Without Reset:     N/A
00:12:54.926  Multiple Update Detection Support:     N/A
00:12:54.926  Firmware Update Granularity:           No Information Provided
00:12:54.926  Per-Namespace SMART Log:               No
00:12:54.926  Asymmetric Namespace Access Log Page:  Not Supported
00:12:54.926  Subsystem NQN:                         nqn.2019-07.io.spdk:cnode2
00:12:54.926  Command Effects Log Page:              Supported
00:12:54.926  Get Log Page Extended Data:            Supported
00:12:54.926  Telemetry Log Pages:                   Not Supported
00:12:54.926  Persistent Event Log Pages:            Not Supported
00:12:54.926  Supported Log Pages Log Page:          May Support
00:12:54.926  Commands Supported & Effects Log Page: Not Supported
00:12:54.926  Feature Identifiers & Effects Log Page: May Support
00:12:54.926  NVMe-MI Commands & Effects Log Page:   May Support
00:12:54.926  Data Area 4 for Telemetry Log:         Not Supported
00:12:54.926  Error Log Page Entries Supported:      128
00:12:54.926  Keep Alive:                            Supported
00:12:54.926  Keep Alive Granularity:                10000 ms
00:12:54.926  
00:12:54.926  NVM Command Set Attributes
00:12:54.926  ==========================
00:12:54.926  Submission Queue Entry Size
00:12:54.926    Max:                       64
00:12:54.926    Min:                       64
00:12:54.926  Completion Queue Entry Size
00:12:54.926    Max:                       16
00:12:54.926    Min:                       16
00:12:54.926  Number of Namespaces:        32
00:12:54.926  Compare Command:             Supported
00:12:54.926  Write Uncorrectable Command: Not Supported
00:12:54.926  Dataset Management Command:  Supported
00:12:54.926  Write Zeroes Command:        Supported
00:12:54.926  Set Features Save Field:     Not Supported
00:12:54.926  Reservations:                Not Supported
00:12:54.926  Timestamp:                   Not Supported
00:12:54.926  Copy:                        Supported
00:12:54.926  Volatile Write Cache:        Present
00:12:54.926  Atomic Write Unit (Normal):  1
00:12:54.926  Atomic Write Unit (PFail):   1
00:12:54.926  Atomic Compare & Write Unit: 1
00:12:54.926  Fused Compare & Write:       Supported
00:12:54.926  Scatter-Gather List
00:12:54.926    SGL Command Set:           Supported (Dword aligned)
00:12:54.926    SGL Keyed:                 Not Supported
00:12:54.926    SGL Bit Bucket Descriptor: Not Supported
00:12:54.926    SGL Metadata Pointer:      Not Supported
00:12:54.926    Oversized SGL:             Not Supported
00:12:54.926    SGL Metadata Address:      Not Supported
00:12:54.926    SGL Offset:                Not Supported
00:12:54.926    Transport SGL Data Block:  Not Supported
00:12:54.926  Replay Protected Memory Block:  Not Supported
00:12:54.926  
00:12:54.926  Firmware Slot Information
00:12:54.926  =========================
00:12:54.926  Active slot:                 1
00:12:54.926  Slot 1 Firmware Revision:    25.01
00:12:54.926  
00:12:54.926  
00:12:54.926  Commands Supported and Effects
00:12:54.926  ==============================
00:12:54.926  Admin Commands
00:12:54.926  --------------
00:12:54.926                    Get Log Page (02h): Supported 
00:12:54.926                        Identify (06h): Supported 
00:12:54.926                           Abort (08h): Supported 
00:12:54.926                    Set Features (09h): Supported 
00:12:54.926                    Get Features (0Ah): Supported 
00:12:54.926      Asynchronous Event Request (0Ch): Supported 
00:12:54.926                      Keep Alive (18h): Supported 
00:12:54.926  I/O Commands
00:12:54.926  ------------
00:12:54.926                           Flush (00h): Supported LBA-Change 
00:12:54.926                           Write (01h): Supported LBA-Change 
00:12:54.926                            Read (02h): Supported 
00:12:54.926                         Compare (05h): Supported 
00:12:54.926                    Write Zeroes (08h): Supported LBA-Change 
00:12:54.926              Dataset Management (09h): Supported LBA-Change 
00:12:54.926                            Copy (19h): Supported LBA-Change 
00:12:54.926  
00:12:54.926  Error Log
00:12:54.926  =========
00:12:54.926  
00:12:54.926  Arbitration
00:12:54.926  ===========
00:12:54.926  Arbitration Burst:           1
00:12:54.926  
00:12:54.926  Power Management
00:12:54.926  ================
00:12:54.926  Number of Power States:          1
00:12:54.926  Current Power State:             Power State #0
00:12:54.926  Power State #0:
00:12:54.926    Max Power:                      0.00 W
00:12:54.926    Non-Operational State:         Operational
00:12:54.926    Entry Latency:                 Not Reported
00:12:54.926    Exit Latency:                  Not Reported
00:12:54.926    Relative Read Throughput:      0
00:12:54.926    Relative Read Latency:         0
00:12:54.926    Relative Write Throughput:     0
00:12:54.926    Relative Write Latency:        0
00:12:54.926    Idle Power:                     Not Reported
00:12:54.926    Active Power:                   Not Reported
00:12:54.926  Non-Operational Permissive Mode: Not Supported
00:12:54.926  
00:12:54.926  Health Information
00:12:54.926  ==================
00:12:54.926  Critical Warnings:
00:12:54.926    Available Spare Space:     OK
00:12:54.926    Temperature:               OK
00:12:54.926    Device Reliability:        OK
00:12:54.926    Read Only:                 No
00:12:54.926    Volatile Memory Backup:    OK
00:12:54.926  Current Temperature:         0 Kelvin (-273 Celsius)
00:12:54.926  Temperature Threshold:       0 Kelvin (-273 Celsius)
00:12:54.926  Available Spare:             0%
00:12:54.926  [2024-12-09 04:03:23.456478] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
00:12:54.926  [2024-12-09 04:03:23.464299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
00:12:54.926  [2024-12-09 04:03:23.464357] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD
00:12:54.926  [2024-12-09 04:03:23.464375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:54.926  [2024-12-09 04:03:23.464387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:54.926  [2024-12-09 04:03:23.464397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:54.926  [2024-12-09 04:03:23.464407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:54.926  [2024-12-09 04:03:23.464493] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001
00:12:54.926  [2024-12-09 04:03:23.464514] nvme_vfio_user.c:  49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001
00:12:54.926  [2024-12-09 04:03:23.465494] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:12:54.926  [2024-12-09 04:03:23.465606] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us
00:12:54.926  [2024-12-09 04:03:23.465621] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms
00:12:54.926  [2024-12-09 04:03:23.466502] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9
00:12:54.926  [2024-12-09 04:03:23.466526] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds
00:12:54.926  [2024-12-09 04:03:23.466587] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl
00:12:54.926  [2024-12-09 04:03:23.467810] vfio_user_pci.c:  96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:12:55.184  Available Spare Threshold:   0%
00:12:55.184  Life Percentage Used:        0%
00:12:55.184  Data Units Read:             0
00:12:55.184  Data Units Written:          0
00:12:55.184  Host Read Commands:          0
00:12:55.184  Host Write Commands:         0
00:12:55.184  Controller Busy Time:        0 minutes
00:12:55.184  Power Cycles:                0
00:12:55.184  Power On Hours:              0 hours
00:12:55.184  Unsafe Shutdowns:            0
00:12:55.184  Unrecoverable Media Errors:  0
00:12:55.184  Lifetime Error Log Entries:  0
00:12:55.184  Warning Temperature Time:    0 minutes
00:12:55.184  Critical Temperature Time:   0 minutes
00:12:55.184  
00:12:55.184  Number of Queues
00:12:55.184  ================
00:12:55.184  Number of I/O Submission Queues:      127
00:12:55.184  Number of I/O Completion Queues:      127
00:12:55.184  
00:12:55.184  Active Namespaces
00:12:55.184  =================
00:12:55.184  Namespace ID:1
00:12:55.184  Error Recovery Timeout:                Unlimited
00:12:55.184  Command Set Identifier:                NVM (00h)
00:12:55.184  Deallocate:                            Supported
00:12:55.184  Deallocated/Unwritten Error:           Not Supported
00:12:55.184  Deallocated Read Value:                Unknown
00:12:55.184  Deallocate in Write Zeroes:            Not Supported
00:12:55.184  Deallocated Guard Field:               0xFFFF
00:12:55.184  Flush:                                 Supported
00:12:55.184  Reservation:                           Supported
00:12:55.184  Namespace Sharing Capabilities:        Multiple Controllers
00:12:55.184  Size (in LBAs):                        131072 (0GiB)
00:12:55.184  Capacity (in LBAs):                    131072 (0GiB)
00:12:55.184  Utilization (in LBAs):                 131072 (0GiB)
00:12:55.184  NGUID:                                 1F14A502DA0A41F2920C11B007901159
00:12:55.184  UUID:                                  1f14a502-da0a-41f2-920c-11b007901159
00:12:55.184  Thin Provisioning:                     Not Supported
00:12:55.184  Per-NS Atomic Units:                   Yes
00:12:55.184    Atomic Boundary Size (Normal):       0
00:12:55.184    Atomic Boundary Size (PFail):        0
00:12:55.184    Atomic Boundary Offset:              0
00:12:55.184  Maximum Single Source Range Length:    65535
00:12:55.184  Maximum Copy Length:                   65535
00:12:55.184  Maximum Source Range Count:            1
00:12:55.184  NGUID/EUI64 Never Reused:              No
00:12:55.184  Namespace Write Protected:             No
00:12:55.184  Number of LBA Formats:                 1
00:12:55.184  Current LBA Format:                    LBA Format #00
00:12:55.184  LBA Format #00: Data Size:   512  Metadata Size:     0
00:12:55.184  
00:12:55.185   04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
00:12:55.185  [2024-12-09 04:03:23.713166] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:13:00.446  Initializing NVMe Controllers
00:13:00.446  Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:13:00.446  Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1
00:13:00.446  Initialization complete. Launching workers.
00:13:00.446  ========================================================
00:13:00.446                                                                                                           Latency(us)
00:13:00.446  Device Information                                                   :       IOPS      MiB/s    Average        min        max
00:13:00.446  VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core  1:   31787.39     124.17    4028.37    1187.63   10330.95
00:13:00.446  ========================================================
00:13:00.446  Total                                                                :   31787.39     124.17    4028.37    1187.63   10330.95
00:13:00.446  
00:13:00.446  [2024-12-09 04:03:28.820645] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:13:00.446   04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
00:13:00.703  [2024-12-09 04:03:29.074364] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:13:05.974  Initializing NVMe Controllers
00:13:05.974  Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:13:05.974  Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1
00:13:05.974  Initialization complete. Launching workers.
00:13:05.974  ========================================================
00:13:05.974                                                                                                           Latency(us)
00:13:05.974  Device Information                                                   :       IOPS      MiB/s    Average        min        max
00:13:05.974  VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core  1:   30242.95     118.14    4231.83    1226.34    7621.55
00:13:05.974  ========================================================
00:13:05.974  Total                                                                :   30242.95     118.14    4231.83    1226.34    7621.55
00:13:05.975  
00:13:05.975  [2024-12-09 04:03:34.092914] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:13:05.975   04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
00:13:05.975  [2024-12-09 04:03:34.328189] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:13:11.233  [2024-12-09 04:03:39.455450] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:13:11.233  Initializing NVMe Controllers
00:13:11.233  Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:13:11.233  Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:13:11.233  Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1
00:13:11.233  Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2
00:13:11.233  Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3
00:13:11.233  Initialization complete. Launching workers.
00:13:11.233  Starting thread on core 2
00:13:11.233  Starting thread on core 3
00:13:11.233  Starting thread on core 1
00:13:11.233   04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g
00:13:11.233  [2024-12-09 04:03:39.781772] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:13:14.512  [2024-12-09 04:03:42.952580] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:13:14.512  Initializing NVMe Controllers
00:13:14.512  Attaching to /var/run/vfio-user/domain/vfio-user2/2
00:13:14.512  Attached to /var/run/vfio-user/domain/vfio-user2/2
00:13:14.512  Associating SPDK bdev Controller (SPDK2               ) with lcore 0
00:13:14.512  Associating SPDK bdev Controller (SPDK2               ) with lcore 1
00:13:14.512  Associating SPDK bdev Controller (SPDK2               ) with lcore 2
00:13:14.512  Associating SPDK bdev Controller (SPDK2               ) with lcore 3
00:13:14.512  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration:
00:13:14.512  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1
00:13:14.512  Initialization complete. Launching workers.
00:13:14.512  Starting thread on core 1 with urgent priority queue
00:13:14.512  Starting thread on core 2 with urgent priority queue
00:13:14.512  Starting thread on core 3 with urgent priority queue
00:13:14.512  Starting thread on core 0 with urgent priority queue
00:13:14.512  SPDK bdev Controller (SPDK2               ) core 0:  2197.33 IO/s    45.51 secs/100000 ios
00:13:14.512  SPDK bdev Controller (SPDK2               ) core 1:  3359.33 IO/s    29.77 secs/100000 ios
00:13:14.512  SPDK bdev Controller (SPDK2               ) core 2:  2726.00 IO/s    36.68 secs/100000 ios
00:13:14.512  SPDK bdev Controller (SPDK2               ) core 3:  3641.33 IO/s    27.46 secs/100000 ios
00:13:14.512  ========================================================
00:13:14.512  
00:13:14.512   04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
00:13:14.770  [2024-12-09 04:03:43.266806] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:13:14.770  Initializing NVMe Controllers
00:13:14.770  Attaching to /var/run/vfio-user/domain/vfio-user2/2
00:13:14.770  Attached to /var/run/vfio-user/domain/vfio-user2/2
00:13:14.770    Namespace ID: 1 size: 0GB
00:13:14.770  Initialization complete.
00:13:14.770  INFO: using host memory buffer for IO
00:13:14.770  Hello world!
00:13:14.770  [2024-12-09 04:03:43.275863] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:13:14.770   04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
00:13:15.028  [2024-12-09 04:03:43.585022] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:13:16.401  Initializing NVMe Controllers
00:13:16.401  Attaching to /var/run/vfio-user/domain/vfio-user2/2
00:13:16.401  Attached to /var/run/vfio-user/domain/vfio-user2/2
00:13:16.401  Initialization complete. Launching workers.
00:13:16.401  submit (in ns)   avg, min, max =   9011.1,   3521.1, 4019736.7
00:13:16.401  complete (in ns) avg, min, max =  28038.1,   2060.0, 5011870.0
00:13:16.401  
00:13:16.401  Submit histogram
00:13:16.401  ================
00:13:16.401         Range in us     Cumulative     Count
00:13:16.401      3.508 -     3.532:    0.0237%  (        3)
00:13:16.401      3.532 -     3.556:    0.4499%  (       54)
00:13:16.401      3.556 -     3.579:    1.3260%  (      111)
00:13:16.401      3.579 -     3.603:    3.9621%  (      334)
00:13:16.401      3.603 -     3.627:    7.8927%  (      498)
00:13:16.401      3.627 -     3.650:   16.4562%  (     1085)
00:13:16.401      3.650 -     3.674:   25.5249%  (     1149)
00:13:16.401      3.674 -     3.698:   34.7593%  (     1170)
00:13:16.401      3.698 -     3.721:   42.2968%  (      955)
00:13:16.401      3.721 -     3.745:   49.8106%  (      952)
00:13:16.401      3.745 -     3.769:   55.9669%  (      780)
00:13:16.401      3.769 -     3.793:   61.7206%  (      729)
00:13:16.401      3.793 -     3.816:   65.7695%  (      513)
00:13:16.401      3.816 -     3.840:   69.0371%  (      414)
00:13:16.401      3.840 -     3.864:   72.7388%  (      469)
00:13:16.401      3.864 -     3.887:   76.2747%  (      448)
00:13:16.401      3.887 -     3.911:   80.3788%  (      520)
00:13:16.401      3.911 -     3.935:   83.6543%  (      415)
00:13:16.401      3.935 -     3.959:   86.0379%  (      302)
00:13:16.401      3.959 -     3.982:   88.2873%  (      285)
00:13:16.401      3.982 -     4.006:   90.1026%  (      230)
00:13:16.401      4.006 -     4.030:   91.5470%  (      183)
00:13:16.401      4.030 -     4.053:   93.0071%  (      185)
00:13:16.401      4.053 -     4.077:   94.1673%  (      147)
00:13:16.401      4.077 -     4.101:   94.9487%  (       99)
00:13:16.401      4.101 -     4.124:   95.5801%  (       80)
00:13:16.401      4.124 -     4.148:   95.9984%  (       53)
00:13:16.401      4.148 -     4.172:   96.2431%  (       31)
00:13:16.401      4.172 -     4.196:   96.4483%  (       26)
00:13:16.401      4.196 -     4.219:   96.6062%  (       20)
00:13:16.401      4.219 -     4.243:   96.7324%  (       16)
00:13:16.401      4.243 -     4.267:   96.8508%  (       15)
00:13:16.401      4.267 -     4.290:   96.9061%  (        7)
00:13:16.401      4.290 -     4.314:   97.0166%  (       14)
00:13:16.401      4.314 -     4.338:   97.1034%  (       11)
00:13:16.401      4.338 -     4.361:   97.1823%  (       10)
00:13:16.401      4.361 -     4.385:   97.2455%  (        8)
00:13:16.401      4.385 -     4.409:   97.3323%  (       11)
00:13:16.401      4.409 -     4.433:   97.3402%  (        1)
00:13:16.401      4.433 -     4.456:   97.3560%  (        2)
00:13:16.401      4.456 -     4.480:   97.3717%  (        2)
00:13:16.401      4.480 -     4.504:   97.3954%  (        3)
00:13:16.401      4.504 -     4.527:   97.4112%  (        2)
00:13:16.401      4.527 -     4.551:   97.4270%  (        2)
00:13:16.401      4.551 -     4.575:   97.4349%  (        1)
00:13:16.401      4.575 -     4.599:   97.4507%  (        2)
00:13:16.401      4.599 -     4.622:   97.4586%  (        1)
00:13:16.401      4.622 -     4.646:   97.4665%  (        1)
00:13:16.401      4.646 -     4.670:   97.4743%  (        1)
00:13:16.401      4.717 -     4.741:   97.4822%  (        1)
00:13:16.401      4.741 -     4.764:   97.4980%  (        2)
00:13:16.401      4.764 -     4.788:   97.5217%  (        3)
00:13:16.401      4.788 -     4.812:   97.5533%  (        4)
00:13:16.401      4.812 -     4.836:   97.5848%  (        4)
00:13:16.401      4.836 -     4.859:   97.6085%  (        3)
00:13:16.401      4.859 -     4.883:   97.6401%  (        4)
00:13:16.401      4.883 -     4.907:   97.6875%  (        6)
00:13:16.401      4.907 -     4.930:   97.7585%  (        9)
00:13:16.401      4.930 -     4.954:   97.8216%  (        8)
00:13:16.401      4.954 -     4.978:   97.8611%  (        5)
00:13:16.401      4.978 -     5.001:   97.9163%  (        7)
00:13:16.401      5.001 -     5.025:   97.9400%  (        3)
00:13:16.401      5.025 -     5.049:   97.9874%  (        6)
00:13:16.401      5.049 -     5.073:   98.0189%  (        4)
00:13:16.401      5.073 -     5.096:   98.0505%  (        4)
00:13:16.401      5.096 -     5.120:   98.0663%  (        2)
00:13:16.401      5.120 -     5.144:   98.0900%  (        3)
00:13:16.401      5.144 -     5.167:   98.1294%  (        5)
00:13:16.401      5.167 -     5.191:   98.1452%  (        2)
00:13:16.401      5.191 -     5.215:   98.1768%  (        4)
00:13:16.401      5.215 -     5.239:   98.1847%  (        1)
00:13:16.401      5.239 -     5.262:   98.1926%  (        1)
00:13:16.401      5.262 -     5.286:   98.2242%  (        4)
00:13:16.401      5.286 -     5.310:   98.2320%  (        1)
00:13:16.401      5.333 -     5.357:   98.2478%  (        2)
00:13:16.401      5.357 -     5.381:   98.2557%  (        1)
00:13:16.401      5.404 -     5.428:   98.2636%  (        1)
00:13:16.401      5.428 -     5.452:   98.2873%  (        3)
00:13:16.401      5.452 -     5.476:   98.2952%  (        1)
00:13:16.401      5.476 -     5.499:   98.3031%  (        1)
00:13:16.401      5.499 -     5.523:   98.3110%  (        1)
00:13:16.401      5.523 -     5.547:   98.3189%  (        1)
00:13:16.401      5.594 -     5.618:   98.3268%  (        1)
00:13:16.401      5.784 -     5.807:   98.3346%  (        1)
00:13:16.401      6.068 -     6.116:   98.3504%  (        2)
00:13:16.401      6.116 -     6.163:   98.3583%  (        1)
00:13:16.401      6.163 -     6.210:   98.3662%  (        1)
00:13:16.401      6.258 -     6.305:   98.3741%  (        1)
00:13:16.401      6.637 -     6.684:   98.3820%  (        1)
00:13:16.401      6.732 -     6.779:   98.3899%  (        1)
00:13:16.401      6.779 -     6.827:   98.4057%  (        2)
00:13:16.401      6.921 -     6.969:   98.4136%  (        1)
00:13:16.401      7.016 -     7.064:   98.4215%  (        1)
00:13:16.401      7.064 -     7.111:   98.4294%  (        1)
00:13:16.401      7.111 -     7.159:   98.4373%  (        1)
00:13:16.401      7.159 -     7.206:   98.4451%  (        1)
00:13:16.401      7.301 -     7.348:   98.4609%  (        2)
00:13:16.401      7.348 -     7.396:   98.4688%  (        1)
00:13:16.401      7.396 -     7.443:   98.4846%  (        2)
00:13:16.401      7.443 -     7.490:   98.4925%  (        1)
00:13:16.401      7.490 -     7.538:   98.5083%  (        2)
00:13:16.401      7.538 -     7.585:   98.5162%  (        1)
00:13:16.401      7.585 -     7.633:   98.5241%  (        1)
00:13:16.401      7.633 -     7.680:   98.5320%  (        1)
00:13:16.401      7.775 -     7.822:   98.5399%  (        1)
00:13:16.401      7.870 -     7.917:   98.5478%  (        1)
00:13:16.401      7.917 -     7.964:   98.5635%  (        2)
00:13:16.401      7.964 -     8.012:   98.5714%  (        1)
00:13:16.401      8.012 -     8.059:   98.5793%  (        1)
00:13:16.401      8.059 -     8.107:   98.5951%  (        2)
00:13:16.401      8.107 -     8.154:   98.6030%  (        1)
00:13:16.401      8.154 -     8.201:   98.6109%  (        1)
00:13:16.401      8.201 -     8.249:   98.6267%  (        2)
00:13:16.401      8.249 -     8.296:   98.6504%  (        3)
00:13:16.401      8.344 -     8.391:   98.6582%  (        1)
00:13:16.401      8.439 -     8.486:   98.6661%  (        1)
00:13:16.401      8.486 -     8.533:   98.6740%  (        1)
00:13:16.401      8.723 -     8.770:   98.6898%  (        2)
00:13:16.401      8.770 -     8.818:   98.6977%  (        1)
00:13:16.401      8.913 -     8.960:   98.7056%  (        1)
00:13:16.401      9.007 -     9.055:   98.7214%  (        2)
00:13:16.401      9.102 -     9.150:   98.7372%  (        2)
00:13:16.401      9.197 -     9.244:   98.7451%  (        1)
00:13:16.401      9.244 -     9.292:   98.7530%  (        1)
00:13:16.401      9.529 -     9.576:   98.7687%  (        2)
00:13:16.401      9.813 -     9.861:   98.7766%  (        1)
00:13:16.401      9.908 -     9.956:   98.7924%  (        2)
00:13:16.401     10.193 -    10.240:   98.8003%  (        1)
00:13:16.401     10.335 -    10.382:   98.8082%  (        1)
00:13:16.401     10.572 -    10.619:   98.8240%  (        2)
00:13:16.401     10.667 -    10.714:   98.8319%  (        1)
00:13:16.401     11.046 -    11.093:   98.8398%  (        1)
00:13:16.401     11.236 -    11.283:   98.8477%  (        1)
00:13:16.401     11.473 -    11.520:   98.8556%  (        1)
00:13:16.401     11.852 -    11.899:   98.8635%  (        1)
00:13:16.401     12.326 -    12.421:   98.8713%  (        1)
00:13:16.401     12.610 -    12.705:   98.8792%  (        1)
00:13:16.401     13.084 -    13.179:   98.8871%  (        1)
00:13:16.401     13.369 -    13.464:   98.8950%  (        1)
00:13:16.401     13.559 -    13.653:   98.9029%  (        1)
00:13:16.401     13.653 -    13.748:   98.9108%  (        1)
00:13:16.401     13.843 -    13.938:   98.9187%  (        1)
00:13:16.401     13.938 -    14.033:   98.9266%  (        1)
00:13:16.401     14.127 -    14.222:   98.9345%  (        1)
00:13:16.401     14.317 -    14.412:   98.9424%  (        1)
00:13:16.401     17.067 -    17.161:   98.9503%  (        1)
00:13:16.401     17.351 -    17.446:   98.9661%  (        2)
00:13:16.401     17.446 -    17.541:   99.0134%  (        6)
00:13:16.401     17.541 -    17.636:   99.0371%  (        3)
00:13:16.401     17.636 -    17.730:   99.0608%  (        3)
00:13:16.401     17.730 -    17.825:   99.0766%  (        2)
00:13:16.401     17.825 -    17.920:   99.1318%  (        7)
00:13:16.401     17.920 -    18.015:   99.1949%  (        8)
00:13:16.401     18.015 -    18.110:   99.2660%  (        9)
00:13:16.401     18.110 -    18.204:   99.3133%  (        6)
00:13:16.401     18.204 -    18.299:   99.3923%  (       10)
00:13:16.401     18.299 -    18.394:   99.4791%  (       11)
00:13:16.401     18.394 -    18.489:   99.5343%  (        7)
00:13:16.401     18.489 -    18.584:   99.5738%  (        5)
00:13:16.402     18.584 -    18.679:   99.6448%  (        9)
00:13:16.402     18.679 -    18.773:   99.6764%  (        4)
00:13:16.402     18.773 -    18.868:   99.7080%  (        4)
00:13:16.402     18.868 -    18.963:   99.7474%  (        5)
00:13:16.402     18.963 -    19.058:   99.7711%  (        3)
00:13:16.402     19.058 -    19.153:   99.7948%  (        3)
00:13:16.402     19.342 -    19.437:   99.8106%  (        2)
00:13:16.402     19.437 -    19.532:   99.8185%  (        1)
00:13:16.402     19.627 -    19.721:   99.8264%  (        1)
00:13:16.402     20.006 -    20.101:   99.8343%  (        1)
00:13:16.402     21.902 -    21.997:   99.8421%  (        1)
00:13:16.402     22.661 -    22.756:   99.8500%  (        1)
00:13:16.402     23.514 -    23.609:   99.8579%  (        1)
00:13:16.402     24.462 -    24.652:   99.8658%  (        1)
00:13:16.402     24.841 -    25.031:   99.8737%  (        1)
00:13:16.402   3980.705 -  4004.978:   99.9684%  (       12)
00:13:16.402   4004.978 -  4029.250:  100.0000%  (        4)
00:13:16.402  
00:13:16.402  Complete histogram
00:13:16.402  ==================
00:13:16.402         Range in us     Cumulative     Count
00:13:16.402      2.050 -     2.062:    0.0868%  (       11)
00:13:16.402      2.062 -     2.074:   16.2983%  (     2054)
00:13:16.402      2.074 -     2.086:   38.8398%  (     2856)
00:13:16.402      2.086 -     2.098:   40.6314%  (      227)
00:13:16.402      2.098 -     2.110:   54.1121%  (     1708)
00:13:16.402      2.110 -     2.121:   61.1839%  (      896)
00:13:16.402      2.121 -     2.133:   63.3860%  (      279)
00:13:16.402      2.133 -     2.145:   74.5620%  (     1416)
00:13:16.402      2.145 -     2.157:   79.7001%  (      651)
00:13:16.402      2.157 -     2.169:   81.4286%  (      219)
00:13:16.402      2.169 -     2.181:   86.4483%  (      636)
00:13:16.402      2.181 -     2.193:   87.9874%  (      195)
00:13:16.402      2.193 -     2.204:   88.8556%  (      110)
00:13:16.402      2.204 -     2.216:   90.7498%  (      240)
00:13:16.402      2.216 -     2.228:   93.5359%  (      353)
00:13:16.402      2.228 -     2.240:   94.1910%  (       83)
00:13:16.402      2.240 -     2.252:   94.7672%  (       73)
00:13:16.402      2.252 -     2.264:   94.9803%  (       27)
00:13:16.402      2.264 -     2.276:   95.1066%  (       16)
00:13:16.402      2.276 -     2.287:   95.3907%  (       36)
00:13:16.402      2.287 -     2.299:   95.8406%  (       57)
00:13:16.402      2.299 -     2.311:   95.9511%  (       14)
00:13:16.402      2.311 -     2.323:   96.0221%  (        9)
00:13:16.402      2.323 -     2.335:   96.0773%  (        7)
00:13:16.402      2.335 -     2.347:   96.1010%  (        3)
00:13:16.402      2.347 -     2.359:   96.2273%  (       16)
00:13:16.402      2.359 -     2.370:   96.5272%  (       38)
00:13:16.402      2.370 -     2.382:   96.9850%  (       58)
00:13:16.402      2.382 -     2.394:   97.3244%  (       43)
00:13:16.402      2.394 -     2.406:   97.6243%  (       38)
00:13:16.402      2.406 -     2.418:   97.7979%  (       22)
00:13:16.402      2.418 -     2.430:   97.9558%  (       20)
00:13:16.402      2.430 -     2.441:   98.0663%  (       14)
00:13:16.402      2.441 -     2.453:   98.1531%  (       11)
00:13:16.402      2.453 -     2.465:   98.1847%  (        4)
00:13:16.402      2.465 -     2.477:   98.2399%  (        7)
00:13:16.402      2.477 -     2.489:   98.2873%  (        6)
00:13:16.402      2.489 -     2.501:   98.3189%  (        4)
00:13:16.402      2.501 -     2.513:   98.3425%  (        3)
00:13:16.402      2.513 -     2.524:   98.3583%  (        2)
00:13:16.402      2.524 -     2.536:   98.3820%  (        3)
00:13:16.402      2.560 -     2.572:   98.3899%  (        1)
00:13:16.402      2.607 -     2.619:   98.3978%  (        1)
00:13:16.402      2.619 -     2.631:   98.4057%  (        1)
00:13:16.402      2.655 -     2.667:   98.4215%  (        2)
00:13:16.402      2.690 -     2.702:   98.4373%  (        2)
00:13:16.402      2.714 -     2.726:   98.4530%  (        2)
00:13:16.402      2.761 -     2.773:   98.4609%  (        1)
00:13:16.402      2.892 -     2.904:   98.4688%  (        1)
00:13:16.402      3.034 -     3.058:   98.4767%  (        1)
00:13:16.402      3.319 -     3.342:   98.4846%  (        1)
00:13:16.402      3.390 -     3.413:   98.4925%  (        1)
00:13:16.402      3.461 -     3.484:   98.5162%  (        3)
00:13:16.402      3.484 -     3.508:   98.5241%  (        1)
00:13:16.402      3.508 -     3.532:   98.5478%  (        3)
00:13:16.402      3.532 -     3.556:   98.5556%  (        1)
00:13:16.402      3.556 -     3.579:   98.5635%  (        1)
00:13:16.402      3.627 -     3.650:   98.5714%  (        1)
00:13:16.402      3.793 -     3.816:   98.5793%  (        1)
00:13:16.402      4.077 -     4.101:   98.5872%  (        1)
00:13:16.402      4.124 -     4.148:   98.5951%  (        1)
00:13:16.402      4.243 -     4.267:   98.6030%  (        1)
00:13:16.402      4.267 -     4.290:   98.6109%  (        1)
00:13:16.402      5.120 -     5.144:   98.6188%  (        1)
00:13:16.402      5.333 -     5.357:   98.6267%  (        1)
00:13:16.402      5.452 -     5.476:   98.6346%  (        1)
00:13:16.402      5.476 -     5.499:   98.6425%  (        1)
00:13:16.402      5.523 -     5.547:   98.6504%  (        1)
00:13:16.402      5.879 -     5.902:   98.6661%  (        2)
00:13:16.402      5.902 -     5.926:   98.6740%  (        1)
00:13:16.402      5.950 -     5.973:   98.6819%  (        1)
00:13:16.402      6.258 -     6.305:   98.6898%  (        1)
00:13:16.402      6.305 -     6.353:   98.6977%  (        1)
00:13:16.402      6.400 -     6.447:   98.7056%  (        1)
00:13:16.402      6.542 -     6.590:   98.7135%  (        1)
00:13:16.402      6.590 -     6.637:   98.7214%  (        1)
00:13:16.402  [2024-12-09 04:03:44.685006] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:13:16.402      6.732 -     6.779:   98.7293%  (        1)
00:13:16.402      6.779 -     6.827:   98.7372%  (        1)
00:13:16.402      6.827 -     6.874:   98.7451%  (        1)
00:13:16.402      6.921 -     6.969:   98.7530%  (        1)
00:13:16.402      6.969 -     7.016:   98.7609%  (        1)
00:13:16.402      7.253 -     7.301:   98.7687%  (        1)
00:13:16.402      7.396 -     7.443:   98.7766%  (        1)
00:13:16.402      7.490 -     7.538:   98.7845%  (        1)
00:13:16.402      9.766 -     9.813:   98.7924%  (        1)
00:13:16.402     15.644 -    15.739:   98.8003%  (        1)
00:13:16.402     15.739 -    15.834:   98.8082%  (        1)
00:13:16.402     15.834 -    15.929:   98.8477%  (        5)
00:13:16.402     15.929 -    16.024:   98.8950%  (        6)
00:13:16.402     16.024 -    16.119:   98.9661%  (        9)
00:13:16.402     16.119 -    16.213:   98.9740%  (        1)
00:13:16.402     16.213 -    16.308:   98.9897%  (        2)
00:13:16.402     16.308 -    16.403:   99.0055%  (        2)
00:13:16.402     16.403 -    16.498:   99.0134%  (        1)
00:13:16.402     16.498 -    16.593:   99.0845%  (        9)
00:13:16.402     16.593 -    16.687:   99.1160%  (        4)
00:13:16.402     16.687 -    16.782:   99.1476%  (        4)
00:13:16.402     16.782 -    16.877:   99.1792%  (        4)
00:13:16.402     16.877 -    16.972:   99.2028%  (        3)
00:13:16.402     16.972 -    17.067:   99.2265%  (        3)
00:13:16.402     17.067 -    17.161:   99.2344%  (        1)
00:13:16.402     17.161 -    17.256:   99.2581%  (        3)
00:13:16.402     17.351 -    17.446:   99.2660%  (        1)
00:13:16.402     17.541 -    17.636:   99.2739%  (        1)
00:13:16.402     17.636 -    17.730:   99.2818%  (        1)
00:13:16.402     17.825 -    17.920:   99.3054%  (        3)
00:13:16.402     17.920 -    18.015:   99.3133%  (        1)
00:13:16.402     18.204 -    18.299:   99.3212%  (        1)
00:13:16.402     18.299 -    18.394:   99.3291%  (        1)
00:13:16.402     18.394 -    18.489:   99.3370%  (        1)
00:13:16.402     18.868 -    18.963:   99.3449%  (        1)
00:13:16.402     27.307 -    27.496:   99.3528%  (        1)
00:13:16.402   2415.123 -  2427.259:   99.3607%  (        1)
00:13:16.402   2597.167 -  2609.304:   99.3686%  (        1)
00:13:16.402   3980.705 -  4004.978:   99.8185%  (       57)
00:13:16.402   4004.978 -  4029.250:   99.9763%  (       20)
00:13:16.402   4029.250 -  4053.523:   99.9842%  (        1)
00:13:16.402   4975.881 -  5000.154:   99.9921%  (        1)
00:13:16.402   5000.154 -  5024.427:  100.0000%  (        1)
00:13:16.402  
00:13:16.402   04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2
00:13:16.402   04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2
00:13:16.402   04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2
00:13:16.402   04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4
00:13:16.402   04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
00:13:16.660  [
00:13:16.660    {
00:13:16.660      "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:13:16.660      "subtype": "Discovery",
00:13:16.660      "listen_addresses": [],
00:13:16.660      "allow_any_host": true,
00:13:16.660      "hosts": []
00:13:16.660    },
00:13:16.660    {
00:13:16.660      "nqn": "nqn.2019-07.io.spdk:cnode1",
00:13:16.660      "subtype": "NVMe",
00:13:16.660      "listen_addresses": [
00:13:16.660        {
00:13:16.660          "trtype": "VFIOUSER",
00:13:16.660          "adrfam": "IPv4",
00:13:16.660          "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:13:16.660          "trsvcid": "0"
00:13:16.660        }
00:13:16.660      ],
00:13:16.660      "allow_any_host": true,
00:13:16.660      "hosts": [],
00:13:16.660      "serial_number": "SPDK1",
00:13:16.660      "model_number": "SPDK bdev Controller",
00:13:16.660      "max_namespaces": 32,
00:13:16.660      "min_cntlid": 1,
00:13:16.660      "max_cntlid": 65519,
00:13:16.660      "namespaces": [
00:13:16.660        {
00:13:16.660          "nsid": 1,
00:13:16.660          "bdev_name": "Malloc1",
00:13:16.660          "name": "Malloc1",
00:13:16.660          "nguid": "07D9A539FF234D2C94FF04FF7F2B2437",
00:13:16.660          "uuid": "07d9a539-ff23-4d2c-94ff-04ff7f2b2437"
00:13:16.660        },
00:13:16.660        {
00:13:16.660          "nsid": 2,
00:13:16.660          "bdev_name": "Malloc3",
00:13:16.660          "name": "Malloc3",
00:13:16.660          "nguid": "FE3543652DAC4D0FB8FA008A85669FA7",
00:13:16.660          "uuid": "fe354365-2dac-4d0f-b8fa-008a85669fa7"
00:13:16.660        }
00:13:16.660      ]
00:13:16.660    },
00:13:16.660    {
00:13:16.660      "nqn": "nqn.2019-07.io.spdk:cnode2",
00:13:16.660      "subtype": "NVMe",
00:13:16.660      "listen_addresses": [
00:13:16.660        {
00:13:16.661          "trtype": "VFIOUSER",
00:13:16.661          "adrfam": "IPv4",
00:13:16.661          "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:13:16.661          "trsvcid": "0"
00:13:16.661        }
00:13:16.661      ],
00:13:16.661      "allow_any_host": true,
00:13:16.661      "hosts": [],
00:13:16.661      "serial_number": "SPDK2",
00:13:16.661      "model_number": "SPDK bdev Controller",
00:13:16.661      "max_namespaces": 32,
00:13:16.661      "min_cntlid": 1,
00:13:16.661      "max_cntlid": 65519,
00:13:16.661      "namespaces": [
00:13:16.661        {
00:13:16.661          "nsid": 1,
00:13:16.661          "bdev_name": "Malloc2",
00:13:16.661          "name": "Malloc2",
00:13:16.661          "nguid": "1F14A502DA0A41F2920C11B007901159",
00:13:16.661          "uuid": "1f14a502-da0a-41f2-920c-11b007901159"
00:13:16.661        }
00:13:16.661      ]
00:13:16.661    }
00:13:16.661  ]
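The `nvmf_get_subsystems` JSON above can be post-processed to sanity-check namespace counts before and after the AER test. A minimal sketch, using a trimmed copy of the output shown (the heredoc content is abbreviated, not the verbatim RPC reply) and `python3` for the JSON parsing:

```shell
# Sketch only: trimmed copy of the nvmf_get_subsystems output above.
cat > /tmp/subsystems.json <<'EOF'
[
  {"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery"},
  {"nqn": "nqn.2019-07.io.spdk:cnode1", "subtype": "NVMe",
   "namespaces": [{"nsid": 1, "bdev_name": "Malloc1"},
                  {"nsid": 2, "bdev_name": "Malloc3"}]},
  {"nqn": "nqn.2019-07.io.spdk:cnode2", "subtype": "NVMe",
   "namespaces": [{"nsid": 1, "bdev_name": "Malloc2"}]}
]
EOF
# Print the namespace count per NVMe subsystem.
python3 - <<'PYEOF'
import json
for sub in json.load(open("/tmp/subsystems.json")):
    if sub.get("subtype") == "NVMe":
        print(sub["nqn"], len(sub["namespaces"]))
PYEOF
# prints:
#   nqn.2019-07.io.spdk:cnode1 2
#   nqn.2019-07.io.spdk:cnode2 1
```

After the `nvmf_subsystem_add_ns ... Malloc4 -n 2` call later in the run, the same check would show cnode2 at 2 namespaces, which is what the second JSON dump confirms.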
00:13:16.661   04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:13:16.661   04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=210264
00:13:16.661   04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r '		trtype:VFIOUSER 		traddr:/var/run/vfio-user/domain/vfio-user2/2 		subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file
00:13:16.661   04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file
00:13:16.661   04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0
00:13:16.661   04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:13:16.661   04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']'
00:13:16.661   04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1
00:13:16.661   04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1
00:13:16.661   04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:13:16.661   04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']'
00:13:16.661   04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2
00:13:16.661   04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1
00:13:16.661  [2024-12-09 04:03:45.173788] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:13:16.661   04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:13:16.661   04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:13:16.661   04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0
00:13:16.661   04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file
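The `waitforfile` polling traced above (check for the touch file, bump a counter up to 200, sleep 0.1s between tries) can be sketched as a standalone helper; this is a hypothetical re-implementation of the logic visible in the xtrace, not the actual `autotest_common.sh` source:

```shell
# Poll for a file's existence, up to ~20 seconds (200 tries x 0.1s),
# mirroring the waitforfile loop traced in the log above.
waitforfile() {
    local i=0
    while [ ! -e "$1" ]; do
        if [ "$i" -ge 200 ]; then
            return 1          # gave up: file never appeared
        fi
        i=$((i + 1))
        sleep 0.1
    done
    return 0                  # file exists
}

touch /tmp/demo_touch_file
waitforfile /tmp/demo_touch_file && echo "found"
# prints "found"
rm -f /tmp/demo_touch_file
```

In the test itself the file is created by the `aer` binary (`-t /tmp/aer_touch_file`) once its event callbacks are registered, so the loop acts as a readiness barrier before the namespace hot-add is issued.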
00:13:16.661   04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
00:13:17.225  Malloc4
00:13:17.225   04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
00:13:17.483  [2024-12-09 04:03:45.806648] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:13:17.483   04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
00:13:17.483  Asynchronous Event Request test
00:13:17.483  Attaching to /var/run/vfio-user/domain/vfio-user2/2
00:13:17.483  Attached to /var/run/vfio-user/domain/vfio-user2/2
00:13:17.483  Registering asynchronous event callbacks...
00:13:17.483  Starting namespace attribute notice tests for all controllers...
00:13:17.483  /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:13:17.483  aer_cb - Changed Namespace
00:13:17.483  Cleaning up...
00:13:17.741  [
00:13:17.741    {
00:13:17.741      "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:13:17.741      "subtype": "Discovery",
00:13:17.741      "listen_addresses": [],
00:13:17.741      "allow_any_host": true,
00:13:17.741      "hosts": []
00:13:17.741    },
00:13:17.741    {
00:13:17.741      "nqn": "nqn.2019-07.io.spdk:cnode1",
00:13:17.741      "subtype": "NVMe",
00:13:17.741      "listen_addresses": [
00:13:17.741        {
00:13:17.741          "trtype": "VFIOUSER",
00:13:17.741          "adrfam": "IPv4",
00:13:17.741          "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:13:17.741          "trsvcid": "0"
00:13:17.741        }
00:13:17.741      ],
00:13:17.741      "allow_any_host": true,
00:13:17.741      "hosts": [],
00:13:17.741      "serial_number": "SPDK1",
00:13:17.741      "model_number": "SPDK bdev Controller",
00:13:17.741      "max_namespaces": 32,
00:13:17.741      "min_cntlid": 1,
00:13:17.741      "max_cntlid": 65519,
00:13:17.741      "namespaces": [
00:13:17.741        {
00:13:17.741          "nsid": 1,
00:13:17.741          "bdev_name": "Malloc1",
00:13:17.741          "name": "Malloc1",
00:13:17.741          "nguid": "07D9A539FF234D2C94FF04FF7F2B2437",
00:13:17.741          "uuid": "07d9a539-ff23-4d2c-94ff-04ff7f2b2437"
00:13:17.741        },
00:13:17.741        {
00:13:17.741          "nsid": 2,
00:13:17.741          "bdev_name": "Malloc3",
00:13:17.741          "name": "Malloc3",
00:13:17.741          "nguid": "FE3543652DAC4D0FB8FA008A85669FA7",
00:13:17.741          "uuid": "fe354365-2dac-4d0f-b8fa-008a85669fa7"
00:13:17.741        }
00:13:17.741      ]
00:13:17.741    },
00:13:17.741    {
00:13:17.741      "nqn": "nqn.2019-07.io.spdk:cnode2",
00:13:17.741      "subtype": "NVMe",
00:13:17.741      "listen_addresses": [
00:13:17.741        {
00:13:17.741          "trtype": "VFIOUSER",
00:13:17.741          "adrfam": "IPv4",
00:13:17.741          "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:13:17.741          "trsvcid": "0"
00:13:17.741        }
00:13:17.741      ],
00:13:17.741      "allow_any_host": true,
00:13:17.741      "hosts": [],
00:13:17.741      "serial_number": "SPDK2",
00:13:17.741      "model_number": "SPDK bdev Controller",
00:13:17.741      "max_namespaces": 32,
00:13:17.741      "min_cntlid": 1,
00:13:17.741      "max_cntlid": 65519,
00:13:17.741      "namespaces": [
00:13:17.741        {
00:13:17.741          "nsid": 1,
00:13:17.741          "bdev_name": "Malloc2",
00:13:17.741          "name": "Malloc2",
00:13:17.741          "nguid": "1F14A502DA0A41F2920C11B007901159",
00:13:17.741          "uuid": "1f14a502-da0a-41f2-920c-11b007901159"
00:13:17.741        },
00:13:17.741        {
00:13:17.741          "nsid": 2,
00:13:17.741          "bdev_name": "Malloc4",
00:13:17.741          "name": "Malloc4",
00:13:17.741          "nguid": "2D95B702C2E440178768BBC44C8575A9",
00:13:17.741          "uuid": "2d95b702-c2e4-4017-8768-bbc44c8575a9"
00:13:17.741        }
00:13:17.741      ]
00:13:17.742    }
00:13:17.742  ]
00:13:17.742   04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 210264
00:13:17.742   04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user
00:13:17.742   04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 203925
00:13:17.742   04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 203925 ']'
00:13:17.742   04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 203925
00:13:17.742    04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname
00:13:17.742   04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:17.742    04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 203925
00:13:17.742   04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:13:17.742   04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:13:17.742   04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 203925'
00:13:17.742  killing process with pid 203925
00:13:17.742   04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 203925
00:13:17.742   04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 203925
00:13:17.999   04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user
00:13:17.999   04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:13:17.999   04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I'
00:13:17.999   04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode
00:13:17.999   04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I'
00:13:17.999   04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=210412
00:13:17.999   04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode
00:13:17.999   04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 210412'
00:13:17.999  Process pid: 210412
00:13:17.999   04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:13:17.999   04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 210412
00:13:17.999   04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 210412 ']'
00:13:17.999   04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:17.999   04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:17.999   04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:17.999  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:17.999   04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:17.999   04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x
00:13:17.999  [2024-12-09 04:03:46.499644] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:13:17.999  [2024-12-09 04:03:46.500691] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:13:17.999  [2024-12-09 04:03:46.500750] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:17.999  [2024-12-09 04:03:46.565017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:18.257  [2024-12-09 04:03:46.619207] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:18.257  [2024-12-09 04:03:46.619270] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:18.257  [2024-12-09 04:03:46.619306] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:18.257  [2024-12-09 04:03:46.619318] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:18.257  [2024-12-09 04:03:46.619328] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:18.257  [2024-12-09 04:03:46.620718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:13:18.257  [2024-12-09 04:03:46.620779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:13:18.257  [2024-12-09 04:03:46.620846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:13:18.257  [2024-12-09 04:03:46.620849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:18.257  [2024-12-09 04:03:46.704020] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:13:18.257  [2024-12-09 04:03:46.704263] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:13:18.257  [2024-12-09 04:03:46.704580] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:13:18.257  [2024-12-09 04:03:46.705209] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:13:18.257  [2024-12-09 04:03:46.705458] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:13:18.257   04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:18.257   04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0
00:13:18.257   04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1
00:13:19.189   04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
00:13:19.755   04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user
00:13:19.755    04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2
00:13:19.755   04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:13:19.755   04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1
00:13:19.755   04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:13:20.015  Malloc1
00:13:20.015   04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
00:13:20.273   04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
00:13:20.532   04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
00:13:20.792   04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:13:20.792   04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2
00:13:20.792   04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:13:21.356  Malloc2
00:13:21.356   04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2
00:13:21.614   04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2
00:13:21.871   04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0
00:13:22.128   04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user
00:13:22.128   04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 210412
00:13:22.128   04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 210412 ']'
00:13:22.128   04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 210412
00:13:22.128    04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname
00:13:22.128   04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:22.128    04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 210412
00:13:22.128   04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:13:22.128   04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:13:22.128   04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 210412'
00:13:22.128  killing process with pid 210412
00:13:22.128   04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 210412
00:13:22.128   04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 210412
00:13:22.386   04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user
00:13:22.386   04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:13:22.386  
00:13:22.386  real	0m54.165s
00:13:22.386  user	3m29.209s
00:13:22.386  sys	0m4.032s
00:13:22.386   04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:22.386   04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x
00:13:22.386  ************************************
00:13:22.386  END TEST nvmf_vfio_user
00:13:22.386  ************************************
00:13:22.386   04:03:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp
00:13:22.386   04:03:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:22.386   04:03:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:22.386   04:03:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:13:22.386  ************************************
00:13:22.386  START TEST nvmf_vfio_user_nvme_compliance
00:13:22.386  ************************************
00:13:22.386   04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp
00:13:22.386  * Looking for test storage...
00:13:22.386  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance
00:13:22.386    04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:13:22.386     04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version
00:13:22.386     04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-:
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-:
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<'
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:22.645     04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1
00:13:22.645     04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1
00:13:22.645     04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:22.645     04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1
00:13:22.645     04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2
00:13:22.645     04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2
00:13:22.645     04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:22.645     04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0
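The `lt 1.15 2` / `cmp_versions` trace above splits each version string on `.`, `-`, and `:` into an array and compares the fields numerically, left to right. A hedged re-implementation of that comparison (a sketch of the traced behavior, not the real `scripts/common.sh`):

```shell
# lt A B: succeed (return 0) iff version A sorts strictly before version B,
# comparing dot/dash/colon-separated numeric fields, missing fields as 0.
lt() {
    local IFS=.-:
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i n=${#v1[@]}
    [ "${#v2[@]}" -gt "$n" ] && n=${#v2[@]}
    for ((i = 0; i < n; i++)); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        ((a < b)) && return 0
        ((a > b)) && return 1
    done
    return 1    # equal versions are not "less than"
}

lt 1.15 2 && echo "1.15 < 2"
# prints "1.15 < 2"
```

That matches the traced outcome: lcov 1.15 compares below 2, so the harness selects the older-lcov option set.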
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:13:22.645  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:22.645  		--rc genhtml_branch_coverage=1
00:13:22.645  		--rc genhtml_function_coverage=1
00:13:22.645  		--rc genhtml_legend=1
00:13:22.645  		--rc geninfo_all_blocks=1
00:13:22.645  		--rc geninfo_unexecuted_blocks=1
00:13:22.645  		
00:13:22.645  		'
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:13:22.645  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:22.645  		--rc genhtml_branch_coverage=1
00:13:22.645  		--rc genhtml_function_coverage=1
00:13:22.645  		--rc genhtml_legend=1
00:13:22.645  		--rc geninfo_all_blocks=1
00:13:22.645  		--rc geninfo_unexecuted_blocks=1
00:13:22.645  		
00:13:22.645  		'
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:13:22.645  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:22.645  		--rc genhtml_branch_coverage=1
00:13:22.645  		--rc genhtml_function_coverage=1
00:13:22.645  		--rc genhtml_legend=1
00:13:22.645  		--rc geninfo_all_blocks=1
00:13:22.645  		--rc geninfo_unexecuted_blocks=1
00:13:22.645  		
00:13:22.645  		'
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:13:22.645  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:22.645  		--rc genhtml_branch_coverage=1
00:13:22.645  		--rc genhtml_function_coverage=1
00:13:22.645  		--rc genhtml_legend=1
00:13:22.645  		--rc geninfo_all_blocks=1
00:13:22.645  		--rc geninfo_unexecuted_blocks=1
00:13:22.645  		
00:13:22.645  		'
00:13:22.645   04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:13:22.645     04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:13:22.645     04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:13:22.645    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:13:22.645     04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob
00:13:22.645     04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:13:22.646     04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:13:22.646     04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:13:22.646      04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:22.646      04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:22.646      04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:22.646      04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH
00:13:22.646      04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
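The export.sh trace above shows PATH accumulating the same `/opt/...` directories many times over, because each sourcing of the script prepends its directories unconditionally. A minimal sketch of a guard that prepends only when the directory is absent (`path_prepend` is a hypothetical helper, not part of SPDK's export.sh):

```shell
#!/bin/sh
# Prepend a directory to PATH only if it is not already present.
# path_prepend is a hypothetical name; export.sh itself prepends unconditionally.
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;              # already present: leave PATH unchanged
        *) PATH="$1:$PATH" ;;     # otherwise prepend
    esac
}

PATH="/usr/bin:/bin"
path_prepend /opt/go/1.21.1/bin
path_prepend /opt/go/1.21.1/bin   # second call is a no-op
```

Sourcing a script built this way any number of times leaves each directory in PATH exactly once.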
00:13:22.646    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0
00:13:22.646    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:13:22.646    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:13:22.646    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:13:22.646    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:13:22.646    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:13:22.646    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:13:22.646  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
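The `integer expression expected` message above comes from the traced command `'[' '' -eq 1 ']'`: the `-eq` operator requires both operands to be integers, so an empty (unset) variable makes `[` print the error and return status 2, and the script falls through to the false branch. A small sketch of the failure mode and a tolerant guard (`MAYBE_FLAG` is a hypothetical variable name standing in for the unset value in common.sh):

```shell
#!/bin/sh
# Reproduce common.sh line 33: comparing an empty value with -eq.
unset MAYBE_FLAG
if [ "$MAYBE_FLAG" -eq 1 ] 2>/dev/null; then   # error suppressed; status 2
    echo "flag set"
else
    echo "flag unset or non-integer"
fi

# A guard that defaults an empty value to 0 before the numeric test:
if [ "${MAYBE_FLAG:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag unset"
fi
```

With the `${var:-0}` default, the comparison is always between two integers and the error disappears.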
00:13:22.646    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:13:22.646    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:13:22.646    04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0
00:13:22.646   04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64
00:13:22.646   04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:13:22.646   04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER
00:13:22.646   04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER
00:13:22.646   04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user
00:13:22.646   04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=211026
00:13:22.646   04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7
00:13:22.646   04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 211026'
00:13:22.646  Process pid: 211026
00:13:22.646   04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:13:22.646   04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 211026
00:13:22.646   04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 211026 ']'
00:13:22.646   04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:22.646   04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:22.646   04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:22.646  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:22.646   04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:22.646   04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:13:22.646  [2024-12-09 04:03:51.100780] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:13:22.646  [2024-12-09 04:03:51.100858] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:22.646  [2024-12-09 04:03:51.166869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:13:22.904  [2024-12-09 04:03:51.222441] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:22.904  [2024-12-09 04:03:51.222493] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:22.904  [2024-12-09 04:03:51.222520] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:22.904  [2024-12-09 04:03:51.222531] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:22.904  [2024-12-09 04:03:51.222540] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:22.904  [2024-12-09 04:03:51.223986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:13:22.904  [2024-12-09 04:03:51.224053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:13:22.904  [2024-12-09 04:03:51.224057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:22.904   04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:22.904   04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0
00:13:22.904   04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1
00:13:23.837   04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0
00:13:23.837   04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user
00:13:23.837   04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER
00:13:23.837   04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:23.837   04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:13:23.837   04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:23.837   04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user
00:13:23.837   04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0
00:13:23.837   04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:23.837   04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:13:23.837  malloc0
00:13:23.837   04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:23.837   04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
00:13:23.837   04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:23.837   04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:13:23.837   04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:23.837   04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
00:13:23.837   04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:23.837   04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:13:24.095   04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:24.095   04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
00:13:24.095   04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:24.095   04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:13:24.095   04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
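The RPC sequence traced above (create the VFIOUSER transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem, a namespace, and a listener on the vfio-user socket directory) can be issued by hand with SPDK's `rpc.py` against a running `nvmf_tgt`. A dry-run sketch mirroring the compliance.sh calls; `RUN=echo` prints the commands instead of executing them (clear it to run for real from an SPDK checkout with the target already up):

```shell
#!/bin/sh
# Dry-run sketch of the vfio-user target setup shown in the trace.
RUN=echo                              # set RUN= to execute against a live nvmf_tgt
RPC="scripts/rpc.py"                  # assumption: invoked from an SPDK checkout
NQN=nqn.2021-09.io.spdk:cnode0
TRADDR=/var/run/vfio-user

$RUN mkdir -p "$TRADDR"
$RUN "$RPC" nvmf_create_transport -t VFIOUSER
$RUN "$RPC" bdev_malloc_create 64 512 -b malloc0     # 64 MiB bdev, 512 B blocks
$RUN "$RPC" nvmf_create_subsystem "$NQN" -a -s spdk -m 32
$RUN "$RPC" nvmf_subsystem_add_ns "$NQN" malloc0
$RUN "$RPC" nvmf_subsystem_add_listener "$NQN" -t VFIOUSER -a "$TRADDR" -s 0
```

The RPC names and arguments are taken verbatim from the trace; only the `RUN`/`RPC` wrapping is added for the dry run.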
00:13:24.095   04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'
00:13:24.095  
00:13:24.095  
00:13:24.095       CUnit - A unit testing framework for C - Version 2.1-3
00:13:24.095       http://cunit.sourceforge.net/
00:13:24.095  
00:13:24.095  
00:13:24.095  Suite: nvme_compliance
00:13:24.095    Test: admin_identify_ctrlr_verify_dptr ...[2024-12-09 04:03:52.602780] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:13:24.095  [2024-12-09 04:03:52.604321] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining
00:13:24.095  [2024-12-09 04:03:52.604355] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed
00:13:24.095  [2024-12-09 04:03:52.604369] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed
00:13:24.095  [2024-12-09 04:03:52.605794] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:13:24.095  passed
00:13:24.353    Test: admin_identify_ctrlr_verify_fused ...[2024-12-09 04:03:52.692416] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:13:24.353  [2024-12-09 04:03:52.695438] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:13:24.353  passed
00:13:24.353    Test: admin_identify_ns ...[2024-12-09 04:03:52.780802] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:13:24.353  [2024-12-09 04:03:52.840303] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0
00:13:24.353  [2024-12-09 04:03:52.848293] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295
00:13:24.353  [2024-12-09 04:03:52.869403] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:13:24.353  passed
00:13:24.611    Test: admin_get_features_mandatory_features ...[2024-12-09 04:03:52.952913] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:13:24.611  [2024-12-09 04:03:52.958945] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:13:24.611  passed
00:13:24.611    Test: admin_get_features_optional_features ...[2024-12-09 04:03:53.043519] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:13:24.611  [2024-12-09 04:03:53.046539] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:13:24.611  passed
00:13:24.611    Test: admin_set_features_number_of_queues ...[2024-12-09 04:03:53.128783] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:13:24.869  [2024-12-09 04:03:53.233390] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:13:24.869  passed
00:13:24.869    Test: admin_get_log_page_mandatory_logs ...[2024-12-09 04:03:53.316942] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:13:24.869  [2024-12-09 04:03:53.319962] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:13:24.869  passed
00:13:24.869    Test: admin_get_log_page_with_lpo ...[2024-12-09 04:03:53.398754] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:13:25.127  [2024-12-09 04:03:53.470292] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512)
00:13:25.127  [2024-12-09 04:03:53.483345] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:13:25.127  passed
00:13:25.127    Test: fabric_property_get ...[2024-12-09 04:03:53.562946] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:13:25.127  [2024-12-09 04:03:53.564224] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed
00:13:25.127  [2024-12-09 04:03:53.565967] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:13:25.127  passed
00:13:25.127    Test: admin_delete_io_sq_use_admin_qid ...[2024-12-09 04:03:53.651532] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:13:25.127  [2024-12-09 04:03:53.652855] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist
00:13:25.127  [2024-12-09 04:03:53.654567] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:13:25.127  passed
00:13:25.386    Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-09 04:03:53.736787] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:13:25.386  [2024-12-09 04:03:53.820280] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist
00:13:25.386  [2024-12-09 04:03:53.836282] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist
00:13:25.386  [2024-12-09 04:03:53.841391] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:13:25.386  passed
00:13:25.386    Test: admin_delete_io_cq_use_admin_qid ...[2024-12-09 04:03:53.924928] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:13:25.386  [2024-12-09 04:03:53.926251] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist
00:13:25.386  [2024-12-09 04:03:53.927946] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:13:25.386  passed
00:13:25.643    Test: admin_delete_io_cq_delete_cq_first ...[2024-12-09 04:03:54.010123] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:13:25.643  [2024-12-09 04:03:54.087286] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first
00:13:25.643  [2024-12-09 04:03:54.111281] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist
00:13:25.643  [2024-12-09 04:03:54.116394] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:13:25.643  passed
00:13:25.643    Test: admin_create_io_cq_verify_iv_pc ...[2024-12-09 04:03:54.198900] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:13:25.643  [2024-12-09 04:03:54.200230] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big
00:13:25.643  [2024-12-09 04:03:54.200282] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported
00:13:25.643  [2024-12-09 04:03:54.201924] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:13:25.901  passed
00:13:25.901    Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-09 04:03:54.286218] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:13:25.901  [2024-12-09 04:03:54.383289] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1
00:13:25.901  [2024-12-09 04:03:54.391294] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257
00:13:25.901  [2024-12-09 04:03:54.399287] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0
00:13:25.901  [2024-12-09 04:03:54.407289] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128
00:13:25.901  [2024-12-09 04:03:54.436380] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:13:25.901  passed
00:13:26.159    Test: admin_create_io_sq_verify_pc ...[2024-12-09 04:03:54.518609] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:13:26.159  [2024-12-09 04:03:54.534311] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported
00:13:26.159  [2024-12-09 04:03:54.551971] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:13:26.159  passed
00:13:26.159    Test: admin_create_io_qp_max_qps ...[2024-12-09 04:03:54.637589] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:13:27.544  [2024-12-09 04:03:55.738306] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs
00:13:27.801  [2024-12-09 04:03:56.132934] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:13:27.801  passed
00:13:27.801    Test: admin_create_io_sq_shared_cq ...[2024-12-09 04:03:56.216337] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:13:27.801  [2024-12-09 04:03:56.347286] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first
00:13:28.058  [2024-12-09 04:03:56.387372] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:13:28.058  passed
00:13:28.058  
00:13:28.058  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:13:28.058                suites      1      1    n/a      0        0
00:13:28.058                 tests     18     18     18      0        0
00:13:28.058               asserts    360    360    360      0      n/a
00:13:28.058  
00:13:28.058  Elapsed time =    1.570 seconds
00:13:28.058   04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 211026
00:13:28.058   04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 211026 ']'
00:13:28.058   04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 211026
00:13:28.058    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname
00:13:28.058   04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:28.058    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 211026
00:13:28.058   04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:13:28.058   04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:13:28.058   04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 211026'
00:13:28.058  killing process with pid 211026
00:13:28.058   04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 211026
00:13:28.058   04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 211026
00:13:28.316   04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user
00:13:28.316   04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT
00:13:28.316  
00:13:28.316  real	0m5.854s
00:13:28.316  user	0m16.447s
00:13:28.316  sys	0m0.532s
00:13:28.316   04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:28.316   04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:13:28.316  ************************************
00:13:28.316  END TEST nvmf_vfio_user_nvme_compliance
00:13:28.316  ************************************
00:13:28.316   04:03:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
00:13:28.316   04:03:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:28.316   04:03:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:28.316   04:03:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:13:28.316  ************************************
00:13:28.316  START TEST nvmf_vfio_user_fuzz
00:13:28.316  ************************************
00:13:28.316   04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
00:13:28.316  * Looking for test storage...
00:13:28.316  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:13:28.316    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:13:28.316     04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version
00:13:28.316     04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-:
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-:
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<'
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:28.575     04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1
00:13:28.575     04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1
00:13:28.575     04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:28.575     04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1
00:13:28.575     04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2
00:13:28.575     04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2
00:13:28.575     04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:28.575     04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0
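The `cmp_versions` trace above splits both version strings on `.`, `-` or `:` (via `IFS=.-:` and `read -ra`) and compares the fields numerically left to right, padding the shorter version with zeros. A compact sketch of the same idea (`lt_version` is a hypothetical name standing in for the `lt`/`cmp_versions` pair):

```shell
#!/usr/bin/env bash
# Field-by-field numeric version comparison, as traced from scripts/common.sh.
lt_version() {                        # returns 0 (true) iff $1 < $2
    local IFS=.-:
    local -a v1=($1) v2=($2)
    local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for ((i = 0; i < n; i++)); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields count as 0
        ((a < b)) && return 0
        ((a > b)) && return 1
    done
    return 1                          # equal versions are not less-than
}
```

On the values in the trace, `lt_version 1.15 2` succeeds at the first field (1 < 2), which is why the script takes the `lt 1.15 2` branch.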
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:13:28.575  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:28.575  		--rc genhtml_branch_coverage=1
00:13:28.575  		--rc genhtml_function_coverage=1
00:13:28.575  		--rc genhtml_legend=1
00:13:28.575  		--rc geninfo_all_blocks=1
00:13:28.575  		--rc geninfo_unexecuted_blocks=1
00:13:28.575  		
00:13:28.575  		'
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:13:28.575  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:28.575  		--rc genhtml_branch_coverage=1
00:13:28.575  		--rc genhtml_function_coverage=1
00:13:28.575  		--rc genhtml_legend=1
00:13:28.575  		--rc geninfo_all_blocks=1
00:13:28.575  		--rc geninfo_unexecuted_blocks=1
00:13:28.575  		
00:13:28.575  		'
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:13:28.575  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:28.575  		--rc genhtml_branch_coverage=1
00:13:28.575  		--rc genhtml_function_coverage=1
00:13:28.575  		--rc genhtml_legend=1
00:13:28.575  		--rc geninfo_all_blocks=1
00:13:28.575  		--rc geninfo_unexecuted_blocks=1
00:13:28.575  		
00:13:28.575  		'
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:13:28.575  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:28.575  		--rc genhtml_branch_coverage=1
00:13:28.575  		--rc genhtml_function_coverage=1
00:13:28.575  		--rc genhtml_legend=1
00:13:28.575  		--rc geninfo_all_blocks=1
00:13:28.575  		--rc geninfo_unexecuted_blocks=1
00:13:28.575  		
00:13:28.575  		'
00:13:28.575   04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:13:28.575     04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:13:28.575     04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:13:28.575    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:13:28.575     04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob
00:13:28.575     04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:13:28.575     04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:13:28.575     04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:13:28.575      04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:28.575      04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:28.575      04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:28.575      04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH
00:13:28.575      04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:28.576    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0
00:13:28.576    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:13:28.576    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:13:28.576    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:13:28.576    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:13:28.576    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:13:28.576    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:13:28.576  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:13:28.576    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:13:28.576    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:13:28.576    04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0
00:13:28.576   04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64
00:13:28.576   04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:13:28.576   04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0
00:13:28.576   04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user
00:13:28.576   04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER
00:13:28.576   04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER
00:13:28.576   04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user
00:13:28.576   04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=211846
00:13:28.576   04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:13:28.576   04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 211846'
00:13:28.576  Process pid: 211846
00:13:28.576   04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:13:28.576   04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 211846
00:13:28.576   04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 211846 ']'
00:13:28.576   04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:28.576   04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:28.576   04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:28.576  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:28.576   04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:28.576   04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:13:28.834   04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:28.834   04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0
00:13:28.834   04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1
00:13:29.766   04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER
00:13:29.766   04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:29.766   04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:13:29.766   04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:29.766   04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user
00:13:29.766   04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0
00:13:29.766   04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:29.766   04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:13:29.766  malloc0
00:13:29.766   04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:29.766   04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
00:13:29.766   04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:29.766   04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:13:29.766   04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:29.766   04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
00:13:29.766   04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:29.766   04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:13:29.766   04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:29.766   04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
00:13:29.766   04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:29.766   04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:13:29.766   04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:29.766   04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
00:13:29.766   04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a
00:14:01.823  Fuzzing completed. Shutting down the fuzz application
00:14:01.823  
00:14:01.823  Dumping successful admin opcodes:
00:14:01.823  9, 10, 
00:14:01.823  Dumping successful io opcodes:
00:14:01.823  0, 
00:14:01.823  NS: 0x20000081ef00 I/O qp, Total commands completed: 676156, total successful commands: 2634, random_seed: 3906279744
00:14:01.823  NS: 0x20000081ef00 admin qp, Total commands completed: 124240, total successful commands: 29, random_seed: 965392896
00:14:01.823   04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0
00:14:01.823   04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:01.823   04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:14:01.823   04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:01.823   04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 211846
00:14:01.823   04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 211846 ']'
00:14:01.823   04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 211846
00:14:01.823    04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname
00:14:01.823   04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:01.823    04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 211846
00:14:01.823   04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:01.823   04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:01.823   04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 211846'
00:14:01.823  killing process with pid 211846
00:14:01.823   04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 211846
00:14:01.823   04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 211846
00:14:01.823   04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt
00:14:01.823   04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT
00:14:01.823  
00:14:01.823  real	0m32.278s
00:14:01.823  user	0m33.054s
00:14:01.823  sys	0m26.054s
00:14:01.823   04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:01.823   04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:14:01.823  ************************************
00:14:01.823  END TEST nvmf_vfio_user_fuzz
00:14:01.823  ************************************
00:14:01.823   04:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp
00:14:01.823   04:04:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:14:01.823   04:04:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:01.823   04:04:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:14:01.823  ************************************
00:14:01.823  START TEST nvmf_auth_target
00:14:01.823  ************************************
00:14:01.823   04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp
00:14:01.823  * Looking for test storage...
00:14:01.823  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:14:01.823    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:14:01.823     04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version
00:14:01.823     04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:14:01.823    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:14:01.823    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:01.823    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:01.823    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:01.823    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-:
00:14:01.823    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1
00:14:01.823    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-:
00:14:01.823    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2
00:14:01.823    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<'
00:14:01.823    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2
00:14:01.823    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1
00:14:01.823    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:01.823    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in
00:14:01.823    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1
00:14:01.823    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:01.823    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:01.823     04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1
00:14:01.823     04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1
00:14:01.823     04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:01.823     04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1
00:14:01.823    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1
00:14:01.823     04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2
00:14:01.823     04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2
00:14:01.823     04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:01.823     04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2
00:14:01.823    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2
00:14:01.823    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:01.823    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:01.823    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0
00:14:01.823    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:01.823    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:14:01.823  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:01.823  		--rc genhtml_branch_coverage=1
00:14:01.823  		--rc genhtml_function_coverage=1
00:14:01.823  		--rc genhtml_legend=1
00:14:01.823  		--rc geninfo_all_blocks=1
00:14:01.823  		--rc geninfo_unexecuted_blocks=1
00:14:01.823  		
00:14:01.823  		'
00:14:01.823    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:14:01.823  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:01.823  		--rc genhtml_branch_coverage=1
00:14:01.823  		--rc genhtml_function_coverage=1
00:14:01.823  		--rc genhtml_legend=1
00:14:01.823  		--rc geninfo_all_blocks=1
00:14:01.823  		--rc geninfo_unexecuted_blocks=1
00:14:01.823  		
00:14:01.823  		'
00:14:01.823    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:14:01.823  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:01.823  		--rc genhtml_branch_coverage=1
00:14:01.823  		--rc genhtml_function_coverage=1
00:14:01.823  		--rc genhtml_legend=1
00:14:01.823  		--rc geninfo_all_blocks=1
00:14:01.823  		--rc geninfo_unexecuted_blocks=1
00:14:01.823  		
00:14:01.823  		'
00:14:01.824    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:14:01.824  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:01.824  		--rc genhtml_branch_coverage=1
00:14:01.824  		--rc genhtml_function_coverage=1
00:14:01.824  		--rc genhtml_legend=1
00:14:01.824  		--rc geninfo_all_blocks=1
00:14:01.824  		--rc geninfo_unexecuted_blocks=1
00:14:01.824  		
00:14:01.824  		'
00:14:01.824   04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:14:01.824     04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s
00:14:01.824    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:14:01.824    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:14:01.824    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:14:01.824    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:14:01.824    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:14:01.824    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:14:01.824    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:14:01.824    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:14:01.824    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:14:01.824     04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:14:01.824    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:01.824    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:14:01.824    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:14:01.824    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:14:01.824    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:14:01.824    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:14:01.824    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:14:01.824     04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob
00:14:01.824     04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:01.824     04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:01.824     04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:01.824      04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:01.824      04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:01.824      04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:01.824      04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH
00:14:01.824      04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:01.824    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0
00:14:01.824    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:14:01.824    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:14:01.824    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:14:01.824    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:14:01.824    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:14:01.824    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:14:01.824  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:14:01.824    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:14:01.824    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:14:01.824    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0
00:14:01.824   04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512")
00:14:01.824   04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
00:14:01.824   04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0
00:14:01.824   04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:01.824   04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock
00:14:01.824   04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=()
00:14:01.824   04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=()
00:14:01.824   04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit
00:14:01.824   04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:14:01.824   04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:14:01.824   04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs
00:14:01.824   04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no
00:14:01.824   04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns
00:14:01.824   04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:01.824   04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:14:01.824    04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:01.824   04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:14:01.824   04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:14:01.824   04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable
00:14:01.824   04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:03.203   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:14:03.203   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=()
00:14:03.203   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs
00:14:03.203   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=()
00:14:03.203   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:14:03.203   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=()
00:14:03.203   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers
00:14:03.203   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=()
00:14:03.203   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs
00:14:03.203   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=()
00:14:03.203   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810
00:14:03.203   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=()
00:14:03.203   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722
00:14:03.203   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=()
00:14:03.203   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx
00:14:03.203   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:14:03.203   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:14:03.203   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:14:03.203   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:14:03.203   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:14:03.203   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:14:03.203   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:14:03.203   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:14:03.203   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:14:03.203   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:14:03.203   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:14:03.203   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:14:03.203   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:14:03.203   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:14:03.203   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:14:03.203   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:14:03.204  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:14:03.204  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:14:03.204  Found net devices under 0000:0a:00.0: cvl_0_0
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:14:03.204  Found net devices under 0000:0a:00.1: cvl_0_1
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:14:03.204  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:03.204  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms
00:14:03.204  
00:14:03.204  --- 10.0.0.2 ping statistics ---
00:14:03.204  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:03.204  rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:03.204  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:03.204  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms
00:14:03.204  
00:14:03.204  --- 10.0.0.1 ping statistics ---
00:14:03.204  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:03.204  rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=217205
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 217205
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 217205 ']'
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:03.204   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:03.463   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:03.463   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:14:03.463   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:14:03.463   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable
00:14:03.463   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:03.463   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:14:03.463   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=217228
00:14:03.463   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth
00:14:03.463   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT
00:14:03.463    04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48
00:14:03.463    04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:14:03.463    04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:14:03.463    04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:14:03.463    04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null
00:14:03.463    04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48
00:14:03.463     04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:14:03.463    04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4e5463775e454d924c31e6b543edc7658848ec1a7123d40c
00:14:03.463     04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX
00:14:03.463    04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.PM3
00:14:03.463    04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4e5463775e454d924c31e6b543edc7658848ec1a7123d40c 0
00:14:03.463    04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4e5463775e454d924c31e6b543edc7658848ec1a7123d40c 0
00:14:03.463    04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:14:03.463    04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:14:03.463    04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4e5463775e454d924c31e6b543edc7658848ec1a7123d40c
00:14:03.463    04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0
00:14:03.463    04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:14:03.463    04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.PM3
00:14:03.463    04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.PM3
00:14:03.463   04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.PM3
00:14:03.463    04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64
00:14:03.463    04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:14:03.463    04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:14:03.463    04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:14:03.463    04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512
00:14:03.463    04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64
00:14:03.463     04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom
00:14:03.463    04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fa77212f0183d21c15428b6dd83e15a1084d5bb83c82c1e4425151f481a09a83
00:14:03.463     04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX
00:14:03.463    04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.RCN
00:14:03.463    04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fa77212f0183d21c15428b6dd83e15a1084d5bb83c82c1e4425151f481a09a83 3
00:14:03.463    04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fa77212f0183d21c15428b6dd83e15a1084d5bb83c82c1e4425151f481a09a83 3
00:14:03.463    04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:14:03.463    04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:14:03.463    04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fa77212f0183d21c15428b6dd83e15a1084d5bb83c82c1e4425151f481a09a83
00:14:03.463    04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3
00:14:03.463    04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:14:03.463    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.RCN
00:14:03.463    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.RCN
00:14:03.463   04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.RCN
00:14:03.463    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32
00:14:03.463    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:14:03.463    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:14:03.463    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:14:03.463    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256
00:14:03.463    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32
00:14:03.463     04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:14:03.463    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=dcf8b536e14513688d8f498e3a27e6f7
00:14:03.463     04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX
00:14:03.722    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.tVR
00:14:03.722    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key dcf8b536e14513688d8f498e3a27e6f7 1
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 dcf8b536e14513688d8f498e3a27e6f7 1
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=dcf8b536e14513688d8f498e3a27e6f7
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.tVR
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.tVR
00:14:03.723   04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.tVR
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48
00:14:03.723     04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cc838851ed954d997310cb760184d0df116e421cdcbf58de
00:14:03.723     04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.1NG
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cc838851ed954d997310cb760184d0df116e421cdcbf58de 2
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cc838851ed954d997310cb760184d0df116e421cdcbf58de 2
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cc838851ed954d997310cb760184d0df116e421cdcbf58de
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.1NG
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.1NG
00:14:03.723   04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.1NG
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48
00:14:03.723     04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7d2252b4f74e7df434bbb64c149271cc1c2407c9f3d8400b
00:14:03.723     04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.r4l
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 7d2252b4f74e7df434bbb64c149271cc1c2407c9f3d8400b 2
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 7d2252b4f74e7df434bbb64c149271cc1c2407c9f3d8400b 2
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7d2252b4f74e7df434bbb64c149271cc1c2407c9f3d8400b
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.r4l
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.r4l
00:14:03.723   04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.r4l
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32
00:14:03.723     04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=38c08662acc3c6e7796d807a72edb3e3
00:14:03.723     04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Fwb
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 38c08662acc3c6e7796d807a72edb3e3 1
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 38c08662acc3c6e7796d807a72edb3e3 1
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=38c08662acc3c6e7796d807a72edb3e3
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Fwb
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Fwb
00:14:03.723   04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Fwb
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64
00:14:03.723     04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=39028326f3b335bfee9e96186ab5d3edf70b1e0f6b0a5c315fcc0c80909d5d70
00:14:03.723     04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.y9n
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 39028326f3b335bfee9e96186ab5d3edf70b1e0f6b0a5c315fcc0c80909d5d70 3
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 39028326f3b335bfee9e96186ab5d3edf70b1e0f6b0a5c315fcc0c80909d5d70 3
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=39028326f3b335bfee9e96186ab5d3edf70b1e0f6b0a5c315fcc0c80909d5d70
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.y9n
00:14:03.723    04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.y9n
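The @751-@760 trace above is nvmf/common.sh generating one DH-HMAC-CHAP key file. A hedged reconstruction of that flow (function and variable names are inferred from the trace; the `xxd`/`python3` usage mirrors what the trace shows, and the DHHC-1 framing — base64 of the secret bytes plus a little-endian CRC32 trailer — matches what the secrets later in this log decode to):

```shell
# Sketch of gen_dhchap_key for sha512 (digest id 3, 64-hex-char secret).
# NOTE: the ASCII hex string itself is used as the secret payload, not its
# decoded bytes -- the base64 payloads seen later in this log decode to hex text.
key=$(xxd -p -c0 -l 32 /dev/urandom)   # 64 hex characters
file=$(mktemp -t spdk.key-sha512.XXX)
python3 - "$key" 3 > "$file" <<'EOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")        # 4-byte CRC32 trailer
print("DHHC-1:%02x:%s:" % (digest, base64.b64encode(key + crc).decode()))
EOF
chmod 0600 "$file"   # key material must not be group/world readable
echo "$file"
```

The resulting file holds a single `DHHC-1:03:<base64>:` line, which is what `keyring_file_add_key` registers below.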
00:14:03.723   04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.y9n
00:14:03.723   04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]=
00:14:03.723   04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 217205
00:14:03.723   04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 217205 ']'
00:14:03.723   04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:03.723   04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:03.723   04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:03.723  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:03.723   04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:03.723   04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:04.290   04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:04.290   04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:14:04.290   04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 217228 /var/tmp/host.sock
00:14:04.290   04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 217228 ']'
00:14:04.290   04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock
00:14:04.290   04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:04.290   04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...'
00:14:04.290  Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...
00:14:04.290   04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:04.290   04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:04.290   04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:04.290   04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:14:04.290   04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd
00:14:04.290   04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:04.290   04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:04.290   04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:04.290   04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:14:04.290   04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.PM3
00:14:04.290   04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:04.290   04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:04.290   04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:04.290   04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.PM3
00:14:04.290   04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.PM3
00:14:04.548   04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.RCN ]]
00:14:04.548   04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.RCN
00:14:04.548   04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:04.548   04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:04.805   04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:04.805   04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.RCN
00:14:04.805   04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.RCN
00:14:05.062   04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:14:05.062   04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.tVR
00:14:05.062   04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:05.062   04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:05.062   04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:05.062   04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.tVR
00:14:05.062   04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.tVR
00:14:05.319   04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.1NG ]]
00:14:05.319   04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1NG
00:14:05.319   04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:05.319   04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:05.319   04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:05.319   04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1NG
00:14:05.319   04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1NG
00:14:05.576   04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:14:05.576   04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.r4l
00:14:05.576   04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:05.576   04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:05.576   04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:05.576   04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.r4l
00:14:05.576   04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.r4l
00:14:05.833   04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Fwb ]]
00:14:05.833   04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Fwb
00:14:05.833   04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:05.833   04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:05.833   04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:05.833   04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Fwb
00:14:05.833   04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Fwb
00:14:06.090   04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:14:06.090   04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.y9n
00:14:06.090   04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:06.090   04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:06.090   04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:06.090   04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.y9n
00:14:06.090   04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.y9n
00:14:06.346   04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]]
00:14:06.346   04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:14:06.346   04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:14:06.346   04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:06.346   04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:14:06.346   04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
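The @118-@121 markers repeating through the rest of this section suggest target/auth.sh sweeps a digest × dhgroup × key-id matrix, reconfiguring the host side with `bdev_nvme_set_options` before each authenticated connect. A stubbed sketch of that control flow (this is a reconstruction; `hostrpc` and `connect_authenticate` are replaced by `echo` stand-ins so the loop shape is visible):

```shell
# Stub out the RPC helpers so the sweep can run anywhere.
hostrpc() { echo "hostrpc $*"; }
connect_authenticate() { echo "connect_authenticate $*"; }

digests=(sha256) dhgroups=(null) keys=(k0 k1 k2 k3)

sweep() {
  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        # Restrict the host to one digest/dhgroup pair, then authenticate
        # with each registered key in turn (keyid 0..3 in this log).
        hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
    done
  done
}

sweep
```

With one digest and one dhgroup, as in the sha256/null pass traced here, the sweep performs four set-options/connect rounds, one per key id.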
00:14:06.603   04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0
00:14:06.603   04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:06.603   04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:06.603   04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:14:06.603   04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:14:06.603   04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:06.603   04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:06.603   04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:06.603   04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:06.603   04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:06.603   04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:06.603   04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:06.603   04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:06.860  
00:14:06.860    04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:06.860    04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:06.860    04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:07.117   04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:07.117    04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:07.117    04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:07.117    04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:07.374    04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:07.374   04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:07.374  {
00:14:07.374  "cntlid": 1,
00:14:07.374  "qid": 0,
00:14:07.374  "state": "enabled",
00:14:07.374  "thread": "nvmf_tgt_poll_group_000",
00:14:07.374  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:14:07.374  "listen_address": {
00:14:07.374  "trtype": "TCP",
00:14:07.374  "adrfam": "IPv4",
00:14:07.374  "traddr": "10.0.0.2",
00:14:07.374  "trsvcid": "4420"
00:14:07.374  },
00:14:07.374  "peer_address": {
00:14:07.374  "trtype": "TCP",
00:14:07.374  "adrfam": "IPv4",
00:14:07.374  "traddr": "10.0.0.1",
00:14:07.374  "trsvcid": "60866"
00:14:07.374  },
00:14:07.374  "auth": {
00:14:07.374  "state": "completed",
00:14:07.374  "digest": "sha256",
00:14:07.374  "dhgroup": "null"
00:14:07.374  }
00:14:07.374  }
00:14:07.374  ]'
00:14:07.374    04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:07.374   04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:07.374    04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:07.375   04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:14:07.375    04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:07.375   04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
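The @75-@77 checks above pull three fields out of the qpair dump with `jq` and compare them against the expected configuration. The same checks can be reproduced against a trimmed copy of the JSON payload from this trace (requires `jq`; the JSON is abridged to the fields the checks actually read):

```shell
# Abridged qpair dump from nvmf_subsystem_get_qpairs, as printed above.
qpairs='[{"cntlid": 1, "auth": {"state": "completed", "digest": "sha256", "dhgroup": "null"}}]'

digest=$(jq -r '.[0].auth.digest'  <<< "$qpairs")
dhgroup=$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")
state=$(jq -r '.[0].auth.state'    <<< "$qpairs")

# "completed" means DH-HMAC-CHAP finished on this queue pair; any other
# state would indicate the connect succeeded without (full) authentication.
[[ $digest == sha256 && $dhgroup == null && $state == completed ]] \
  && echo "auth completed with $digest/$dhgroup"
```

Note that `dhgroup` is the JSON string `"null"` (the no-DH group from the NVMe spec), not a missing value, which is why the plain string comparison works.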
00:14:07.375   04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:07.375   04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:07.632   04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=:
00:14:07.632   04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=:
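The `--dhchap-secret`/`--dhchap-ctrl-secret` strings passed to `nvme connect` above are self-describing. A small sketch pulling apart the first secret from this log (the field layout — prefix, two-hex-digit hash id, base64 payload with a 4-byte CRC32 trailer — is inferred from how these secrets decode; hash id `00` appears to mean no hash is pinned):

```shell
# First DHHC-1 secret from the trace above.
secret='DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==:'

# Split on ':' -- base64 never contains a colon, so this is safe.
IFS=: read -r magic hashid payload _ <<< "$secret"
echo "format=$magic hash-id=$hashid"

# Strip the 4-byte CRC32 trailer; what remains is the hex-text secret
# exactly as generated by xxd in the key-generation step.
printf %s "$payload" | base64 -d | head -c -4
echo
```

Decoding the payload this way recovers the original 48-character hex key, which is how the integrity CRC can be checked without any SPDK tooling.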
00:14:12.893   04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:12.893  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:12.893   04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:12.893   04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:12.893   04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:12.893   04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:12.893   04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:12.893   04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:14:12.893   04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:14:12.893   04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1
00:14:12.893   04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:12.893   04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:12.893   04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:14:12.893   04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:14:12.893   04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:12.893   04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:12.893   04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:12.893   04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:12.893   04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:12.893   04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:12.893   04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:12.893   04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:12.893  
00:14:12.893    04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:12.893    04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:12.893    04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:13.151   04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:13.151    04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:13.151    04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:13.151    04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:13.151    04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:13.151   04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:13.151  {
00:14:13.151  "cntlid": 3,
00:14:13.151  "qid": 0,
00:14:13.151  "state": "enabled",
00:14:13.151  "thread": "nvmf_tgt_poll_group_000",
00:14:13.151  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:14:13.151  "listen_address": {
00:14:13.151  "trtype": "TCP",
00:14:13.151  "adrfam": "IPv4",
00:14:13.151  "traddr": "10.0.0.2",
00:14:13.151  "trsvcid": "4420"
00:14:13.151  },
00:14:13.151  "peer_address": {
00:14:13.151  "trtype": "TCP",
00:14:13.151  "adrfam": "IPv4",
00:14:13.151  "traddr": "10.0.0.1",
00:14:13.151  "trsvcid": "52846"
00:14:13.151  },
00:14:13.151  "auth": {
00:14:13.151  "state": "completed",
00:14:13.151  "digest": "sha256",
00:14:13.151  "dhgroup": "null"
00:14:13.151  }
00:14:13.151  }
00:14:13.151  ]'
00:14:13.151    04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:13.151   04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:13.151    04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:13.151   04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:14:13.151    04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:13.151   04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:13.151   04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:13.151   04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:13.409   04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==:
00:14:13.409   04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==:
00:14:14.342   04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:14.342  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:14.342   04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:14.342   04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:14.342   04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:14.342   04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:14.342   04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:14.342   04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:14:14.342   04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:14:14.600   04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2
00:14:14.600   04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:14.600   04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:14.600   04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:14:14.600   04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:14:14.600   04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:14.600   04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:14.600   04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:14.600   04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:14.600   04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:14.600   04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:14.600   04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:14.600   04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:14.857  
00:14:15.114    04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:15.114    04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:15.114    04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:15.371   04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:15.371    04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:15.371    04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:15.371    04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:15.371    04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:15.371   04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:15.371  {
00:14:15.371  "cntlid": 5,
00:14:15.371  "qid": 0,
00:14:15.371  "state": "enabled",
00:14:15.371  "thread": "nvmf_tgt_poll_group_000",
00:14:15.371  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:14:15.371  "listen_address": {
00:14:15.371  "trtype": "TCP",
00:14:15.371  "adrfam": "IPv4",
00:14:15.371  "traddr": "10.0.0.2",
00:14:15.371  "trsvcid": "4420"
00:14:15.371  },
00:14:15.371  "peer_address": {
00:14:15.371  "trtype": "TCP",
00:14:15.371  "adrfam": "IPv4",
00:14:15.371  "traddr": "10.0.0.1",
00:14:15.371  "trsvcid": "52872"
00:14:15.371  },
00:14:15.371  "auth": {
00:14:15.371  "state": "completed",
00:14:15.371  "digest": "sha256",
00:14:15.371  "dhgroup": "null"
00:14:15.371  }
00:14:15.371  }
00:14:15.371  ]'
00:14:15.371    04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:15.371   04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:15.371    04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:15.371   04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:14:15.371    04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:15.371   04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:15.371   04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:15.371   04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:15.628   04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa:
00:14:15.628   04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa:
00:14:16.560   04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:16.560  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
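The `--dhchap-secret`/`--dhchap-ctrl-secret` strings passed to `nvme connect` above use the DHHC-1 container format: `DHHC-1:<hh>:<base64 payload>:`, where (per the NVMe authentication spec) the second field selects the hash used to transform the retained key (00 = no transform, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload is the key bytes followed by a 4-byte CRC-32. A quick structural check on the 02-class secret from the log:

```shell
# Structural check of a DHHC-1 secret taken verbatim from the log above.
secret='DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==:'

# strip the "DHHC-1:02:" prefix and the trailing ":" to isolate the payload
b64=${secret#DHHC-1:02:}
b64=${b64%:}

# decoded payload = key || crc32; this secret carries a 48-byte key + 4-byte CRC
nbytes=$(printf '%s' "$b64" | base64 -d | wc -c)
echo "decoded $nbytes bytes"   # 48 + 4 = 52
```

The `DHHC-1:00:` secret later in the log shows the indicator is independent of key length: 00 just means the key is used as-is, without a hash transform.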
00:14:16.560   04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:16.560   04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:16.560   04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:16.560   04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
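Each loop iteration above runs one full `connect_authenticate` cycle: constrain the host driver to a single digest/dhgroup pair, authorize the host NQN with one key, attach, verify the qpair's auth fields, then tear down. A hedged standalone sketch of that sequence, with paths, NQNs, and RPC flags copied from the log (defined as a function only, since it needs a live SPDK target and host `rpc.py` socket):

```shell
# Sketch of one connect_authenticate cycle as driven by the loop above.
connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    local hostsock=/var/tmp/host.sock
    local subnqn=nqn.2024-03.io.spdk:cnode0
    local hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    # restrict the host-side bdev_nvme driver to one digest/dhgroup pair
    "$rpc" -s "$hostsock" bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # authorize the host on the subsystem with the chosen key, then attach
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid"
    "$rpc" -s "$hostsock" bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key "key$keyid"

    # confirm the qpair completed authentication with the expected parameters
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn" |
        jq -e --arg d "$digest" --arg g "$dhgroup" \
            '.[0].auth | .state == "completed" and .digest == $d and .dhgroup == $g'

    # tear down: detach the controller and deauthorize the host
    "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
}
```

In the actual harness the cycle also includes the kernel-initiator leg (`nvme connect` / `nvme disconnect` with the DHHC-1 secrets), which this sketch omits.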
00:14:16.560   04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:16.560   04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:14:16.560   04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:14:16.817   04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3
00:14:16.817   04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:16.817   04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:16.817   04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:14:16.817   04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:14:16.817   04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:16.817   04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:14:16.817   04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:16.817   04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:16.817   04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:16.817   04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:14:16.817   04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:16.817   04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:17.074  
00:14:17.074    04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:17.074    04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:17.074    04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:17.331   04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:17.331    04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:17.331    04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:17.331    04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:17.331    04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:17.331   04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:17.331  {
00:14:17.331  "cntlid": 7,
00:14:17.331  "qid": 0,
00:14:17.331  "state": "enabled",
00:14:17.331  "thread": "nvmf_tgt_poll_group_000",
00:14:17.331  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:14:17.331  "listen_address": {
00:14:17.331  "trtype": "TCP",
00:14:17.331  "adrfam": "IPv4",
00:14:17.331  "traddr": "10.0.0.2",
00:14:17.331  "trsvcid": "4420"
00:14:17.331  },
00:14:17.331  "peer_address": {
00:14:17.331  "trtype": "TCP",
00:14:17.331  "adrfam": "IPv4",
00:14:17.331  "traddr": "10.0.0.1",
00:14:17.331  "trsvcid": "36186"
00:14:17.331  },
00:14:17.331  "auth": {
00:14:17.331  "state": "completed",
00:14:17.331  "digest": "sha256",
00:14:17.331  "dhgroup": "null"
00:14:17.331  }
00:14:17.331  }
00:14:17.331  ]'
00:14:17.331    04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:17.588   04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:17.588    04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:17.588   04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:14:17.588    04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:17.588   04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:17.588   04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:17.588   04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:17.846   04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:
00:14:17.846   04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:
00:14:18.780   04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:18.780  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:18.780   04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:18.780   04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.780   04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:18.780   04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.780   04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:14:18.780   04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:18.780   04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:14:18.780   04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:14:19.039   04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0
00:14:19.039   04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:19.039   04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:19.039   04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:14:19.039   04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:14:19.039   04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:19.039   04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:19.039   04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:19.039   04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:19.039   04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:19.039   04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:19.039   04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:19.039   04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:19.297  
00:14:19.297    04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:19.297    04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:19.297    04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:19.555   04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:19.555    04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:19.555    04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:19.555    04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:19.555    04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:19.555   04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:19.555  {
00:14:19.555  "cntlid": 9,
00:14:19.555  "qid": 0,
00:14:19.555  "state": "enabled",
00:14:19.555  "thread": "nvmf_tgt_poll_group_000",
00:14:19.555  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:14:19.555  "listen_address": {
00:14:19.555  "trtype": "TCP",
00:14:19.555  "adrfam": "IPv4",
00:14:19.555  "traddr": "10.0.0.2",
00:14:19.555  "trsvcid": "4420"
00:14:19.555  },
00:14:19.555  "peer_address": {
00:14:19.555  "trtype": "TCP",
00:14:19.555  "adrfam": "IPv4",
00:14:19.555  "traddr": "10.0.0.1",
00:14:19.555  "trsvcid": "36214"
00:14:19.555  },
00:14:19.555  "auth": {
00:14:19.555  "state": "completed",
00:14:19.555  "digest": "sha256",
00:14:19.555  "dhgroup": "ffdhe2048"
00:14:19.555  }
00:14:19.555  }
00:14:19.555  ]'
00:14:19.555    04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:19.555   04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:19.555    04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:19.814   04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:14:19.814    04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:19.814   04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:19.814   04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:19.814   04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:20.073   04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=:
00:14:20.073   04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=:
00:14:21.006   04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:21.006  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:21.006   04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:21.006   04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:21.006   04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:21.006   04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:21.006   04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:21.006   04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:14:21.006   04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:14:21.265   04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1
00:14:21.265   04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:21.265   04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:21.265   04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:14:21.265   04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:14:21.265   04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:21.265   04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:21.265   04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:21.265   04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:21.265   04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:21.265   04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:21.265   04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:21.265   04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:21.523  
00:14:21.523    04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:21.523    04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:21.524    04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:21.782   04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:21.782    04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:21.782    04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:21.782    04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:21.782    04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:21.782   04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:21.782  {
00:14:21.782  "cntlid": 11,
00:14:21.782  "qid": 0,
00:14:21.782  "state": "enabled",
00:14:21.782  "thread": "nvmf_tgt_poll_group_000",
00:14:21.782  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:14:21.782  "listen_address": {
00:14:21.782  "trtype": "TCP",
00:14:21.782  "adrfam": "IPv4",
00:14:21.782  "traddr": "10.0.0.2",
00:14:21.782  "trsvcid": "4420"
00:14:21.782  },
00:14:21.782  "peer_address": {
00:14:21.782  "trtype": "TCP",
00:14:21.782  "adrfam": "IPv4",
00:14:21.782  "traddr": "10.0.0.1",
00:14:21.782  "trsvcid": "36232"
00:14:21.782  },
00:14:21.782  "auth": {
00:14:21.782  "state": "completed",
00:14:21.782  "digest": "sha256",
00:14:21.782  "dhgroup": "ffdhe2048"
00:14:21.782  }
00:14:21.782  }
00:14:21.782  ]'
00:14:21.782    04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:21.782   04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:21.782    04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:21.782   04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:14:21.782    04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:22.041   04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:22.041   04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:22.041   04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:22.299   04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==:
00:14:22.299   04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==:
00:14:23.232   04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:23.232  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:23.232   04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:23.232   04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:23.232   04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:23.232   04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:23.232   04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:23.232   04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:14:23.232   04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:14:23.490   04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2
00:14:23.490   04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:23.490   04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:23.490   04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:14:23.490   04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:14:23.490   04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:23.490   04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:23.490   04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:23.490   04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:23.490   04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:23.490   04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:23.490   04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:23.490   04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:23.748  
00:14:23.748    04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:23.748    04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:23.748    04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:24.006   04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:24.006    04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:24.006    04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:24.006    04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:24.006    04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:24.006   04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:24.006  {
00:14:24.006  "cntlid": 13,
00:14:24.006  "qid": 0,
00:14:24.006  "state": "enabled",
00:14:24.006  "thread": "nvmf_tgt_poll_group_000",
00:14:24.006  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:14:24.006  "listen_address": {
00:14:24.006  "trtype": "TCP",
00:14:24.006  "adrfam": "IPv4",
00:14:24.006  "traddr": "10.0.0.2",
00:14:24.006  "trsvcid": "4420"
00:14:24.006  },
00:14:24.006  "peer_address": {
00:14:24.006  "trtype": "TCP",
00:14:24.006  "adrfam": "IPv4",
00:14:24.006  "traddr": "10.0.0.1",
00:14:24.006  "trsvcid": "36272"
00:14:24.006  },
00:14:24.006  "auth": {
00:14:24.006  "state": "completed",
00:14:24.006  "digest": "sha256",
00:14:24.006  "dhgroup": "ffdhe2048"
00:14:24.006  }
00:14:24.006  }
00:14:24.006  ]'
00:14:24.006    04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:24.006   04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:24.006    04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:24.006   04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:14:24.006    04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:24.263   04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:24.263   04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:24.263   04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:24.521   04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa:
00:14:24.521   04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa:
00:14:25.453   04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:25.453  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:25.453   04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:25.453   04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:25.453   04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:25.453   04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:25.453   04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:25.453   04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:14:25.453   04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:14:25.453   04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3
00:14:25.453   04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:25.453   04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:25.453   04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:14:25.711   04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:14:25.711   04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:25.711   04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:14:25.711   04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:25.711   04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:25.711   04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:25.711   04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:14:25.711   04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:25.711   04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:25.968  
00:14:25.968    04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:25.968    04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:25.968    04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:26.225   04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:26.225    04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:26.225    04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:26.225    04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:26.225    04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:26.225   04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:26.225  {
00:14:26.225  "cntlid": 15,
00:14:26.225  "qid": 0,
00:14:26.225  "state": "enabled",
00:14:26.225  "thread": "nvmf_tgt_poll_group_000",
00:14:26.225  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:14:26.225  "listen_address": {
00:14:26.225  "trtype": "TCP",
00:14:26.225  "adrfam": "IPv4",
00:14:26.225  "traddr": "10.0.0.2",
00:14:26.225  "trsvcid": "4420"
00:14:26.225  },
00:14:26.225  "peer_address": {
00:14:26.225  "trtype": "TCP",
00:14:26.225  "adrfam": "IPv4",
00:14:26.225  "traddr": "10.0.0.1",
00:14:26.225  "trsvcid": "36292"
00:14:26.225  },
00:14:26.225  "auth": {
00:14:26.225  "state": "completed",
00:14:26.225  "digest": "sha256",
00:14:26.225  "dhgroup": "ffdhe2048"
00:14:26.225  }
00:14:26.225  }
00:14:26.225  ]'
00:14:26.225    04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:26.225   04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:26.225    04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:26.225   04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:14:26.225    04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:26.225   04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:26.225   04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:26.225   04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:26.481   04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:
00:14:26.481   04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:
00:14:27.409   04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:27.410  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:27.410   04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:27.410   04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:27.410   04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:27.410   04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
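The cycle that just completed (set options, add host, attach, verify, detach, `nvme connect`/`disconnect`, remove host) repeats for every DH group and key id. A hypothetical dry-run sketch of that control flow, with the RPC helpers stubbed to `echo` so nothing real is invoked (the group/key lists here are a subset chosen for illustration):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nested loop traced above (target/auth.sh@119-123).
# hostrpc/connect_authenticate are stubs; the real script drives rpc.py
# against /var/tmp/host.sock and the nvme CLI.
set -euo pipefail

dhgroups=("ffdhe2048" "ffdhe3072")   # subset; the real run covers more groups
keys=("key0" "key1" "key2" "key3")

hostrpc() { echo "hostrpc $*"; }                    # stub
connect_authenticate() { echo "connect_authenticate $*"; }  # stub

calls=0
for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        # Restrict the host to one digest/group, then run a full auth cycle.
        hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha256 "$dhgroup" "$keyid"
        calls=$((calls + 1))
    done
done
echo "cycles: $calls"
```

With 2 groups and 4 keys the stubbed loop runs 8 cycles, which matches the pattern of repeated `connect_authenticate sha256 <group> <keyid>` calls visible in the trace.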
00:14:27.410   04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:14:27.410   04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:27.410   04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:14:27.410   04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:14:27.666   04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0
00:14:27.666   04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:27.666   04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:27.666   04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:14:27.666   04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:14:27.666   04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:27.666   04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:27.666   04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:27.666   04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:27.666   04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:27.666   04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:27.666   04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:27.666   04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:28.229  
00:14:28.229    04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:28.229    04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:28.229    04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:28.487   04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:28.487    04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:28.487    04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:28.487    04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:28.487    04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:28.487   04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:28.487  {
00:14:28.487  "cntlid": 17,
00:14:28.487  "qid": 0,
00:14:28.487  "state": "enabled",
00:14:28.487  "thread": "nvmf_tgt_poll_group_000",
00:14:28.487  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:14:28.487  "listen_address": {
00:14:28.487  "trtype": "TCP",
00:14:28.487  "adrfam": "IPv4",
00:14:28.487  "traddr": "10.0.0.2",
00:14:28.487  "trsvcid": "4420"
00:14:28.487  },
00:14:28.487  "peer_address": {
00:14:28.487  "trtype": "TCP",
00:14:28.487  "adrfam": "IPv4",
00:14:28.487  "traddr": "10.0.0.1",
00:14:28.487  "trsvcid": "52272"
00:14:28.487  },
00:14:28.487  "auth": {
00:14:28.487  "state": "completed",
00:14:28.487  "digest": "sha256",
00:14:28.487  "dhgroup": "ffdhe3072"
00:14:28.487  }
00:14:28.487  }
00:14:28.487  ]'
00:14:28.487    04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:28.487   04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:28.487    04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:28.487   04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:14:28.487    04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:28.487   04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:28.487   04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:28.487   04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:28.750   04:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=:
00:14:28.750   04:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=:
00:14:29.681   04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:29.682  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:29.682   04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:29.682   04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:29.682   04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:29.682   04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
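Note how the key0/key1/key2 cycles pass `--dhchap-ctrlr-key ckeyN` to `nvmf_subsystem_add_host` while the key3 cycles do not. That is driven by the `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` expansion traced at `auth.sh@68`. A minimal standalone sketch of that idiom (the key material is fabricated for illustration):

```shell
#!/usr/bin/env bash
# The ${var:+word} expansion yields "word" only when var is set and non-empty,
# so ckey becomes a two-element array (flag + value) or collapses to nothing.
ckeys=([0]="DHHC-1:03:deadbeef" [1]="DHHC-1:03:cafef00d" [2]="DHHC-1:01:abad1dea")
# ckeys[3] is deliberately left unset, mirroring the key3 cycles in the log.

build_ckey() {
    local keyid=$1
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
}

build_ckey 0
with_ckey=${#ckey[@]}     # 2: "--dhchap-ctrlr-key" "ckey0"
build_ckey 3
without_ckey=${#ckey[@]}  # 0: expansion produced no words
echo "$with_ckey $without_ckey"
```

Expanding `"${ckey[@]}"` into the RPC invocation then adds the controller-key flag only for keys that have a paired ctrl secret, without any explicit `if`.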
00:14:29.682   04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:29.682   04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:14:29.682   04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:14:29.939   04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1
00:14:29.939   04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:29.939   04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:29.940   04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:14:29.940   04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:14:29.940   04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:29.940   04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:29.940   04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:29.940   04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:29.940   04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:29.940   04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:29.940   04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:29.940   04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:30.505  
00:14:30.505    04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:30.505    04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:30.505    04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:30.762   04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:30.762    04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:30.762    04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:30.762    04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:30.762    04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:30.762   04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:30.762  {
00:14:30.762  "cntlid": 19,
00:14:30.762  "qid": 0,
00:14:30.762  "state": "enabled",
00:14:30.762  "thread": "nvmf_tgt_poll_group_000",
00:14:30.762  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:14:30.762  "listen_address": {
00:14:30.762  "trtype": "TCP",
00:14:30.762  "adrfam": "IPv4",
00:14:30.762  "traddr": "10.0.0.2",
00:14:30.762  "trsvcid": "4420"
00:14:30.762  },
00:14:30.762  "peer_address": {
00:14:30.762  "trtype": "TCP",
00:14:30.763  "adrfam": "IPv4",
00:14:30.763  "traddr": "10.0.0.1",
00:14:30.763  "trsvcid": "52308"
00:14:30.763  },
00:14:30.763  "auth": {
00:14:30.763  "state": "completed",
00:14:30.763  "digest": "sha256",
00:14:30.763  "dhgroup": "ffdhe3072"
00:14:30.763  }
00:14:30.763  }
00:14:30.763  ]'
00:14:30.763    04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:30.763   04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:30.763    04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:30.763   04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:14:30.763    04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:30.763   04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:30.763   04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:30.763   04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:31.020   04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==:
00:14:31.020   04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==:
00:14:31.954   04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:31.954  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:31.954   04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:31.954   04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:31.954   04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:31.954   04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:31.954   04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:31.954   04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:14:31.954   04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:14:32.213   04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2
00:14:32.213   04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:32.213   04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:32.213   04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:14:32.213   04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:14:32.213   04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:32.213   04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:32.213   04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:32.213   04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:32.213   04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:32.213   04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:32.213   04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:32.213   04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:32.778  
00:14:32.778    04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:32.778    04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:32.778    04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:33.034   04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:33.034    04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:33.034    04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:33.034    04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:33.034    04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:33.034   04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:33.034  {
00:14:33.034  "cntlid": 21,
00:14:33.034  "qid": 0,
00:14:33.034  "state": "enabled",
00:14:33.034  "thread": "nvmf_tgt_poll_group_000",
00:14:33.034  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:14:33.034  "listen_address": {
00:14:33.034  "trtype": "TCP",
00:14:33.034  "adrfam": "IPv4",
00:14:33.034  "traddr": "10.0.0.2",
00:14:33.034  "trsvcid": "4420"
00:14:33.035  },
00:14:33.035  "peer_address": {
00:14:33.035  "trtype": "TCP",
00:14:33.035  "adrfam": "IPv4",
00:14:33.035  "traddr": "10.0.0.1",
00:14:33.035  "trsvcid": "52340"
00:14:33.035  },
00:14:33.035  "auth": {
00:14:33.035  "state": "completed",
00:14:33.035  "digest": "sha256",
00:14:33.035  "dhgroup": "ffdhe3072"
00:14:33.035  }
00:14:33.035  }
00:14:33.035  ]'
00:14:33.035    04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:33.035   04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:33.035    04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:33.035   04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:14:33.035    04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:33.035   04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:33.035   04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:33.035   04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:33.291   04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa:
00:14:33.291   04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa:
00:14:34.221   04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:34.221  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:34.221   04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:34.221   04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:34.221   04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:34.221   04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:34.221   04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:34.221   04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:14:34.221   04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:14:34.478   04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3
00:14:34.478   04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:34.478   04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:34.478   04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:14:34.478   04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:14:34.478   04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:34.478   04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:14:34.478   04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:34.478   04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:34.478   04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:34.478   04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:14:34.478   04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:34.478   04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:35.053  
00:14:35.053    04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:35.053    04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:35.053    04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:35.310   04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:35.310    04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:35.310    04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:35.310    04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:35.310    04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:35.310   04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:35.310  {
00:14:35.310  "cntlid": 23,
00:14:35.310  "qid": 0,
00:14:35.310  "state": "enabled",
00:14:35.310  "thread": "nvmf_tgt_poll_group_000",
00:14:35.310  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:14:35.310  "listen_address": {
00:14:35.310  "trtype": "TCP",
00:14:35.310  "adrfam": "IPv4",
00:14:35.310  "traddr": "10.0.0.2",
00:14:35.310  "trsvcid": "4420"
00:14:35.310  },
00:14:35.310  "peer_address": {
00:14:35.310  "trtype": "TCP",
00:14:35.310  "adrfam": "IPv4",
00:14:35.310  "traddr": "10.0.0.1",
00:14:35.310  "trsvcid": "52364"
00:14:35.310  },
00:14:35.310  "auth": {
00:14:35.310  "state": "completed",
00:14:35.310  "digest": "sha256",
00:14:35.310  "dhgroup": "ffdhe3072"
00:14:35.310  }
00:14:35.310  }
00:14:35.310  ]'
00:14:35.310    04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:35.310   04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:35.310    04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:35.310   04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:14:35.310    04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:35.310   04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
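The three `jq`/`[[ … ]]` checks above (auth.sh lines 75-77) verify that the negotiated qpair actually completed DH-HMAC-CHAP with the expected digest and dhgroup. A minimal standalone sketch of that verification step, using a sample JSON payload copied from the log output (requires `jq`; the variable names mirror `target/auth.sh` but this is an illustration, not the test script itself):

```shell
#!/bin/sh
# Sample qpair listing, abridged from the nvmf_subsystem_get_qpairs output above.
qpairs='[{"qid":0,"state":"enabled","auth":{"state":"completed","digest":"sha256","dhgroup":"ffdhe3072"}}]'

# Extract the auth fields the test asserts on.
digest=$(printf '%s' "$qpairs" | jq -r '.[0].auth.digest')
dhgroup=$(printf '%s' "$qpairs" | jq -r '.[0].auth.dhgroup')
state=$(printf '%s' "$qpairs" | jq -r '.[0].auth.state')

# Fail loudly if the handshake did not complete with the configured parameters.
[ "$digest" = "sha256" ] || exit 1
[ "$dhgroup" = "ffdhe3072" ] || exit 1
[ "$state" = "completed" ] || exit 1
echo "auth verified: $digest/$dhgroup"
```

If any check fails the script exits non-zero, which is how the real test surfaces an authentication mismatch.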
00:14:35.310   04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:35.310   04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:35.567   04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:
00:14:35.567   04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:
00:14:36.497   04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:36.497  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:36.497   04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:36.497   04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:36.497   04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:36.497   04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
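At this point one full cycle ends and the `for dhgroup` / `for keyid` loops (auth.sh lines 119-121) advance to the next combination. A dry-run sketch of that outer loop structure; the `dhgroups` and key-id lists are assumptions modeled on the iterations visible in this log (ffdhe3072 → ffdhe4096 → ffdhe6144, keys 0-3), and the `echo` lines stand in for the real `hostrpc`/`connect_authenticate` calls:

```shell
#!/bin/sh
# Dry-run of the nested test loop: for each DH group, re-arm the host
# bdev_nvme options, then run one connect/verify/disconnect cycle per key id.
dhgroups="ffdhe3072 ffdhe4096 ffdhe6144"
keyids="0 1 2 3"

for dhgroup in $dhgroups; do
    for keyid in $keyids; do
        echo "would run: bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups $dhgroup"
        echo "would run: connect_authenticate sha256 $dhgroup $keyid"
    done
done
```

Each inner iteration corresponds to one add_host → attach_controller → get_qpairs → detach → nvme connect/disconnect → remove_host sequence like the ones traced above.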
00:14:36.497   04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:14:36.497   04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:36.497   04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:14:36.497   04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:14:36.758   04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0
00:14:36.758   04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:36.758   04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:36.758   04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:14:36.758   04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:14:36.758   04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:36.758   04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:36.759   04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:36.759   04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:36.759   04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:36.759   04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:36.759   04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:36.759   04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:37.324  
00:14:37.324    04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:37.324    04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:37.324    04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:37.582   04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:37.582    04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:37.582    04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:37.582    04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:37.582    04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:37.582   04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:37.582  {
00:14:37.582  "cntlid": 25,
00:14:37.582  "qid": 0,
00:14:37.582  "state": "enabled",
00:14:37.582  "thread": "nvmf_tgt_poll_group_000",
00:14:37.582  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:14:37.582  "listen_address": {
00:14:37.582  "trtype": "TCP",
00:14:37.582  "adrfam": "IPv4",
00:14:37.582  "traddr": "10.0.0.2",
00:14:37.582  "trsvcid": "4420"
00:14:37.582  },
00:14:37.582  "peer_address": {
00:14:37.582  "trtype": "TCP",
00:14:37.582  "adrfam": "IPv4",
00:14:37.582  "traddr": "10.0.0.1",
00:14:37.582  "trsvcid": "58030"
00:14:37.582  },
00:14:37.582  "auth": {
00:14:37.582  "state": "completed",
00:14:37.582  "digest": "sha256",
00:14:37.582  "dhgroup": "ffdhe4096"
00:14:37.582  }
00:14:37.582  }
00:14:37.582  ]'
00:14:37.582    04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:37.582   04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:37.582    04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:37.582   04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:14:37.582    04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:37.582   04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:37.582   04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:37.582   04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:37.840   04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=:
00:14:37.840   04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=:
00:14:38.773   04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:38.773  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:38.773   04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:38.773   04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:38.773   04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:38.773   04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
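The `--dhchap-secret` strings passed to `nvme connect` above are DH-HMAC-CHAP secrets of the form `DHHC-1:<id>:<base64 key material>:`, where the two-digit field selects the hash used to transform the key (interpreting the field this way follows the NVMe in-band authentication format; treat it as an assumption, not something this log states). A small sketch splitting a secret copied from the log into its parts:

```shell
#!/bin/sh
# Split a DH-HMAC-CHAP secret into its colon-separated fields.
# The secret value is copied verbatim from the nvme connect line above.
secret='DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:'

prefix=$(printf '%s' "$secret" | cut -d: -f1)   # always "DHHC-1"
hmac_id=$(printf '%s' "$secret" | cut -d: -f2)  # hash selector, e.g. "03"
key_b64=$(printf '%s' "$secret" | cut -d: -f3)  # base64-encoded key material

echo "prefix=$prefix hmac=$hmac_id key_b64_len=${#key_b64}"
```

Secrets in this log use ids 00 through 03, which lines up with the four generated keys being of different strengths.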
00:14:38.773   04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:38.773   04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:14:38.773   04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:14:39.031   04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1
00:14:39.031   04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:39.031   04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:39.031   04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:14:39.031   04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:14:39.031   04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:39.031   04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:39.031   04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:39.031   04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:39.031   04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:39.031   04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:39.031   04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:39.031   04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:39.596  
00:14:39.596    04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:39.596    04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:39.596    04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:39.854   04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:39.855    04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:39.855    04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:39.855    04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:39.855    04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:39.855   04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:39.855  {
00:14:39.855  "cntlid": 27,
00:14:39.855  "qid": 0,
00:14:39.855  "state": "enabled",
00:14:39.855  "thread": "nvmf_tgt_poll_group_000",
00:14:39.855  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:14:39.855  "listen_address": {
00:14:39.855  "trtype": "TCP",
00:14:39.855  "adrfam": "IPv4",
00:14:39.855  "traddr": "10.0.0.2",
00:14:39.855  "trsvcid": "4420"
00:14:39.855  },
00:14:39.855  "peer_address": {
00:14:39.855  "trtype": "TCP",
00:14:39.855  "adrfam": "IPv4",
00:14:39.855  "traddr": "10.0.0.1",
00:14:39.855  "trsvcid": "58056"
00:14:39.855  },
00:14:39.855  "auth": {
00:14:39.855  "state": "completed",
00:14:39.855  "digest": "sha256",
00:14:39.855  "dhgroup": "ffdhe4096"
00:14:39.855  }
00:14:39.855  }
00:14:39.855  ]'
00:14:39.855    04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:39.855   04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:39.855    04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:39.855   04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:14:39.855    04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:39.855   04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:39.855   04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:39.855   04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:40.113   04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==:
00:14:40.113   04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==:
00:14:41.046   04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:41.046  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:41.046   04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:41.047   04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:41.047   04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:41.047   04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:41.047   04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:41.047   04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:14:41.047   04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:14:41.304   04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2
00:14:41.304   04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:41.304   04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:41.304   04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:14:41.304   04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:14:41.304   04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:41.304   04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:41.304   04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:41.304   04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:41.304   04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:41.304   04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:41.304   04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:41.304   04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:41.870  
00:14:41.871    04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:41.871    04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:41.871    04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:42.129   04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:42.129    04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:42.129    04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:42.129    04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:42.129    04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:42.129   04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:42.129  {
00:14:42.129  "cntlid": 29,
00:14:42.129  "qid": 0,
00:14:42.129  "state": "enabled",
00:14:42.129  "thread": "nvmf_tgt_poll_group_000",
00:14:42.129  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:14:42.129  "listen_address": {
00:14:42.129  "trtype": "TCP",
00:14:42.129  "adrfam": "IPv4",
00:14:42.129  "traddr": "10.0.0.2",
00:14:42.129  "trsvcid": "4420"
00:14:42.129  },
00:14:42.129  "peer_address": {
00:14:42.129  "trtype": "TCP",
00:14:42.129  "adrfam": "IPv4",
00:14:42.129  "traddr": "10.0.0.1",
00:14:42.129  "trsvcid": "58090"
00:14:42.129  },
00:14:42.129  "auth": {
00:14:42.129  "state": "completed",
00:14:42.129  "digest": "sha256",
00:14:42.129  "dhgroup": "ffdhe4096"
00:14:42.129  }
00:14:42.129  }
00:14:42.129  ]'
00:14:42.129    04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:42.129   04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:42.129    04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:42.129   04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:14:42.129    04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:42.129   04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:42.129   04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:42.129   04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:42.386   04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa:
00:14:42.386   04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa:
00:14:43.317   04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:43.317  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:43.317   04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:43.317   04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:43.317   04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:43.317   04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:43.317   04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:43.317   04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:14:43.317   04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:14:43.574   04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3
00:14:43.574   04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:43.574   04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:43.574   04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:14:43.574   04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:14:43.574   04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:43.574   04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:14:43.574   04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:43.574   04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:43.574   04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:43.574   04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:14:43.574   04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:43.574   04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:44.139  
00:14:44.139    04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:44.139    04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:44.139    04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:44.395   04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:44.395    04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:44.395    04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:44.395    04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:44.395    04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:44.395   04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:44.395  {
00:14:44.395  "cntlid": 31,
00:14:44.395  "qid": 0,
00:14:44.395  "state": "enabled",
00:14:44.395  "thread": "nvmf_tgt_poll_group_000",
00:14:44.395  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:14:44.395  "listen_address": {
00:14:44.395  "trtype": "TCP",
00:14:44.395  "adrfam": "IPv4",
00:14:44.395  "traddr": "10.0.0.2",
00:14:44.395  "trsvcid": "4420"
00:14:44.395  },
00:14:44.395  "peer_address": {
00:14:44.395  "trtype": "TCP",
00:14:44.395  "adrfam": "IPv4",
00:14:44.395  "traddr": "10.0.0.1",
00:14:44.395  "trsvcid": "58100"
00:14:44.395  },
00:14:44.395  "auth": {
00:14:44.395  "state": "completed",
00:14:44.395  "digest": "sha256",
00:14:44.395  "dhgroup": "ffdhe4096"
00:14:44.395  }
00:14:44.395  }
00:14:44.395  ]'
00:14:44.395    04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:44.395   04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:44.395    04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:44.395   04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:14:44.395    04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:44.395   04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:44.395   04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:44.395   04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:44.958   04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:
00:14:44.958   04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:
00:14:45.521   04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:45.777  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:45.777   04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:45.777   04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:45.777   04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:45.777   04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
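[editor's note] The per-iteration verification traced above (target/auth.sh@75-77: `jq` checks on `.auth.digest`, `.auth.dhgroup`, and `.auth.state` from the `nvmf_subsystem_get_qpairs` output) can be sketched as a small standalone checker. This is a minimal sketch assuming qpairs JSON shaped like the rpc output in this log; the sample payload below is illustrative, not copied from a live target:

```python
import json

def check_auth(qpairs_json: str, digest: str, dhgroup: str) -> bool:
    """Mirror target/auth.sh@75-77: verify the first qpair's negotiated
    DH-HMAC-CHAP parameters match what the test configured."""
    qpairs = json.loads(qpairs_json)
    auth = qpairs[0]["auth"]
    return (auth["digest"] == digest
            and auth["dhgroup"] == dhgroup
            and auth["state"] == "completed")

# Illustrative payload shaped like the nvmf_subsystem_get_qpairs output above.
sample = '''[
  {
    "cntlid": 33,
    "qid": 0,
    "state": "enabled",
    "auth": {"state": "completed", "digest": "sha256", "dhgroup": "ffdhe6144"}
  }
]'''

print(check_auth(sample, "sha256", "ffdhe6144"))  # True
```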
00:14:45.777   04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:14:45.777   04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:45.777   04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:14:45.777   04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:14:46.034   04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0
00:14:46.034   04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:46.034   04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:46.034   04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:14:46.034   04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:14:46.034   04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:46.034   04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:46.034   04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:46.034   04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:46.034   04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:46.034   04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:46.034   04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:46.034   04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:46.599  
00:14:46.599    04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:46.599    04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:46.599    04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:46.857   04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:46.857    04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:46.857    04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:46.857    04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:46.857    04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:46.857   04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:46.857    {
00:14:46.857      "cntlid": 33,
00:14:46.857      "qid": 0,
00:14:46.857      "state": "enabled",
00:14:46.857      "thread": "nvmf_tgt_poll_group_000",
00:14:46.857      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:14:46.857      "listen_address": {
00:14:46.857        "trtype": "TCP",
00:14:46.857        "adrfam": "IPv4",
00:14:46.857        "traddr": "10.0.0.2",
00:14:46.857        "trsvcid": "4420"
00:14:46.857      },
00:14:46.857      "peer_address": {
00:14:46.857        "trtype": "TCP",
00:14:46.857        "adrfam": "IPv4",
00:14:46.857        "traddr": "10.0.0.1",
00:14:46.857        "trsvcid": "58130"
00:14:46.857      },
00:14:46.857      "auth": {
00:14:46.857        "state": "completed",
00:14:46.857        "digest": "sha256",
00:14:46.857        "dhgroup": "ffdhe6144"
00:14:46.857      }
00:14:46.857    }
00:14:46.857  ]'
00:14:46.857    04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:46.857   04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:46.857    04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:46.857   04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:14:46.857    04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:46.857   04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:46.857   04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:46.857   04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:47.115   04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=:
00:14:47.115   04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=:
00:14:48.045   04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:48.045  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:48.045   04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:48.045   04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:48.045   04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:48.045   04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:48.045   04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:48.045   04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:14:48.045   04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:14:48.304   04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1
00:14:48.304   04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:48.304   04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:48.304   04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:14:48.304   04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:14:48.304   04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:48.304   04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:48.304   04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:48.304   04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:48.304   04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:48.304   04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:48.304   04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:48.304   04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:48.869  
00:14:48.869    04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:48.869    04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:48.869    04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:49.434   04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:49.434    04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:49.434    04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:49.434    04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:49.434    04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:49.434   04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:49.434    {
00:14:49.434      "cntlid": 35,
00:14:49.434      "qid": 0,
00:14:49.434      "state": "enabled",
00:14:49.434      "thread": "nvmf_tgt_poll_group_000",
00:14:49.434      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:14:49.434      "listen_address": {
00:14:49.435        "trtype": "TCP",
00:14:49.435        "adrfam": "IPv4",
00:14:49.435        "traddr": "10.0.0.2",
00:14:49.435        "trsvcid": "4420"
00:14:49.435      },
00:14:49.435      "peer_address": {
00:14:49.435        "trtype": "TCP",
00:14:49.435        "adrfam": "IPv4",
00:14:49.435        "traddr": "10.0.0.1",
00:14:49.435        "trsvcid": "59210"
00:14:49.435      },
00:14:49.435      "auth": {
00:14:49.435        "state": "completed",
00:14:49.435        "digest": "sha256",
00:14:49.435        "dhgroup": "ffdhe6144"
00:14:49.435      }
00:14:49.435    }
00:14:49.435  ]'
00:14:49.435    04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:49.435   04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:49.435    04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:49.435   04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:14:49.435    04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:49.435   04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:49.435   04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:49.435   04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:49.692   04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==:
00:14:49.692   04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==:
00:14:50.625   04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:50.625  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:50.625   04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:50.625   04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:50.625   04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:50.625   04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:50.625   04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:50.625   04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:14:50.625   04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:14:50.883   04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2
00:14:50.883   04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:50.883   04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:50.883   04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:14:50.883   04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:14:50.883   04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:50.883   04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:50.883   04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:50.883   04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:50.883   04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:50.883   04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:50.883   04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:50.883   04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:51.448  
00:14:51.448    04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:51.448    04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:51.448    04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:51.706   04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:51.706    04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:51.706    04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:51.706    04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:51.706    04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:51.706   04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:51.706    {
00:14:51.706      "cntlid": 37,
00:14:51.706      "qid": 0,
00:14:51.706      "state": "enabled",
00:14:51.706      "thread": "nvmf_tgt_poll_group_000",
00:14:51.706      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:14:51.706      "listen_address": {
00:14:51.706        "trtype": "TCP",
00:14:51.706        "adrfam": "IPv4",
00:14:51.706        "traddr": "10.0.0.2",
00:14:51.706        "trsvcid": "4420"
00:14:51.706      },
00:14:51.706      "peer_address": {
00:14:51.706        "trtype": "TCP",
00:14:51.706        "adrfam": "IPv4",
00:14:51.706        "traddr": "10.0.0.1",
00:14:51.706        "trsvcid": "59244"
00:14:51.706      },
00:14:51.706      "auth": {
00:14:51.706        "state": "completed",
00:14:51.706        "digest": "sha256",
00:14:51.706        "dhgroup": "ffdhe6144"
00:14:51.706      }
00:14:51.706    }
00:14:51.706  ]'
00:14:51.706    04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:51.706   04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:51.706    04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:51.706   04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:14:51.706    04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:51.963   04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:51.963   04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:51.963   04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:52.221   04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa:
00:14:52.221   04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa:
00:14:53.170   04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:53.170  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:53.170   04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:53.170   04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:53.170   04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:53.170   04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:53.170   04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:53.170   04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:14:53.170   04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:14:53.170   04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3
00:14:53.170   04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:53.170   04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:53.170   04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:14:53.170   04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:14:53.170   04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:53.170   04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:14:53.170   04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:53.170   04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:53.170   04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:53.170   04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:14:53.170   04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:53.170   04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:53.734  
00:14:53.734    04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:53.734    04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:53.734    04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:53.991   04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:53.991    04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:53.991    04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:53.992    04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:54.248    04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:54.248   04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:54.248    {
00:14:54.248      "cntlid": 39,
00:14:54.248      "qid": 0,
00:14:54.248      "state": "enabled",
00:14:54.248      "thread": "nvmf_tgt_poll_group_000",
00:14:54.248      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:14:54.248      "listen_address": {
00:14:54.248        "trtype": "TCP",
00:14:54.248        "adrfam": "IPv4",
00:14:54.248        "traddr": "10.0.0.2",
00:14:54.248        "trsvcid": "4420"
00:14:54.248      },
00:14:54.248      "peer_address": {
00:14:54.248        "trtype": "TCP",
00:14:54.248        "adrfam": "IPv4",
00:14:54.248        "traddr": "10.0.0.1",
00:14:54.248        "trsvcid": "59266"
00:14:54.248      },
00:14:54.248      "auth": {
00:14:54.248        "state": "completed",
00:14:54.248        "digest": "sha256",
00:14:54.248        "dhgroup": "ffdhe6144"
00:14:54.248      }
00:14:54.248    }
00:14:54.248  ]'
00:14:54.248    04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:54.248   04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:54.248    04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:54.248   04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:14:54.248    04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:54.248   04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:54.248   04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:54.248   04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:54.505   04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:
00:14:54.505   04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:
00:14:55.437   04:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:55.437  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:55.437   04:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:55.437   04:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:55.437   04:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:55.437   04:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
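[editor's note] The driver loop traced at target/auth.sh@119-121 walks every (dhgroup, keyid) pair for a fixed digest, re-running the connect/verify/disconnect cycle each time. A hedged sketch of that iteration order; the dhgroup list and key ids below are illustrative (this log only shows ffdhe4096, ffdhe6144, and ffdhe8192 rounds), and the sketch only enumerates combinations, it does not talk to a target:

```python
from itertools import product

# Assumption: mirrors the bash loops `for dhgroup in "${dhgroups[@]}"` /
# `for keyid in "${!keys[@]}"` traced in this log; names are illustrative.
digests = ["sha256"]
dhgroups = ["ffdhe2048", "ffdhe3072", "ffdhe4096", "ffdhe6144", "ffdhe8192"]
keyids = [0, 1, 2, 3]

# One connect_authenticate(digest, dhgroup, keyid) call per combination.
combos = [(d, g, k) for d, g, k in product(digests, dhgroups, keyids)]
print(len(combos))  # 20
```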
00:14:55.437   04:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:14:55.437   04:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:55.437   04:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:14:55.437   04:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:14:55.694   04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0
00:14:55.694   04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:55.694   04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:55.694   04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:14:55.694   04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:14:55.694   04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:55.695   04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:55.695   04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:55.695   04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:55.695   04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:55.695   04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:55.695   04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:55.695   04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:56.635  
00:14:56.635    04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:56.635    04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:56.636    04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:56.893   04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:56.893    04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:56.893    04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:56.893    04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:56.893    04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:56.893   04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:56.893  {
00:14:56.893  "cntlid": 41,
00:14:56.893  "qid": 0,
00:14:56.893  "state": "enabled",
00:14:56.893  "thread": "nvmf_tgt_poll_group_000",
00:14:56.893  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:14:56.893  "listen_address": {
00:14:56.893  "trtype": "TCP",
00:14:56.893  "adrfam": "IPv4",
00:14:56.893  "traddr": "10.0.0.2",
00:14:56.893  "trsvcid": "4420"
00:14:56.893  },
00:14:56.893  "peer_address": {
00:14:56.893  "trtype": "TCP",
00:14:56.893  "adrfam": "IPv4",
00:14:56.893  "traddr": "10.0.0.1",
00:14:56.893  "trsvcid": "59296"
00:14:56.893  },
00:14:56.893  "auth": {
00:14:56.893  "state": "completed",
00:14:56.893  "digest": "sha256",
00:14:56.893  "dhgroup": "ffdhe8192"
00:14:56.893  }
00:14:56.893  }
00:14:56.893  ]'
00:14:56.893    04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:56.893   04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:56.893    04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:56.893   04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:14:56.893    04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:56.893   04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:56.893   04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:56.893   04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:57.151   04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=:
00:14:57.151   04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=:
00:14:58.084   04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:58.084  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:58.084   04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:58.084   04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:58.084   04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:58.084   04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:58.084   04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:58.084   04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:14:58.084   04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:14:58.650   04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1
00:14:58.650   04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:58.650   04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:58.650   04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:14:58.650   04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:14:58.650   04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:58.650   04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:58.650   04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:58.650   04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:58.650   04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:58.650   04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:58.650   04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:58.650   04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:59.215  
00:14:59.215    04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:59.215    04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:59.215    04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:59.472   04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:59.472    04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:59.472    04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:59.472    04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:59.472    04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:59.472   04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:59.472  {
00:14:59.472  "cntlid": 43,
00:14:59.472  "qid": 0,
00:14:59.472  "state": "enabled",
00:14:59.472  "thread": "nvmf_tgt_poll_group_000",
00:14:59.472  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:14:59.472  "listen_address": {
00:14:59.472  "trtype": "TCP",
00:14:59.472  "adrfam": "IPv4",
00:14:59.472  "traddr": "10.0.0.2",
00:14:59.472  "trsvcid": "4420"
00:14:59.472  },
00:14:59.473  "peer_address": {
00:14:59.473  "trtype": "TCP",
00:14:59.473  "adrfam": "IPv4",
00:14:59.473  "traddr": "10.0.0.1",
00:14:59.473  "trsvcid": "57598"
00:14:59.473  },
00:14:59.473  "auth": {
00:14:59.473  "state": "completed",
00:14:59.473  "digest": "sha256",
00:14:59.473  "dhgroup": "ffdhe8192"
00:14:59.473  }
00:14:59.473  }
00:14:59.473  ]'
00:14:59.473    04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:59.730   04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:59.730    04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:59.730   04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:14:59.730    04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:59.730   04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:59.730   04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:59.730   04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:59.988   04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==:
00:14:59.988   04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==:
00:15:00.920   04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:00.920  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:00.920   04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:00.920   04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:00.920   04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:00.920   04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:00.920   04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:00.920   04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:15:00.920   04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:15:01.178   04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2
00:15:01.178   04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:01.178   04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:01.178   04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:15:01.178   04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:15:01.178   04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:01.178   04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:01.178   04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:01.178   04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:01.178   04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:01.178   04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:01.178   04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:01.178   04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:02.117  
00:15:02.117    04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:02.117    04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:02.117    04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:02.374   04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:02.374    04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:02.374    04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:02.374    04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:02.374    04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:02.374   04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:02.374  {
00:15:02.374  "cntlid": 45,
00:15:02.374  "qid": 0,
00:15:02.374  "state": "enabled",
00:15:02.374  "thread": "nvmf_tgt_poll_group_000",
00:15:02.374  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:02.374  "listen_address": {
00:15:02.374  "trtype": "TCP",
00:15:02.374  "adrfam": "IPv4",
00:15:02.374  "traddr": "10.0.0.2",
00:15:02.374  "trsvcid": "4420"
00:15:02.374  },
00:15:02.374  "peer_address": {
00:15:02.374  "trtype": "TCP",
00:15:02.374  "adrfam": "IPv4",
00:15:02.374  "traddr": "10.0.0.1",
00:15:02.374  "trsvcid": "57614"
00:15:02.374  },
00:15:02.374  "auth": {
00:15:02.374  "state": "completed",
00:15:02.374  "digest": "sha256",
00:15:02.374  "dhgroup": "ffdhe8192"
00:15:02.374  }
00:15:02.374  }
00:15:02.374  ]'
00:15:02.374    04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:02.374   04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:02.374    04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:02.374   04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:15:02.374    04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:02.374   04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:02.374   04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:02.374   04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:02.631   04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa:
00:15:02.631   04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa:
00:15:03.561   04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:03.561  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:03.561   04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:03.562   04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:03.562   04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:03.562   04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:03.562   04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:03.562   04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:15:03.562   04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:15:03.819   04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3
00:15:03.819   04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:03.819   04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:03.819   04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:15:03.819   04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:15:03.819   04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:03.819   04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:15:03.819   04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:03.819   04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:03.819   04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:03.819   04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:15:03.819   04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:03.819   04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:04.750  
00:15:04.750    04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:04.750    04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:04.750    04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:05.007   04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:05.007    04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:05.007    04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:05.007    04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:05.007    04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:05.007   04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:05.007  {
00:15:05.007  "cntlid": 47,
00:15:05.007  "qid": 0,
00:15:05.007  "state": "enabled",
00:15:05.007  "thread": "nvmf_tgt_poll_group_000",
00:15:05.007  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:05.007  "listen_address": {
00:15:05.007  "trtype": "TCP",
00:15:05.007  "adrfam": "IPv4",
00:15:05.007  "traddr": "10.0.0.2",
00:15:05.007  "trsvcid": "4420"
00:15:05.007  },
00:15:05.007  "peer_address": {
00:15:05.007  "trtype": "TCP",
00:15:05.007  "adrfam": "IPv4",
00:15:05.007  "traddr": "10.0.0.1",
00:15:05.007  "trsvcid": "57636"
00:15:05.007  },
00:15:05.007  "auth": {
00:15:05.007  "state": "completed",
00:15:05.007  "digest": "sha256",
00:15:05.007  "dhgroup": "ffdhe8192"
00:15:05.007  }
00:15:05.007  }
00:15:05.007  ]'
00:15:05.007    04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:05.007   04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:05.008    04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:05.008   04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:15:05.008    04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:05.008   04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:05.008   04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:05.008   04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:05.573   04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:
00:15:05.573   04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:
00:15:06.508   04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:06.508  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:06.508   04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:06.508   04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:06.508   04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:06.508   04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:06.508   04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:15:06.508   04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:15:06.508   04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:06.508   04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:15:06.508   04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:15:06.508   04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0
00:15:06.508   04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:06.508   04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:06.508   04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:15:06.508   04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:15:06.508   04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:06.508   04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:06.508   04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:06.508   04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:06.508   04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:06.508   04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:06.508   04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:06.508   04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:07.072  
00:15:07.072    04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:07.072    04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:07.072    04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:07.329   04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:07.329    04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:07.329    04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:07.329    04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:07.329    04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:07.329   04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:07.329  {
00:15:07.329  "cntlid": 49,
00:15:07.329  "qid": 0,
00:15:07.329  "state": "enabled",
00:15:07.329  "thread": "nvmf_tgt_poll_group_000",
00:15:07.329  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:07.329  "listen_address": {
00:15:07.329  "trtype": "TCP",
00:15:07.329  "adrfam": "IPv4",
00:15:07.329  "traddr": "10.0.0.2",
00:15:07.329  "trsvcid": "4420"
00:15:07.329  },
00:15:07.329  "peer_address": {
00:15:07.329  "trtype": "TCP",
00:15:07.329  "adrfam": "IPv4",
00:15:07.329  "traddr": "10.0.0.1",
00:15:07.329  "trsvcid": "57676"
00:15:07.329  },
00:15:07.329  "auth": {
00:15:07.329  "state": "completed",
00:15:07.329  "digest": "sha384",
00:15:07.329  "dhgroup": "null"
00:15:07.329  }
00:15:07.329  }
00:15:07.329  ]'
00:15:07.329    04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:07.329   04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:07.329    04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:07.329   04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:15:07.329    04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:07.329   04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:07.329   04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:07.329   04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:07.586   04:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=:
00:15:07.586   04:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=:
00:15:08.519   04:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:08.519  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:08.519   04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:08.519   04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:08.519   04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:08.519   04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:08.519   04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:08.519   04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:15:08.519   04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:15:08.778   04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1
00:15:08.778   04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:08.778   04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:08.778   04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:15:08.778   04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:15:08.778   04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:08.778   04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:08.778   04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:08.778   04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:08.778   04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:08.778   04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:08.778   04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:08.778   04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:09.345  
00:15:09.345    04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:09.345    04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:09.345    04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:09.345   04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:09.345    04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:09.345    04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:09.345    04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:09.345    04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:09.345   04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:09.345  {
00:15:09.345  "cntlid": 51,
00:15:09.345  "qid": 0,
00:15:09.345  "state": "enabled",
00:15:09.345  "thread": "nvmf_tgt_poll_group_000",
00:15:09.345  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:09.345  "listen_address": {
00:15:09.345  "trtype": "TCP",
00:15:09.345  "adrfam": "IPv4",
00:15:09.345  "traddr": "10.0.0.2",
00:15:09.345  "trsvcid": "4420"
00:15:09.345  },
00:15:09.345  "peer_address": {
00:15:09.345  "trtype": "TCP",
00:15:09.345  "adrfam": "IPv4",
00:15:09.345  "traddr": "10.0.0.1",
00:15:09.345  "trsvcid": "52930"
00:15:09.345  },
00:15:09.345  "auth": {
00:15:09.345  "state": "completed",
00:15:09.345  "digest": "sha384",
00:15:09.345  "dhgroup": "null"
00:15:09.345  }
00:15:09.345  }
00:15:09.345  ]'
00:15:09.345    04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:09.604   04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:09.604    04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:09.604   04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:15:09.604    04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:09.604   04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:09.604   04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:09.604   04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:09.862   04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==:
00:15:09.862   04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==:
00:15:10.808   04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:10.808  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:10.808   04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:10.808   04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:10.808   04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:10.808   04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:10.808   04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:10.808   04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:15:10.808   04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:15:11.064   04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2
00:15:11.064   04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:11.064   04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:11.064   04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:15:11.064   04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:15:11.064   04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:11.064   04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:11.064   04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:11.064   04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:11.064   04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:11.064   04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:11.064   04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:11.064   04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:11.319  
00:15:11.319    04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:11.319    04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:11.319    04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:11.575   04:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:11.575    04:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:11.575    04:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:11.575    04:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:11.832    04:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:11.832   04:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:11.832  {
00:15:11.832  "cntlid": 53,
00:15:11.832  "qid": 0,
00:15:11.832  "state": "enabled",
00:15:11.832  "thread": "nvmf_tgt_poll_group_000",
00:15:11.832  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:11.832  "listen_address": {
00:15:11.832  "trtype": "TCP",
00:15:11.832  "adrfam": "IPv4",
00:15:11.832  "traddr": "10.0.0.2",
00:15:11.832  "trsvcid": "4420"
00:15:11.832  },
00:15:11.832  "peer_address": {
00:15:11.832  "trtype": "TCP",
00:15:11.832  "adrfam": "IPv4",
00:15:11.832  "traddr": "10.0.0.1",
00:15:11.832  "trsvcid": "52956"
00:15:11.832  },
00:15:11.832  "auth": {
00:15:11.832  "state": "completed",
00:15:11.832  "digest": "sha384",
00:15:11.832  "dhgroup": "null"
00:15:11.832  }
00:15:11.832  }
00:15:11.832  ]'
00:15:11.832    04:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:11.832   04:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:11.832    04:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:11.832   04:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:15:11.832    04:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:11.832   04:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:11.832   04:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:11.832   04:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:12.088   04:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa:
00:15:12.089   04:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa:
00:15:13.018   04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:13.018  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:13.018   04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:13.018   04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:13.018   04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:13.018   04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:13.018   04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:13.018   04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:15:13.018   04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:15:13.274   04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3
00:15:13.274   04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:13.274   04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:13.274   04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:15:13.274   04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:15:13.274   04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:13.274   04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:15:13.274   04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:13.274   04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:13.274   04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:13.274   04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:15:13.274   04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:13.274   04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:13.531  
00:15:13.531    04:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:13.531    04:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:13.531    04:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:14.094   04:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:14.094    04:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:14.094    04:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:14.094    04:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:14.094    04:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:14.094   04:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:14.094  {
00:15:14.094  "cntlid": 55,
00:15:14.094  "qid": 0,
00:15:14.094  "state": "enabled",
00:15:14.094  "thread": "nvmf_tgt_poll_group_000",
00:15:14.094  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:14.094  "listen_address": {
00:15:14.094  "trtype": "TCP",
00:15:14.094  "adrfam": "IPv4",
00:15:14.094  "traddr": "10.0.0.2",
00:15:14.094  "trsvcid": "4420"
00:15:14.094  },
00:15:14.094  "peer_address": {
00:15:14.094  "trtype": "TCP",
00:15:14.094  "adrfam": "IPv4",
00:15:14.094  "traddr": "10.0.0.1",
00:15:14.094  "trsvcid": "52982"
00:15:14.094  },
00:15:14.094  "auth": {
00:15:14.094  "state": "completed",
00:15:14.094  "digest": "sha384",
00:15:14.094  "dhgroup": "null"
00:15:14.094  }
00:15:14.094  }
00:15:14.094  ]'
00:15:14.094    04:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:14.094   04:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:14.094    04:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:14.094   04:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:15:14.094    04:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:14.094   04:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:14.094   04:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:14.094   04:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:14.352   04:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:
00:15:14.352   04:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:
00:15:15.285   04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:15.285  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:15.285   04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:15.285   04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:15.285   04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:15.285   04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:15.285   04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:15:15.285   04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:15.285   04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:15:15.285   04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:15:15.543   04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0
00:15:15.543   04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:15.543   04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:15.543   04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:15:15.543   04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:15:15.543   04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:15.543   04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:15.543   04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:15.543   04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:15.543   04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:15.543   04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:15.543   04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:15.543   04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:15.801  
00:15:15.801    04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:15.801    04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:15.801    04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:16.060   04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:16.060    04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:16.060    04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:16.060    04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:16.060    04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:16.060   04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:16.060  {
00:15:16.060  "cntlid": 57,
00:15:16.060  "qid": 0,
00:15:16.060  "state": "enabled",
00:15:16.060  "thread": "nvmf_tgt_poll_group_000",
00:15:16.060  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:16.060  "listen_address": {
00:15:16.060  "trtype": "TCP",
00:15:16.060  "adrfam": "IPv4",
00:15:16.060  "traddr": "10.0.0.2",
00:15:16.060  "trsvcid": "4420"
00:15:16.060  },
00:15:16.060  "peer_address": {
00:15:16.060  "trtype": "TCP",
00:15:16.060  "adrfam": "IPv4",
00:15:16.060  "traddr": "10.0.0.1",
00:15:16.060  "trsvcid": "52998"
00:15:16.060  },
00:15:16.060  "auth": {
00:15:16.060  "state": "completed",
00:15:16.060  "digest": "sha384",
00:15:16.060  "dhgroup": "ffdhe2048"
00:15:16.060  }
00:15:16.060  }
00:15:16.060  ]'
00:15:16.060    04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:16.318   04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:16.318    04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:16.318   04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:15:16.318    04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:16.318   04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:16.318   04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:16.318   04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:16.576   04:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=:
00:15:16.576   04:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=:
00:15:17.513   04:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:17.513  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:17.513   04:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:17.513   04:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:17.513   04:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:17.513   04:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:17.513   04:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:17.513   04:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:15:17.513   04:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:15:17.770   04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1
00:15:17.770   04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:17.770   04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:17.770   04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:15:17.770   04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:15:17.770   04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:17.770   04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:17.770   04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:17.770   04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:17.770   04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:17.770   04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:17.771   04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:17.771   04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:18.028  
00:15:18.028    04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:18.028    04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:18.028    04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:18.285   04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:18.286    04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:18.286    04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:18.286    04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:18.286    04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:18.286   04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:18.286  {
00:15:18.286  "cntlid": 59,
00:15:18.286  "qid": 0,
00:15:18.286  "state": "enabled",
00:15:18.286  "thread": "nvmf_tgt_poll_group_000",
00:15:18.286  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:18.286  "listen_address": {
00:15:18.286  "trtype": "TCP",
00:15:18.286  "adrfam": "IPv4",
00:15:18.286  "traddr": "10.0.0.2",
00:15:18.286  "trsvcid": "4420"
00:15:18.286  },
00:15:18.286  "peer_address": {
00:15:18.286  "trtype": "TCP",
00:15:18.286  "adrfam": "IPv4",
00:15:18.286  "traddr": "10.0.0.1",
00:15:18.286  "trsvcid": "55560"
00:15:18.286  },
00:15:18.286  "auth": {
00:15:18.286  "state": "completed",
00:15:18.286  "digest": "sha384",
00:15:18.286  "dhgroup": "ffdhe2048"
00:15:18.286  }
00:15:18.286  }
00:15:18.286  ]'
00:15:18.286    04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:18.543   04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:18.543    04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:18.543   04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:15:18.543    04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:18.543   04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:18.543   04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:18.543   04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:18.800   04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==:
00:15:18.800   04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==:
00:15:19.733   04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:19.733  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:19.733   04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:19.733   04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:19.733   04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:19.733   04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
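The `--dhchap-secret` strings passed to `nvme connect` above use the `DHHC-1:<hh>:<base64>:` representation, where (to my understanding of the NVMe in-band authentication spec; treat the layout details as an assumption) `<hh>` encodes the transformation hash (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload is the key material followed by a 4-byte CRC-32 trailer. A sketch that unpacks one of the secrets from this log:

```python
import base64

# One of the --dhchap-secret values from the log above (auth.sh@80).
secret = ("DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3"
          "NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==:")

proto, hash_id, b64, _ = secret.split(":")
assert proto == "DHHC-1"

blob = base64.b64decode(b64)
# Assumed layout: key material followed by a 4-byte CRC-32 of the key.
key, crc = blob[:-4], blob[-4:]
print(hash_id, len(key))
```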
00:15:19.733   04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:19.733   04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:15:19.733   04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:15:19.990   04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2
00:15:19.990   04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:19.990   04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:19.990   04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:15:19.990   04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:15:19.990   04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:19.990   04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:19.990   04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:19.990   04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:19.990   04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:19.990   04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:19.990   04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:19.990   04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:20.247  
00:15:20.247    04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:20.247    04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:20.247    04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:20.504   04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:20.504    04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:20.504    04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:20.505    04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:20.762    04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:20.762   04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:20.762  {
00:15:20.762  "cntlid": 61,
00:15:20.762  "qid": 0,
00:15:20.763  "state": "enabled",
00:15:20.763  "thread": "nvmf_tgt_poll_group_000",
00:15:20.763  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:20.763  "listen_address": {
00:15:20.763  "trtype": "TCP",
00:15:20.763  "adrfam": "IPv4",
00:15:20.763  "traddr": "10.0.0.2",
00:15:20.763  "trsvcid": "4420"
00:15:20.763  },
00:15:20.763  "peer_address": {
00:15:20.763  "trtype": "TCP",
00:15:20.763  "adrfam": "IPv4",
00:15:20.763  "traddr": "10.0.0.1",
00:15:20.763  "trsvcid": "55596"
00:15:20.763  },
00:15:20.763  "auth": {
00:15:20.763  "state": "completed",
00:15:20.763  "digest": "sha384",
00:15:20.763  "dhgroup": "ffdhe2048"
00:15:20.763  }
00:15:20.763  }
00:15:20.763  ]'
00:15:20.763    04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:20.763   04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:20.763    04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:20.763   04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:15:20.763    04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:20.763   04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:20.763   04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:20.763   04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:21.020   04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa:
00:15:21.020   04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa:
00:15:21.975   04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:21.975  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:21.975   04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:21.975   04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:21.975   04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:21.975   04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:21.975   04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:21.975   04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:15:21.975   04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:15:22.232   04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3
00:15:22.232   04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:22.232   04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:22.232   04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:15:22.232   04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:15:22.232   04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:22.232   04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:15:22.232   04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:22.232   04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:22.232   04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:22.232   04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:15:22.232   04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:22.232   04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:22.490  
00:15:22.748    04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:22.748    04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:22.748    04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:23.004   04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:23.004    04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:23.004    04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:23.004    04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:23.004    04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:23.004   04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:23.004  {
00:15:23.004  "cntlid": 63,
00:15:23.004  "qid": 0,
00:15:23.004  "state": "enabled",
00:15:23.004  "thread": "nvmf_tgt_poll_group_000",
00:15:23.004  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:23.004  "listen_address": {
00:15:23.004  "trtype": "TCP",
00:15:23.004  "adrfam": "IPv4",
00:15:23.004  "traddr": "10.0.0.2",
00:15:23.004  "trsvcid": "4420"
00:15:23.004  },
00:15:23.004  "peer_address": {
00:15:23.004  "trtype": "TCP",
00:15:23.004  "adrfam": "IPv4",
00:15:23.004  "traddr": "10.0.0.1",
00:15:23.004  "trsvcid": "55632"
00:15:23.004  },
00:15:23.004  "auth": {
00:15:23.004  "state": "completed",
00:15:23.004  "digest": "sha384",
00:15:23.004  "dhgroup": "ffdhe2048"
00:15:23.004  }
00:15:23.004  }
00:15:23.004  ]'
00:15:23.004    04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:23.004   04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:23.004    04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:23.004   04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:15:23.004    04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:23.004   04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:23.004   04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:23.004   04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:23.261   04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:
00:15:23.261   04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:
00:15:24.194   04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:24.194  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:24.194   04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:24.194   04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:24.194   04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:24.194   04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
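The auth.sh@119-123 trace lines show the driver loop: for each dhgroup, for each key id, the host is reconfigured via `bdev_nvme_set_options` and `connect_authenticate` is run. A minimal sketch of that iteration order, using only the values observed in this stretch of the log (the real script covers more digests and groups):

```python
from itertools import product

# Values observed in this portion of the log; auth.sh iterates more of each.
digest = "sha384"
dhgroups = ["ffdhe2048", "ffdhe3072"]
keyids = [0, 1, 2, 3]

# Mirrors: for dhgroup ...; for keyid ...; set_options; connect_authenticate
schedule = [(digest, g, k) for g, k in product(dhgroups, keyids)]
print(len(schedule), schedule[0], schedule[4])
```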
00:15:24.194   04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:15:24.194   04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:24.194   04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:15:24.194   04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:15:24.451   04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0
00:15:24.451   04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:24.451   04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:24.451   04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:15:24.451   04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:15:24.451   04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:24.451   04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:24.451   04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:24.451   04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:24.451   04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:24.451   04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:24.451   04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:24.451   04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:25.018  
00:15:25.018    04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:25.018    04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:25.018    04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:25.275   04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:25.275    04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:25.275    04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:25.275    04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:25.275    04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:25.275   04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:25.275  {
00:15:25.275  "cntlid": 65,
00:15:25.275  "qid": 0,
00:15:25.275  "state": "enabled",
00:15:25.275  "thread": "nvmf_tgt_poll_group_000",
00:15:25.275  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:25.275  "listen_address": {
00:15:25.275  "trtype": "TCP",
00:15:25.275  "adrfam": "IPv4",
00:15:25.275  "traddr": "10.0.0.2",
00:15:25.275  "trsvcid": "4420"
00:15:25.275  },
00:15:25.275  "peer_address": {
00:15:25.275  "trtype": "TCP",
00:15:25.275  "adrfam": "IPv4",
00:15:25.275  "traddr": "10.0.0.1",
00:15:25.275  "trsvcid": "55648"
00:15:25.275  },
00:15:25.275  "auth": {
00:15:25.275  "state": "completed",
00:15:25.275  "digest": "sha384",
00:15:25.275  "dhgroup": "ffdhe3072"
00:15:25.275  }
00:15:25.275  }
00:15:25.275  ]'
00:15:25.275    04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:25.275   04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:25.275    04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:25.275   04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:15:25.275    04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:25.275   04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:25.275   04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:25.275   04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:25.533   04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=:
00:15:25.533   04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=:
00:15:26.465   04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:26.465  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:26.465   04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:26.465   04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:26.465   04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:26.465   04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:26.465   04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:26.465   04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:15:26.465   04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:15:26.722   04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1
00:15:26.722   04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:26.722   04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:26.722   04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:15:26.722   04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:15:26.722   04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:26.723   04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:26.723   04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:26.723   04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:26.723   04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:26.723   04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:26.723   04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:26.723   04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:27.286  
00:15:27.286    04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:27.286    04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:27.286    04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:27.286   04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:27.286    04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:27.286    04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:27.286    04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:27.286    04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:27.286   04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:27.286  {
00:15:27.286  "cntlid": 67,
00:15:27.286  "qid": 0,
00:15:27.286  "state": "enabled",
00:15:27.286  "thread": "nvmf_tgt_poll_group_000",
00:15:27.286  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:27.286  "listen_address": {
00:15:27.286  "trtype": "TCP",
00:15:27.286  "adrfam": "IPv4",
00:15:27.286  "traddr": "10.0.0.2",
00:15:27.286  "trsvcid": "4420"
00:15:27.286  },
00:15:27.287  "peer_address": {
00:15:27.287  "trtype": "TCP",
00:15:27.287  "adrfam": "IPv4",
00:15:27.287  "traddr": "10.0.0.1",
00:15:27.287  "trsvcid": "54410"
00:15:27.287  },
00:15:27.287  "auth": {
00:15:27.287  "state": "completed",
00:15:27.287  "digest": "sha384",
00:15:27.287  "dhgroup": "ffdhe3072"
00:15:27.287  }
00:15:27.287  }
00:15:27.287  ]'
00:15:27.287    04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:27.545   04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:27.545    04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:27.545   04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:15:27.545    04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:27.545   04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:27.545   04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:27.545   04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:27.802   04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==:
00:15:27.802   04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==:
00:15:28.734   04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:28.734  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:28.734   04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:28.734   04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:28.734   04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:28.734   04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:28.734   04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:28.734   04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:15:28.734   04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:15:28.992   04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2
00:15:28.992   04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:28.992   04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:28.992   04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:15:28.992   04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:15:28.992   04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:28.992   04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:28.992   04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:28.992   04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:28.992   04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:28.992   04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:28.992   04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:28.992   04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:29.250  
00:15:29.250    04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:29.250    04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:29.250    04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:29.508   04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:29.508    04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:29.508    04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:29.508    04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:29.508    04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:29.508   04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:29.508  {
00:15:29.508  "cntlid": 69,
00:15:29.508  "qid": 0,
00:15:29.508  "state": "enabled",
00:15:29.508  "thread": "nvmf_tgt_poll_group_000",
00:15:29.508  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:29.508  "listen_address": {
00:15:29.508  "trtype": "TCP",
00:15:29.508  "adrfam": "IPv4",
00:15:29.508  "traddr": "10.0.0.2",
00:15:29.508  "trsvcid": "4420"
00:15:29.508  },
00:15:29.508  "peer_address": {
00:15:29.508  "trtype": "TCP",
00:15:29.508  "adrfam": "IPv4",
00:15:29.508  "traddr": "10.0.0.1",
00:15:29.508  "trsvcid": "54436"
00:15:29.508  },
00:15:29.508  "auth": {
00:15:29.508  "state": "completed",
00:15:29.508  "digest": "sha384",
00:15:29.508  "dhgroup": "ffdhe3072"
00:15:29.508  }
00:15:29.508  }
00:15:29.508  ]'
00:15:29.508    04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:29.508   04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:29.508    04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:29.766   04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:15:29.766    04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:29.766   04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:29.766   04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:29.766   04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:30.022   04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa:
00:15:30.022   04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa:
00:15:30.952   04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:30.952  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:30.952   04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:30.952   04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:30.952   04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:30.952   04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:30.952   04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:30.952   04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:15:30.952   04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:15:31.208   04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3
00:15:31.208   04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:31.208   04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:31.208   04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:15:31.208   04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:15:31.208   04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:31.208   04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:15:31.208   04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:31.208   04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:31.208   04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:31.208   04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:15:31.208   04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:31.208   04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:31.465  
00:15:31.465    04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:31.465    04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:31.465    04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:31.722   04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:31.722    04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:31.722    04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:31.722    04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:31.979    04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:31.979   04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:31.979  {
00:15:31.979  "cntlid": 71,
00:15:31.979  "qid": 0,
00:15:31.979  "state": "enabled",
00:15:31.979  "thread": "nvmf_tgt_poll_group_000",
00:15:31.979  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:31.979  "listen_address": {
00:15:31.979  "trtype": "TCP",
00:15:31.979  "adrfam": "IPv4",
00:15:31.979  "traddr": "10.0.0.2",
00:15:31.979  "trsvcid": "4420"
00:15:31.979  },
00:15:31.979  "peer_address": {
00:15:31.979  "trtype": "TCP",
00:15:31.979  "adrfam": "IPv4",
00:15:31.979  "traddr": "10.0.0.1",
00:15:31.979  "trsvcid": "54470"
00:15:31.979  },
00:15:31.979  "auth": {
00:15:31.979  "state": "completed",
00:15:31.979  "digest": "sha384",
00:15:31.979  "dhgroup": "ffdhe3072"
00:15:31.979  }
00:15:31.979  }
00:15:31.979  ]'
00:15:31.979    04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:31.980   04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:31.980    04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:31.980   04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:15:31.980    04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:31.980   04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:31.980   04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:31.980   04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:32.236   04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:
00:15:32.236   04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:
00:15:33.169   04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:33.169  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:33.169   04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:33.169   04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:33.169   04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:33.169   04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:33.169   04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:15:33.169   04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:33.169   04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:15:33.169   04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:15:33.426   04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0
00:15:33.426   04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:33.426   04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:33.426   04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:15:33.426   04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:15:33.426   04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:33.426   04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:33.427   04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:33.427   04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:33.427   04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:33.427   04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:33.427   04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:33.427   04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:33.992  
00:15:33.992    04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:33.992    04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:33.992    04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:34.250   04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:34.250    04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:34.250    04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:34.250    04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:34.250    04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:34.250   04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:34.250  {
00:15:34.250  "cntlid": 73,
00:15:34.250  "qid": 0,
00:15:34.250  "state": "enabled",
00:15:34.250  "thread": "nvmf_tgt_poll_group_000",
00:15:34.250  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:34.250  "listen_address": {
00:15:34.250  "trtype": "TCP",
00:15:34.250  "adrfam": "IPv4",
00:15:34.250  "traddr": "10.0.0.2",
00:15:34.250  "trsvcid": "4420"
00:15:34.250  },
00:15:34.250  "peer_address": {
00:15:34.250  "trtype": "TCP",
00:15:34.250  "adrfam": "IPv4",
00:15:34.250  "traddr": "10.0.0.1",
00:15:34.250  "trsvcid": "54502"
00:15:34.250  },
00:15:34.250  "auth": {
00:15:34.250  "state": "completed",
00:15:34.250  "digest": "sha384",
00:15:34.250  "dhgroup": "ffdhe4096"
00:15:34.250  }
00:15:34.250  }
00:15:34.250  ]'
00:15:34.250    04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:34.250   04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:34.250    04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:34.250   04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:15:34.250    04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:34.250   04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:34.250   04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:34.250   04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:34.509   04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=:
00:15:34.509   04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=:
00:15:35.443   04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:35.443  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:35.443   04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:35.443   04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:35.443   04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:35.443   04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:35.443   04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:35.443   04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:15:35.443   04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:15:35.702   04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1
00:15:35.702   04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:35.702   04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:35.702   04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:15:35.702   04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:15:35.702   04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:35.702   04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:35.702   04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:35.702   04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:35.702   04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:35.702   04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:35.702   04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:35.702   04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:36.286  
00:15:36.286    04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:36.286    04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:36.286    04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:36.286   04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:36.286    04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:36.286    04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:36.286    04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:36.544    04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:36.544   04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:36.544  {
00:15:36.544  "cntlid": 75,
00:15:36.544  "qid": 0,
00:15:36.544  "state": "enabled",
00:15:36.544  "thread": "nvmf_tgt_poll_group_000",
00:15:36.544  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:36.544  "listen_address": {
00:15:36.544  "trtype": "TCP",
00:15:36.544  "adrfam": "IPv4",
00:15:36.544  "traddr": "10.0.0.2",
00:15:36.544  "trsvcid": "4420"
00:15:36.544  },
00:15:36.544  "peer_address": {
00:15:36.544  "trtype": "TCP",
00:15:36.544  "adrfam": "IPv4",
00:15:36.544  "traddr": "10.0.0.1",
00:15:36.544  "trsvcid": "54544"
00:15:36.544  },
00:15:36.544  "auth": {
00:15:36.544  "state": "completed",
00:15:36.544  "digest": "sha384",
00:15:36.544  "dhgroup": "ffdhe4096"
00:15:36.544  }
00:15:36.544  }
00:15:36.544  ]'
00:15:36.544    04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:36.544   04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:36.544    04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:36.544   04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:15:36.544    04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:36.544   04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:36.544   04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:36.544   04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:36.801   04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==:
00:15:36.801   04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==:
00:15:37.734   04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:37.734  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:37.734   04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:37.734   04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:37.734   04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:37.734   04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:37.734   04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:37.734   04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:15:37.734   04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:15:37.993   04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2
00:15:37.993   04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:37.993   04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:37.993   04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:15:37.993   04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:15:37.993   04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:37.993   04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:37.993   04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:37.993   04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:37.993   04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:37.993   04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:37.993   04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:37.993   04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:38.559  
00:15:38.559    04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:38.559    04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:38.559    04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:38.816   04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:38.816    04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:38.816    04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:38.816    04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:38.816    04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:38.816   04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:38.816  {
00:15:38.816  "cntlid": 77,
00:15:38.816  "qid": 0,
00:15:38.816  "state": "enabled",
00:15:38.816  "thread": "nvmf_tgt_poll_group_000",
00:15:38.816  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:38.816  "listen_address": {
00:15:38.816  "trtype": "TCP",
00:15:38.816  "adrfam": "IPv4",
00:15:38.816  "traddr": "10.0.0.2",
00:15:38.816  "trsvcid": "4420"
00:15:38.816  },
00:15:38.816  "peer_address": {
00:15:38.816  "trtype": "TCP",
00:15:38.816  "adrfam": "IPv4",
00:15:38.816  "traddr": "10.0.0.1",
00:15:38.816  "trsvcid": "35198"
00:15:38.816  },
00:15:38.816  "auth": {
00:15:38.816  "state": "completed",
00:15:38.816  "digest": "sha384",
00:15:38.816  "dhgroup": "ffdhe4096"
00:15:38.816  }
00:15:38.816  }
00:15:38.816  ]'
00:15:38.816    04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:38.816   04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:38.816    04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:38.816   04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:15:38.816    04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:38.816   04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:38.816   04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:38.816   04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:39.073   04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa:
00:15:39.073   04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa:
00:15:40.003   04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:40.003  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:40.003   04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:40.003   04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:40.003   04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:40.003   04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:40.003   04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:40.003   04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:15:40.003   04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:15:40.261   04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3
00:15:40.261   04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:40.261   04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:40.261   04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:15:40.261   04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:15:40.261   04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:40.261   04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:15:40.261   04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:40.261   04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:40.261   04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:40.261   04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:15:40.261   04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:40.261   04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:40.824  
00:15:40.824    04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:40.824    04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:40.824    04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:41.081   04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:41.081    04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:41.081    04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:41.081    04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:41.081    04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:41.081   04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:41.081  {
00:15:41.081  "cntlid": 79,
00:15:41.081  "qid": 0,
00:15:41.081  "state": "enabled",
00:15:41.081  "thread": "nvmf_tgt_poll_group_000",
00:15:41.081  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:41.081  "listen_address": {
00:15:41.081  "trtype": "TCP",
00:15:41.081  "adrfam": "IPv4",
00:15:41.081  "traddr": "10.0.0.2",
00:15:41.081  "trsvcid": "4420"
00:15:41.081  },
00:15:41.081  "peer_address": {
00:15:41.081  "trtype": "TCP",
00:15:41.081  "adrfam": "IPv4",
00:15:41.081  "traddr": "10.0.0.1",
00:15:41.081  "trsvcid": "35222"
00:15:41.081  },
00:15:41.081  "auth": {
00:15:41.081  "state": "completed",
00:15:41.081  "digest": "sha384",
00:15:41.081  "dhgroup": "ffdhe4096"
00:15:41.081  }
00:15:41.081  }
00:15:41.081  ]'
00:15:41.081    04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:41.081   04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:41.081    04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:41.081   04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:15:41.081    04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:41.081   04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:41.081   04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:41.081   04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:41.339   04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:
00:15:41.339   04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:
00:15:42.270   04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:42.270  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:42.270   04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:42.270   04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:42.270   04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:42.270   04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:42.270   04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:15:42.270   04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:42.270   04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:15:42.270   04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:15:42.527   04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0
00:15:42.527   04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:42.527   04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:42.527   04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:15:42.527   04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:15:42.527   04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:42.527   04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:42.527   04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:42.527   04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:42.527   04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:42.527   04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:42.527   04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:42.527   04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:43.092  
00:15:43.092    04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:43.092    04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:43.092    04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:43.349   04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:43.349    04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:43.349    04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:43.349    04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:43.350    04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:43.350   04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:43.350  {
00:15:43.350  "cntlid": 81,
00:15:43.350  "qid": 0,
00:15:43.350  "state": "enabled",
00:15:43.350  "thread": "nvmf_tgt_poll_group_000",
00:15:43.350  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:43.350  "listen_address": {
00:15:43.350  "trtype": "TCP",
00:15:43.350  "adrfam": "IPv4",
00:15:43.350  "traddr": "10.0.0.2",
00:15:43.350  "trsvcid": "4420"
00:15:43.350  },
00:15:43.350  "peer_address": {
00:15:43.350  "trtype": "TCP",
00:15:43.350  "adrfam": "IPv4",
00:15:43.350  "traddr": "10.0.0.1",
00:15:43.350  "trsvcid": "35248"
00:15:43.350  },
00:15:43.350  "auth": {
00:15:43.350  "state": "completed",
00:15:43.350  "digest": "sha384",
00:15:43.350  "dhgroup": "ffdhe6144"
00:15:43.350  }
00:15:43.350  }
00:15:43.350  ]'
00:15:43.350    04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:43.350   04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:43.350    04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:43.350   04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:15:43.350    04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:43.607   04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:43.607   04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:43.607   04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:43.865   04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=:
00:15:43.865   04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=:
00:15:44.798   04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:44.798  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:44.798   04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:44.798   04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:44.798   04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:44.798   04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:44.798   04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:44.798   04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:15:44.799   04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:15:45.055   04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1
00:15:45.055   04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:45.055   04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:45.055   04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:15:45.055   04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:15:45.055   04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:45.055   04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:45.055   04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:45.055   04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:45.055   04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:45.055   04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:45.055   04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:45.055   04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:45.619  
00:15:45.619    04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:45.619    04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:45.619    04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:45.877   04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:45.877    04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:45.877    04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:45.877    04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:45.877    04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:45.877   04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:45.877  {
00:15:45.877  "cntlid": 83,
00:15:45.877  "qid": 0,
00:15:45.877  "state": "enabled",
00:15:45.877  "thread": "nvmf_tgt_poll_group_000",
00:15:45.877  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:45.877  "listen_address": {
00:15:45.877  "trtype": "TCP",
00:15:45.877  "adrfam": "IPv4",
00:15:45.877  "traddr": "10.0.0.2",
00:15:45.877  "trsvcid": "4420"
00:15:45.877  },
00:15:45.877  "peer_address": {
00:15:45.877  "trtype": "TCP",
00:15:45.877  "adrfam": "IPv4",
00:15:45.877  "traddr": "10.0.0.1",
00:15:45.877  "trsvcid": "35266"
00:15:45.877  },
00:15:45.877  "auth": {
00:15:45.877  "state": "completed",
00:15:45.877  "digest": "sha384",
00:15:45.877  "dhgroup": "ffdhe6144"
00:15:45.877  }
00:15:45.877  }
00:15:45.877  ]'
00:15:45.877    04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:45.877   04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:45.877    04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:45.877   04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:15:45.877    04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:45.877   04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:45.877   04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:45.877   04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:46.135   04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==:
00:15:46.135   04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==:
00:15:47.068   04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:47.068  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:47.068   04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:47.068   04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:47.068   04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:47.068   04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:47.068   04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:47.068   04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:15:47.068   04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
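The `hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144` call above expands to `rpc.py` against the host-side Unix socket `/var/tmp/host.sock`; `rpc.py` wraps the CLI flags into a JSON-RPC 2.0 request. A sketch of the plausible request body (the `dhchap_digests`/`dhchap_dhgroups` parameter names are an assumption derived from the flags, not taken from this log):

```shell
#!/bin/sh
# Hypothetical JSON-RPC request for the hostrpc call traced above;
# param names "dhchap_digests"/"dhchap_dhgroups" are assumed from the flags.
req='{"jsonrpc": "2.0", "id": 1, "method": "bdev_nvme_set_options",
      "params": {"dhchap_digests": ["sha384"], "dhchap_dhgroups": ["ffdhe6144"]}}'

# Validate the shape locally with jq (no SPDK target needed).
echo "$req" | jq -r '.method'
echo "$req" | jq -r '.params.dhchap_dhgroups[0]'
```

Restricting the host's allowed digests and DH groups per iteration is what forces each cycle in this log to negotiate exactly one digest/dhgroup pair, which the qpair dump then confirms.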
00:15:47.325   04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2
00:15:47.325   04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:47.325   04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:47.325   04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:15:47.325   04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:15:47.325   04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:47.325   04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:47.325   04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:47.325   04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:47.325   04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:47.325   04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:47.325   04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:47.325   04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:47.889  
00:15:47.889    04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:47.889    04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:47.889    04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:48.146   04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:48.146    04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:48.146    04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:48.146    04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:48.146    04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:48.146   04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:48.146  {
00:15:48.146  "cntlid": 85,
00:15:48.146  "qid": 0,
00:15:48.146  "state": "enabled",
00:15:48.146  "thread": "nvmf_tgt_poll_group_000",
00:15:48.146  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:48.146  "listen_address": {
00:15:48.146  "trtype": "TCP",
00:15:48.146  "adrfam": "IPv4",
00:15:48.146  "traddr": "10.0.0.2",
00:15:48.146  "trsvcid": "4420"
00:15:48.146  },
00:15:48.146  "peer_address": {
00:15:48.146  "trtype": "TCP",
00:15:48.146  "adrfam": "IPv4",
00:15:48.146  "traddr": "10.0.0.1",
00:15:48.146  "trsvcid": "43934"
00:15:48.146  },
00:15:48.146  "auth": {
00:15:48.146  "state": "completed",
00:15:48.146  "digest": "sha384",
00:15:48.146  "dhgroup": "ffdhe6144"
00:15:48.146  }
00:15:48.146  }
00:15:48.146  ]'
00:15:48.146    04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:48.146   04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:48.146    04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:48.146   04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:15:48.146    04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:48.146   04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:48.146   04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:48.146   04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:48.710   04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa:
00:15:48.710   04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa:
00:15:49.273   04:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:49.530  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:49.530   04:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:49.530   04:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:49.530   04:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:49.530   04:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
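Each `connect_authenticate` round in this log follows the same shape: restrict digests/dhgroups, add the host with a key pair, attach a bdev controller, verify the qpair's auth state, detach, do a kernel `nvme connect`/`disconnect`, then remove the host. A dry-run sketch of one cycle (commands are echoed, not executed; the paths, NQNs, and addresses are copied from the trace, and `run=echo` is a hypothetical wrapper for illustration):

```shell
#!/bin/sh
# Dry-run of one connect_authenticate cycle as traced above. Set run= (empty)
# to execute for real; that requires a live SPDK target plus nvme-cli.
run=echo
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

$run $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
$run $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
$run $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
$run $rpc nvmf_subsystem_get_qpairs "$subnqn"   # then jq-verify .auth fields
$run $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
$run nvme disconnect -n "$subnqn"
$run $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
```

Note the two RPC sockets in the trace: `rpc_cmd` (target-side, default socket) configures the subsystem, while `hostrpc` (`-s /var/tmp/host.sock`) drives the initiator-side bdev layer; the sketch mirrors that split.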
00:15:49.530   04:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:49.530   04:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:15:49.530   04:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:15:49.839   04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3
00:15:49.839   04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:49.839   04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:49.839   04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:15:49.839   04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:15:49.839   04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:49.839   04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:15:49.840   04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:49.840   04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:49.840   04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:49.840   04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:15:49.840   04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:49.840   04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:50.096  
00:15:50.352    04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:50.352    04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:50.352    04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:50.608   04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:50.608    04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:50.608    04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:50.608    04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:50.608    04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:50.608   04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:50.608  {
00:15:50.608  "cntlid": 87,
00:15:50.608  "qid": 0,
00:15:50.608  "state": "enabled",
00:15:50.608  "thread": "nvmf_tgt_poll_group_000",
00:15:50.608  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:50.608  "listen_address": {
00:15:50.608  "trtype": "TCP",
00:15:50.608  "adrfam": "IPv4",
00:15:50.608  "traddr": "10.0.0.2",
00:15:50.608  "trsvcid": "4420"
00:15:50.608  },
00:15:50.608  "peer_address": {
00:15:50.608  "trtype": "TCP",
00:15:50.608  "adrfam": "IPv4",
00:15:50.608  "traddr": "10.0.0.1",
00:15:50.608  "trsvcid": "43968"
00:15:50.608  },
00:15:50.608  "auth": {
00:15:50.608  "state": "completed",
00:15:50.608  "digest": "sha384",
00:15:50.608  "dhgroup": "ffdhe6144"
00:15:50.608  }
00:15:50.608  }
00:15:50.608  ]'
00:15:50.608    04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:50.608   04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:50.608    04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:50.608   04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:15:50.608    04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:50.608   04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:50.608   04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:50.609   04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:50.865   04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:
00:15:50.865   04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:
00:15:51.795   04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:51.795  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:51.796   04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:51.796   04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.796   04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:51.796   04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
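The `--dhchap-secret` strings in this log use the NVMe DH-HMAC-CHAP secret representation `DHHC-1:<hh>:<base64>:`, where, per my reading of the spec (TP 8006), `<hh>` names the hash the secret was sized for (`00` = no transformation, `01`/`02`/`03` = SHA-256/384/512) and the base64 payload is the raw key followed by a 4-byte CRC-32. A quick local sanity check on the `DHHC-1:03` secret from the cycle above:

```shell
#!/bin/sh
# Split a DHHC-1 secret from the trace and check the decoded payload length:
# a "03" (SHA-512-sized) secret should decode to 64 key bytes + 4 CRC-32 bytes.
secret='DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:'
hash_id=$(echo "$secret" | cut -d: -f2)
payload=$(echo "$secret" | cut -d: -f3)
len=$(echo "$payload" | base64 -d | wc -c)
echo "hash id $hash_id, decoded payload: $len bytes"   # expect 68 = 64 + 4
```

The same arithmetic explains the shorter secrets earlier in the log: the `DHHC-1:01` payloads decode to 36 bytes (32-byte key + CRC) and the `DHHC-1:02` payloads to 52 bytes (48-byte key + CRC).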
00:15:51.796   04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:15:51.796   04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:51.796   04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:15:51.796   04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:15:52.053   04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0
00:15:52.053   04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:52.053   04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:52.053   04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:15:52.053   04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:15:52.053   04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:52.053   04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:52.053   04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:52.053   04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:52.053   04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:52.053   04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:52.053   04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:52.053   04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:52.985  
00:15:52.985    04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:52.985    04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:52.985    04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:53.243   04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:53.243    04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:53.243    04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:53.243    04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:53.243    04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:53.243   04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:53.243  {
00:15:53.243  "cntlid": 89,
00:15:53.243  "qid": 0,
00:15:53.243  "state": "enabled",
00:15:53.243  "thread": "nvmf_tgt_poll_group_000",
00:15:53.243  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:53.244  "listen_address": {
00:15:53.244  "trtype": "TCP",
00:15:53.244  "adrfam": "IPv4",
00:15:53.244  "traddr": "10.0.0.2",
00:15:53.244  "trsvcid": "4420"
00:15:53.244  },
00:15:53.244  "peer_address": {
00:15:53.244  "trtype": "TCP",
00:15:53.244  "adrfam": "IPv4",
00:15:53.244  "traddr": "10.0.0.1",
00:15:53.244  "trsvcid": "43996"
00:15:53.244  },
00:15:53.244  "auth": {
00:15:53.244  "state": "completed",
00:15:53.244  "digest": "sha384",
00:15:53.244  "dhgroup": "ffdhe8192"
00:15:53.244  }
00:15:53.244  }
00:15:53.244  ]'
00:15:53.244    04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:53.244   04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:53.244    04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:53.244   04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:15:53.244    04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:53.244   04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:53.244   04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:53.244   04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:53.502   04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=:
00:15:53.502   04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=:
00:15:54.435   04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:54.435  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:54.435   04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:54.435   04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:54.435   04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:54.435   04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:54.435   04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:54.435   04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:15:54.435   04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:15:54.693   04:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1
00:15:54.693   04:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:54.693   04:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:54.693   04:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:15:54.693   04:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:15:54.693   04:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:54.693   04:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:54.693   04:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:54.693   04:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:54.693   04:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:54.693   04:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:54.693   04:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:54.693   04:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:55.625  
00:15:55.625    04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:55.625    04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:55.625    04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:55.883   04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:55.883    04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:55.883    04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:55.883    04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:55.883    04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:55.883   04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:55.883  {
00:15:55.883  "cntlid": 91,
00:15:55.883  "qid": 0,
00:15:55.883  "state": "enabled",
00:15:55.883  "thread": "nvmf_tgt_poll_group_000",
00:15:55.883  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:55.883  "listen_address": {
00:15:55.883  "trtype": "TCP",
00:15:55.883  "adrfam": "IPv4",
00:15:55.883  "traddr": "10.0.0.2",
00:15:55.883  "trsvcid": "4420"
00:15:55.883  },
00:15:55.883  "peer_address": {
00:15:55.883  "trtype": "TCP",
00:15:55.883  "adrfam": "IPv4",
00:15:55.883  "traddr": "10.0.0.1",
00:15:55.883  "trsvcid": "44038"
00:15:55.883  },
00:15:55.883  "auth": {
00:15:55.883  "state": "completed",
00:15:55.883  "digest": "sha384",
00:15:55.883  "dhgroup": "ffdhe8192"
00:15:55.883  }
00:15:55.883  }
00:15:55.883  ]'
00:15:55.883    04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:55.883   04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:55.883    04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:55.883   04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:15:55.883    04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:55.883   04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:55.883   04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:55.883   04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:56.140   04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==:
00:15:56.140   04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==:
00:15:57.071   04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:57.071  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:57.071   04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:57.071   04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:57.071   04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:57.071   04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:57.071   04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:57.071   04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:15:57.071   04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:15:57.328   04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2
00:15:57.328   04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:57.328   04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:57.328   04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:15:57.328   04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:15:57.328   04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:57.328   04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:57.328   04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:57.328   04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:57.328   04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:57.328   04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:57.328   04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:57.328   04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:58.261  
00:15:58.261    04:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:58.261    04:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:58.261    04:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:58.518   04:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:58.518    04:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:58.518    04:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:58.518    04:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:58.518    04:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:58.518   04:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:58.518  {
00:15:58.518  "cntlid": 93,
00:15:58.518  "qid": 0,
00:15:58.518  "state": "enabled",
00:15:58.518  "thread": "nvmf_tgt_poll_group_000",
00:15:58.518  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:58.518  "listen_address": {
00:15:58.518  "trtype": "TCP",
00:15:58.518  "adrfam": "IPv4",
00:15:58.518  "traddr": "10.0.0.2",
00:15:58.518  "trsvcid": "4420"
00:15:58.518  },
00:15:58.518  "peer_address": {
00:15:58.518  "trtype": "TCP",
00:15:58.518  "adrfam": "IPv4",
00:15:58.518  "traddr": "10.0.0.1",
00:15:58.518  "trsvcid": "50648"
00:15:58.518  },
00:15:58.518  "auth": {
00:15:58.518  "state": "completed",
00:15:58.518  "digest": "sha384",
00:15:58.518  "dhgroup": "ffdhe8192"
00:15:58.518  }
00:15:58.518  }
00:15:58.518  ]'
00:15:58.518    04:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:58.518   04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:58.518    04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:58.518   04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:15:58.518    04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:58.775   04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:58.775   04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:58.775   04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:59.033   04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa:
00:15:59.033   04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa:
00:15:59.963   04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:59.963  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:59.963   04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:59.963   04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:59.963   04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:59.963   04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:59.963   04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:59.963   04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:15:59.963   04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:16:00.219   04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3
00:16:00.219   04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:00.219   04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:00.219   04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:16:00.219   04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:00.219   04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:00.219   04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:16:00.219   04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:00.219   04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:00.219   04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:00.219   04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:00.219   04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:00.219   04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:01.149  
00:16:01.149    04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:01.149    04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:01.149    04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:01.149   04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:01.149    04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:01.149    04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.149    04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:01.149    04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.149   04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:01.149  {
00:16:01.149  "cntlid": 95,
00:16:01.149  "qid": 0,
00:16:01.149  "state": "enabled",
00:16:01.149  "thread": "nvmf_tgt_poll_group_000",
00:16:01.149  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:16:01.149  "listen_address": {
00:16:01.149  "trtype": "TCP",
00:16:01.149  "adrfam": "IPv4",
00:16:01.149  "traddr": "10.0.0.2",
00:16:01.149  "trsvcid": "4420"
00:16:01.149  },
00:16:01.149  "peer_address": {
00:16:01.149  "trtype": "TCP",
00:16:01.149  "adrfam": "IPv4",
00:16:01.149  "traddr": "10.0.0.1",
00:16:01.149  "trsvcid": "50678"
00:16:01.149  },
00:16:01.149  "auth": {
00:16:01.149  "state": "completed",
00:16:01.149  "digest": "sha384",
00:16:01.149  "dhgroup": "ffdhe8192"
00:16:01.149  }
00:16:01.149  }
00:16:01.149  ]'
00:16:01.149    04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:01.149   04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:01.149    04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:01.406   04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:01.406    04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:01.406   04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:01.406   04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:01.406   04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:01.665   04:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:
00:16:01.665   04:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:
00:16:02.600   04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:02.600  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:02.600   04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:02.600   04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:02.600   04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:02.600   04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:02.600   04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:16:02.600   04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:16:02.600   04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:02.600   04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:02.600   04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:02.859   04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0
00:16:02.859   04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:02.859   04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:02.859   04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:16:02.859   04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:02.859   04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:02.859   04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:02.859   04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:02.859   04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:02.859   04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:02.859   04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:02.859   04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:02.859   04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:03.163  
00:16:03.436    04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:03.436    04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:03.436    04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:03.708   04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:03.708    04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:03.708    04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.708    04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:03.708    04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.708   04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:03.708  {
00:16:03.708  "cntlid": 97,
00:16:03.708  "qid": 0,
00:16:03.708  "state": "enabled",
00:16:03.708  "thread": "nvmf_tgt_poll_group_000",
00:16:03.708  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:16:03.708  "listen_address": {
00:16:03.708  "trtype": "TCP",
00:16:03.708  "adrfam": "IPv4",
00:16:03.708  "traddr": "10.0.0.2",
00:16:03.708  "trsvcid": "4420"
00:16:03.708  },
00:16:03.708  "peer_address": {
00:16:03.708  "trtype": "TCP",
00:16:03.708  "adrfam": "IPv4",
00:16:03.708  "traddr": "10.0.0.1",
00:16:03.708  "trsvcid": "50702"
00:16:03.709  },
00:16:03.709  "auth": {
00:16:03.709  "state": "completed",
00:16:03.709  "digest": "sha512",
00:16:03.709  "dhgroup": "null"
00:16:03.709  }
00:16:03.709  }
00:16:03.709  ]'
00:16:03.709    04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:03.709   04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:03.709    04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:03.709   04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:16:03.709    04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:03.709   04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:03.709   04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:03.709   04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:03.984   04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=:
00:16:03.984   04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=:
00:16:04.973   04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:04.973  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:04.973   04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:04.973   04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:04.973   04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:04.973   04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.973   04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:04.974   04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:04.974   04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:05.247   04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1
00:16:05.247   04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:05.247   04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:05.247   04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:16:05.247   04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:05.247   04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:05.247   04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:05.247   04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:05.247   04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:05.247   04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:05.247   04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:05.247   04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:05.247   04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:05.522  
00:16:05.797    04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:05.797    04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:05.797    04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:06.073   04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:06.073    04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:06.073    04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:06.073    04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:06.073    04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:06.073   04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:06.073  {
00:16:06.073  "cntlid": 99,
00:16:06.073  "qid": 0,
00:16:06.073  "state": "enabled",
00:16:06.073  "thread": "nvmf_tgt_poll_group_000",
00:16:06.073  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:16:06.073  "listen_address": {
00:16:06.073  "trtype": "TCP",
00:16:06.073  "adrfam": "IPv4",
00:16:06.073  "traddr": "10.0.0.2",
00:16:06.073  "trsvcid": "4420"
00:16:06.073  },
00:16:06.073  "peer_address": {
00:16:06.073  "trtype": "TCP",
00:16:06.073  "adrfam": "IPv4",
00:16:06.073  "traddr": "10.0.0.1",
00:16:06.073  "trsvcid": "50726"
00:16:06.073  },
00:16:06.073  "auth": {
00:16:06.073  "state": "completed",
00:16:06.073  "digest": "sha512",
00:16:06.073  "dhgroup": "null"
00:16:06.073  }
00:16:06.073  }
00:16:06.073  ]'
00:16:06.073    04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:06.073   04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:06.073    04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:06.073   04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:16:06.073    04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:06.073   04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:06.073   04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:06.073   04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:06.345   04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==:
00:16:06.345   04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==:
00:16:07.334   04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:07.334  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:07.334   04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:07.334   04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:07.334   04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:07.334   04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:07.334   04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:07.334   04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:07.334   04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:07.616   04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2
00:16:07.616   04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:07.616   04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:07.617   04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:16:07.617   04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:07.617   04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:07.617   04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:07.617   04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:07.617   04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:07.617   04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:07.617   04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:07.617   04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:07.617   04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:07.888  
00:16:07.888    04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:07.888    04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:07.888    04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:08.163   04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:08.163    04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:08.163    04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:08.163    04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:08.163    04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:08.163   04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:08.163  {
00:16:08.163  "cntlid": 101,
00:16:08.163  "qid": 0,
00:16:08.163  "state": "enabled",
00:16:08.163  "thread": "nvmf_tgt_poll_group_000",
00:16:08.163  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:16:08.163  "listen_address": {
00:16:08.163  "trtype": "TCP",
00:16:08.163  "adrfam": "IPv4",
00:16:08.163  "traddr": "10.0.0.2",
00:16:08.163  "trsvcid": "4420"
00:16:08.163  },
00:16:08.163  "peer_address": {
00:16:08.163  "trtype": "TCP",
00:16:08.163  "adrfam": "IPv4",
00:16:08.163  "traddr": "10.0.0.1",
00:16:08.163  "trsvcid": "57920"
00:16:08.163  },
00:16:08.163  "auth": {
00:16:08.163  "state": "completed",
00:16:08.163  "digest": "sha512",
00:16:08.163  "dhgroup": "null"
00:16:08.163  }
00:16:08.163  }
00:16:08.163  ]'
00:16:08.163    04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:08.163   04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:08.163    04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:08.163   04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:16:08.163    04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:08.163   04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:08.163   04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:08.163   04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:08.450   04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa:
00:16:08.450   04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa:
00:16:09.384   04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:09.384  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:09.384   04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:09.384   04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:09.384   04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:09.384   04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:09.384   04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:09.384   04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:09.384   04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:09.642   04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3
00:16:09.642   04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:09.642   04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:09.642   04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:16:09.642   04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:09.642   04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:09.642   04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:16:09.642   04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:09.642   04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:09.642   04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:09.642   04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:09.642   04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:09.642   04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:09.899  
00:16:09.899    04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:09.899    04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:09.899    04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:10.156   04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:10.156    04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:10.156    04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:10.157    04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:10.157    04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:10.157   04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:10.157  {
00:16:10.157  "cntlid": 103,
00:16:10.157  "qid": 0,
00:16:10.157  "state": "enabled",
00:16:10.157  "thread": "nvmf_tgt_poll_group_000",
00:16:10.157  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:16:10.157  "listen_address": {
00:16:10.157  "trtype": "TCP",
00:16:10.157  "adrfam": "IPv4",
00:16:10.157  "traddr": "10.0.0.2",
00:16:10.157  "trsvcid": "4420"
00:16:10.157  },
00:16:10.157  "peer_address": {
00:16:10.157  "trtype": "TCP",
00:16:10.157  "adrfam": "IPv4",
00:16:10.157  "traddr": "10.0.0.1",
00:16:10.157  "trsvcid": "57942"
00:16:10.157  },
00:16:10.157  "auth": {
00:16:10.157  "state": "completed",
00:16:10.157  "digest": "sha512",
00:16:10.157  "dhgroup": "null"
00:16:10.157  }
00:16:10.157  }
00:16:10.157  ]'
00:16:10.157    04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:10.413   04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:10.413    04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:10.413   04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:16:10.413    04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:10.413   04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:10.413   04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:10.413   04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:10.670   04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:
00:16:10.670   04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:
00:16:11.603   04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:11.603  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:11.603   04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:11.603   04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:11.603   04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:11.603   04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:11.603   04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:16:11.603   04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:11.603   04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:11.603   04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:11.862   04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0
00:16:11.862   04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:11.862   04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:11.862   04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:16:11.862   04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:11.862   04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:11.862   04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:11.862   04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:11.862   04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:11.862   04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:11.862   04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:11.862   04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:11.862   04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:12.120  
00:16:12.120    04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:12.120    04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:12.120    04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:12.378   04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:12.378    04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:12.378    04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:12.378    04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:12.378    04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:12.378   04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:12.378  {
00:16:12.378  "cntlid": 105,
00:16:12.378  "qid": 0,
00:16:12.378  "state": "enabled",
00:16:12.378  "thread": "nvmf_tgt_poll_group_000",
00:16:12.378  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:16:12.378  "listen_address": {
00:16:12.378  "trtype": "TCP",
00:16:12.378  "adrfam": "IPv4",
00:16:12.378  "traddr": "10.0.0.2",
00:16:12.378  "trsvcid": "4420"
00:16:12.378  },
00:16:12.378  "peer_address": {
00:16:12.378  "trtype": "TCP",
00:16:12.378  "adrfam": "IPv4",
00:16:12.378  "traddr": "10.0.0.1",
00:16:12.378  "trsvcid": "57968"
00:16:12.378  },
00:16:12.378  "auth": {
00:16:12.378  "state": "completed",
00:16:12.378  "digest": "sha512",
00:16:12.378  "dhgroup": "ffdhe2048"
00:16:12.378  }
00:16:12.378  }
00:16:12.378  ]'
00:16:12.378    04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:12.636   04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:12.636    04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:12.636   04:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:12.636    04:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:12.636   04:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:12.636   04:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:12.636   04:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:12.894   04:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=:
00:16:12.894   04:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=:
00:16:13.828   04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:13.828  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:13.828   04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:13.828   04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:13.828   04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:13.828   04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:13.828   04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:13.828   04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:13.828   04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:14.086   04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1
00:16:14.086   04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:14.086   04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:14.086   04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:16:14.086   04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:14.086   04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:14.086   04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:14.086   04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:14.086   04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:14.086   04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:14.086   04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:14.086   04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:14.086   04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:14.345  
00:16:14.345    04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:14.345    04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:14.345    04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:14.603   04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:14.603    04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:14.603    04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:14.603    04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:14.603    04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:14.603   04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:14.603  {
00:16:14.603  "cntlid": 107,
00:16:14.603  "qid": 0,
00:16:14.603  "state": "enabled",
00:16:14.603  "thread": "nvmf_tgt_poll_group_000",
00:16:14.603  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:16:14.603  "listen_address": {
00:16:14.603  "trtype": "TCP",
00:16:14.603  "adrfam": "IPv4",
00:16:14.603  "traddr": "10.0.0.2",
00:16:14.603  "trsvcid": "4420"
00:16:14.603  },
00:16:14.603  "peer_address": {
00:16:14.603  "trtype": "TCP",
00:16:14.603  "adrfam": "IPv4",
00:16:14.603  "traddr": "10.0.0.1",
00:16:14.603  "trsvcid": "57988"
00:16:14.603  },
00:16:14.603  "auth": {
00:16:14.603  "state": "completed",
00:16:14.603  "digest": "sha512",
00:16:14.603  "dhgroup": "ffdhe2048"
00:16:14.603  }
00:16:14.603  }
00:16:14.603  ]'
00:16:14.603    04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:14.861   04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:14.861    04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:14.861   04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:14.861    04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:14.861   04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:14.861   04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:14.861   04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:15.120   04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==:
00:16:15.120   04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==:
00:16:16.054   04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:16.055  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:16.055   04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:16.055   04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:16.055   04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:16.055   04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:16.055   04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:16.055   04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:16.055   04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:16.313   04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2
00:16:16.313   04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:16.313   04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:16.313   04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:16:16.313   04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:16.314   04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:16.314   04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:16.314   04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:16.314   04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:16.314   04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:16.314   04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:16.314   04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:16.314   04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:16.582  
00:16:16.582    04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:16.582    04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:16.582    04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:16.838   04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:16.838    04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:16.839    04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:16.839    04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:16.839    04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:16.839   04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:16.839  {
00:16:16.839  "cntlid": 109,
00:16:16.839  "qid": 0,
00:16:16.839  "state": "enabled",
00:16:16.839  "thread": "nvmf_tgt_poll_group_000",
00:16:16.839  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:16:16.839  "listen_address": {
00:16:16.839  "trtype": "TCP",
00:16:16.839  "adrfam": "IPv4",
00:16:16.839  "traddr": "10.0.0.2",
00:16:16.839  "trsvcid": "4420"
00:16:16.839  },
00:16:16.839  "peer_address": {
00:16:16.839  "trtype": "TCP",
00:16:16.839  "adrfam": "IPv4",
00:16:16.839  "traddr": "10.0.0.1",
00:16:16.839  "trsvcid": "58022"
00:16:16.839  },
00:16:16.839  "auth": {
00:16:16.839  "state": "completed",
00:16:16.839  "digest": "sha512",
00:16:16.839  "dhgroup": "ffdhe2048"
00:16:16.839  }
00:16:16.839  }
00:16:16.839  ]'
00:16:16.839    04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:16.839   04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:16.839    04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:17.095   04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:17.095    04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:17.095   04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:17.095   04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:17.095   04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:17.352   04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa:
00:16:17.352   04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa:
00:16:18.284   04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:18.284  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:18.284   04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:18.284   04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.284   04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:18.284   04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.284   04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:18.284   04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:18.284   04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:18.540   04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3
00:16:18.540   04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:18.540   04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:18.540   04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:16:18.540   04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:18.540   04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:18.540   04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:16:18.540   04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.540   04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:18.540   04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.540   04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:18.540   04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:18.540   04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:18.813  
00:16:18.813    04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:18.813    04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:18.813    04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:19.069   04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:19.070    04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:19.070    04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:19.070    04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:19.070    04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:19.070   04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:19.070  {
00:16:19.070  "cntlid": 111,
00:16:19.070  "qid": 0,
00:16:19.070  "state": "enabled",
00:16:19.070  "thread": "nvmf_tgt_poll_group_000",
00:16:19.070  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:16:19.070  "listen_address": {
00:16:19.070  "trtype": "TCP",
00:16:19.070  "adrfam": "IPv4",
00:16:19.070  "traddr": "10.0.0.2",
00:16:19.070  "trsvcid": "4420"
00:16:19.070  },
00:16:19.070  "peer_address": {
00:16:19.070  "trtype": "TCP",
00:16:19.070  "adrfam": "IPv4",
00:16:19.070  "traddr": "10.0.0.1",
00:16:19.070  "trsvcid": "38166"
00:16:19.070  },
00:16:19.070  "auth": {
00:16:19.070  "state": "completed",
00:16:19.070  "digest": "sha512",
00:16:19.070  "dhgroup": "ffdhe2048"
00:16:19.070  }
00:16:19.070  }
00:16:19.070  ]'
00:16:19.070    04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:19.070   04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:19.326    04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:19.326   04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:19.326    04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:19.326   04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:19.326   04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:19.326   04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:19.583   04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:
00:16:19.583   04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:
00:16:20.514   04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:20.514  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:20.514   04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:20.514   04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.514   04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:20.514   04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.514   04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:16:20.514   04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:20.515   04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:20.515   04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:20.772   04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0
00:16:20.772   04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:20.772   04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:20.772   04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:16:20.772   04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:20.772   04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:20.772   04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:20.772   04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.772   04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:20.772   04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.772   04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:20.772   04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:20.773   04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:21.030  
00:16:21.030    04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:21.030    04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:21.030    04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:21.289   04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:21.289    04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:21.289    04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:21.289    04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:21.289    04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:21.289   04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:21.289  {
00:16:21.289  "cntlid": 113,
00:16:21.289  "qid": 0,
00:16:21.289  "state": "enabled",
00:16:21.289  "thread": "nvmf_tgt_poll_group_000",
00:16:21.289  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:16:21.289  "listen_address": {
00:16:21.289  "trtype": "TCP",
00:16:21.289  "adrfam": "IPv4",
00:16:21.289  "traddr": "10.0.0.2",
00:16:21.289  "trsvcid": "4420"
00:16:21.289  },
00:16:21.289  "peer_address": {
00:16:21.289  "trtype": "TCP",
00:16:21.289  "adrfam": "IPv4",
00:16:21.289  "traddr": "10.0.0.1",
00:16:21.289  "trsvcid": "38198"
00:16:21.289  },
00:16:21.289  "auth": {
00:16:21.289  "state": "completed",
00:16:21.289  "digest": "sha512",
00:16:21.289  "dhgroup": "ffdhe3072"
00:16:21.289  }
00:16:21.289  }
00:16:21.289  ]'
00:16:21.289    04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:21.548   04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:21.548    04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:21.548   04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:21.548    04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:21.548   04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:21.548   04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:21.548   04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:21.809   04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=:
00:16:21.809   04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=:
00:16:22.745   04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:22.745  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:22.745   04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:22.745   04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:22.745   04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:22.745   04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:22.745   04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:22.745   04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:22.745   04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:23.003   04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1
00:16:23.003   04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:23.003   04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:23.003   04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:16:23.003   04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:23.003   04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:23.003   04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:23.003   04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:23.003   04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:23.003   04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:23.003   04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:23.003   04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:23.003   04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:23.262  
00:16:23.262    04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:23.262    04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:23.262    04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:23.828   04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:23.828    04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:23.828    04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:23.828    04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:23.828    04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:23.828   04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:23.828  {
00:16:23.828  "cntlid": 115,
00:16:23.828  "qid": 0,
00:16:23.828  "state": "enabled",
00:16:23.828  "thread": "nvmf_tgt_poll_group_000",
00:16:23.828  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:16:23.828  "listen_address": {
00:16:23.828  "trtype": "TCP",
00:16:23.828  "adrfam": "IPv4",
00:16:23.828  "traddr": "10.0.0.2",
00:16:23.828  "trsvcid": "4420"
00:16:23.828  },
00:16:23.828  "peer_address": {
00:16:23.828  "trtype": "TCP",
00:16:23.828  "adrfam": "IPv4",
00:16:23.828  "traddr": "10.0.0.1",
00:16:23.828  "trsvcid": "38240"
00:16:23.828  },
00:16:23.828  "auth": {
00:16:23.828  "state": "completed",
00:16:23.828  "digest": "sha512",
00:16:23.828  "dhgroup": "ffdhe3072"
00:16:23.828  }
00:16:23.828  }
00:16:23.828  ]'
00:16:23.828    04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:23.828   04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:23.828    04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:23.828   04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:23.828    04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:23.828   04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:23.828   04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:23.828   04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:24.087   04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==:
00:16:24.087   04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==:
00:16:25.021   04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:25.021  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:25.021   04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:25.021   04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:25.021   04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:25.022   04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:25.022   04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:25.022   04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:25.022   04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:25.280   04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2
00:16:25.280   04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:25.280   04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:25.280   04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:16:25.280   04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:25.280   04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:25.280   04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:25.280   04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:25.280   04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:25.280   04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:25.280   04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:25.280   04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:25.280   04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:25.538  
00:16:25.538    04:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:25.538    04:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:25.538    04:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:25.797   04:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:25.797    04:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:25.797    04:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:25.797    04:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:25.797    04:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:25.797   04:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:25.797  {
00:16:25.797  "cntlid": 117,
00:16:25.797  "qid": 0,
00:16:25.797  "state": "enabled",
00:16:25.797  "thread": "nvmf_tgt_poll_group_000",
00:16:25.797  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:16:25.797  "listen_address": {
00:16:25.797  "trtype": "TCP",
00:16:25.797  "adrfam": "IPv4",
00:16:25.797  "traddr": "10.0.0.2",
00:16:25.797  "trsvcid": "4420"
00:16:25.797  },
00:16:25.797  "peer_address": {
00:16:25.797  "trtype": "TCP",
00:16:25.797  "adrfam": "IPv4",
00:16:25.797  "traddr": "10.0.0.1",
00:16:25.797  "trsvcid": "38262"
00:16:25.797  },
00:16:25.797  "auth": {
00:16:25.797  "state": "completed",
00:16:25.797  "digest": "sha512",
00:16:25.797  "dhgroup": "ffdhe3072"
00:16:25.797  }
00:16:25.797  }
00:16:25.797  ]'
00:16:25.797    04:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:26.055   04:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:26.055    04:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:26.055   04:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:26.055    04:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:26.055   04:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:26.055   04:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:26.055   04:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:26.312   04:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa:
00:16:26.312   04:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa:
00:16:27.243   04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:27.243  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:27.243   04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:27.243   04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:27.243   04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:27.243   04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:27.243   04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:27.243   04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:27.243   04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:27.501   04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3
00:16:27.501   04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:27.501   04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:27.501   04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:16:27.501   04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:27.501   04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:27.501   04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:16:27.501   04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:27.501   04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:27.501   04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:27.501   04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:27.501   04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:27.501   04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:27.759  
00:16:27.759    04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:27.759    04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:27.759    04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:28.017   04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:28.017    04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:28.017    04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:28.017    04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:28.017    04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:28.017   04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:28.017  {
00:16:28.017  "cntlid": 119,
00:16:28.017  "qid": 0,
00:16:28.017  "state": "enabled",
00:16:28.017  "thread": "nvmf_tgt_poll_group_000",
00:16:28.017  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:16:28.017  "listen_address": {
00:16:28.017  "trtype": "TCP",
00:16:28.017  "adrfam": "IPv4",
00:16:28.017  "traddr": "10.0.0.2",
00:16:28.017  "trsvcid": "4420"
00:16:28.017  },
00:16:28.017  "peer_address": {
00:16:28.017  "trtype": "TCP",
00:16:28.017  "adrfam": "IPv4",
00:16:28.017  "traddr": "10.0.0.1",
00:16:28.017  "trsvcid": "53244"
00:16:28.017  },
00:16:28.017  "auth": {
00:16:28.017  "state": "completed",
00:16:28.017  "digest": "sha512",
00:16:28.017  "dhgroup": "ffdhe3072"
00:16:28.017  }
00:16:28.017  }
00:16:28.017  ]'
00:16:28.017    04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:28.274   04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:28.274    04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:28.274   04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:28.275    04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:28.275   04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:28.275   04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:28.275   04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:28.531   04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:
00:16:28.531   04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:
00:16:29.462   04:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:29.462  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:29.462   04:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:29.462   04:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:29.462   04:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:29.462   04:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:29.462   04:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:16:29.462   04:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:29.462   04:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:29.462   04:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:29.719   04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0
00:16:29.719   04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:29.719   04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:29.719   04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:16:29.719   04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:29.719   04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:29.719   04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:29.719   04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:29.719   04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:29.719   04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:29.719   04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:29.719   04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:29.719   04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:29.976  
00:16:29.976    04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:29.976    04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:29.976    04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:30.234   04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:30.234    04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:30.234    04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:30.234    04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:30.234    04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:30.234   04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:30.234  {
00:16:30.234  "cntlid": 121,
00:16:30.234  "qid": 0,
00:16:30.234  "state": "enabled",
00:16:30.234  "thread": "nvmf_tgt_poll_group_000",
00:16:30.234  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:16:30.234  "listen_address": {
00:16:30.234  "trtype": "TCP",
00:16:30.234  "adrfam": "IPv4",
00:16:30.234  "traddr": "10.0.0.2",
00:16:30.234  "trsvcid": "4420"
00:16:30.234  },
00:16:30.234  "peer_address": {
00:16:30.234  "trtype": "TCP",
00:16:30.234  "adrfam": "IPv4",
00:16:30.234  "traddr": "10.0.0.1",
00:16:30.234  "trsvcid": "53272"
00:16:30.234  },
00:16:30.234  "auth": {
00:16:30.234  "state": "completed",
00:16:30.234  "digest": "sha512",
00:16:30.234  "dhgroup": "ffdhe4096"
00:16:30.234  }
00:16:30.234  }
00:16:30.234  ]'
00:16:30.234    04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:30.492   04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:30.492    04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:30.492   04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:30.492    04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:30.492   04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:30.492   04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:30.492   04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:30.750   04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=:
00:16:30.750   04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=:
00:16:31.689   04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:31.689  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:31.689   04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:31.689   04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:31.689   04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:31.689   04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:31.689   04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:31.689   04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:31.689   04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:31.947   04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1
00:16:31.947   04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:31.947   04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:31.947   04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:16:31.947   04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:31.947   04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:31.947   04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:31.947   04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:31.947   04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:31.947   04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:31.947   04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:31.948   04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:31.948   04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:32.514  
00:16:32.514    04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:32.514    04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:32.514    04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:32.514   04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:32.514    04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:32.514    04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:32.514    04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:32.772    04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:32.772   04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:32.772  {
00:16:32.772  "cntlid": 123,
00:16:32.772  "qid": 0,
00:16:32.772  "state": "enabled",
00:16:32.772  "thread": "nvmf_tgt_poll_group_000",
00:16:32.772  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:16:32.772  "listen_address": {
00:16:32.772  "trtype": "TCP",
00:16:32.772  "adrfam": "IPv4",
00:16:32.772  "traddr": "10.0.0.2",
00:16:32.772  "trsvcid": "4420"
00:16:32.772  },
00:16:32.772  "peer_address": {
00:16:32.772  "trtype": "TCP",
00:16:32.772  "adrfam": "IPv4",
00:16:32.772  "traddr": "10.0.0.1",
00:16:32.772  "trsvcid": "53310"
00:16:32.772  },
00:16:32.772  "auth": {
00:16:32.772  "state": "completed",
00:16:32.772  "digest": "sha512",
00:16:32.772  "dhgroup": "ffdhe4096"
00:16:32.772  }
00:16:32.772  }
00:16:32.772  ]'
00:16:32.772    04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:32.772   04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:32.772    04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:32.772   04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:32.772    04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:32.772   04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:32.772   04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:32.772   04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:33.035   04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==:
00:16:33.035   04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==:
00:16:33.972   04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:33.972  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:33.972   04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:33.972   04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:33.972   04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:33.972   04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:33.972   04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:33.972   04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:33.972   04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:34.230   04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2
00:16:34.230   04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:34.230   04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:34.230   04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:16:34.230   04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:34.230   04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:34.230   04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:34.230   04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:34.230   04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:34.230   04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:34.230   04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:34.230   04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:34.230   04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:34.796  
00:16:34.796    04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:34.797    04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:34.797    04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:35.055   04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:35.055    04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:35.055    04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:35.055    04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:35.055    04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:35.055   04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:35.055  {
00:16:35.055  "cntlid": 125,
00:16:35.055  "qid": 0,
00:16:35.055  "state": "enabled",
00:16:35.055  "thread": "nvmf_tgt_poll_group_000",
00:16:35.055  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:16:35.055  "listen_address": {
00:16:35.055  "trtype": "TCP",
00:16:35.055  "adrfam": "IPv4",
00:16:35.055  "traddr": "10.0.0.2",
00:16:35.055  "trsvcid": "4420"
00:16:35.055  },
00:16:35.055  "peer_address": {
00:16:35.055  "trtype": "TCP",
00:16:35.055  "adrfam": "IPv4",
00:16:35.055  "traddr": "10.0.0.1",
00:16:35.055  "trsvcid": "53336"
00:16:35.055  },
00:16:35.055  "auth": {
00:16:35.055  "state": "completed",
00:16:35.055  "digest": "sha512",
00:16:35.055  "dhgroup": "ffdhe4096"
00:16:35.055  }
00:16:35.055  }
00:16:35.055  ]'
00:16:35.055    04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:35.055   04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:35.055    04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:35.055   04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:35.055    04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:35.055   04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:35.055   04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:35.055   04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:35.313   04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa:
00:16:35.313   04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa:
00:16:36.247   04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:36.247  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:36.247   04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:36.247   04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:36.247   04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:36.247   04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:36.247   04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:36.247   04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:36.247   04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:36.506   04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3
00:16:36.506   04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:36.506   04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:36.506   04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:16:36.506   04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:36.506   04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:36.506   04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:16:36.506   04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:36.506   04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:36.506   04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:36.506   04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:36.506   04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:36.506   04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:37.072  
00:16:37.072    04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:37.072    04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:37.072    04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:37.329   04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:37.329    04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:37.329    04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:37.329    04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:37.329    04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:37.329   04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:37.329  {
00:16:37.329    "cntlid": 127,
00:16:37.329    "qid": 0,
00:16:37.329    "state": "enabled",
00:16:37.329    "thread": "nvmf_tgt_poll_group_000",
00:16:37.329    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:16:37.329    "listen_address": {
00:16:37.329      "trtype": "TCP",
00:16:37.329      "adrfam": "IPv4",
00:16:37.329      "traddr": "10.0.0.2",
00:16:37.329      "trsvcid": "4420"
00:16:37.329    },
00:16:37.329    "peer_address": {
00:16:37.329      "trtype": "TCP",
00:16:37.329      "adrfam": "IPv4",
00:16:37.329      "traddr": "10.0.0.1",
00:16:37.329      "trsvcid": "53348"
00:16:37.329    },
00:16:37.329    "auth": {
00:16:37.329      "state": "completed",
00:16:37.329      "digest": "sha512",
00:16:37.329      "dhgroup": "ffdhe4096"
00:16:37.329    }
00:16:37.329  }
00:16:37.329  ]'
00:16:37.329    04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:37.329   04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:37.329    04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:37.329   04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:37.329    04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:37.329   04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:37.329   04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:37.329   04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:37.894   04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:
00:16:37.894   04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:
00:16:38.826   04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:38.827  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:38.827   04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:38.827   04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:38.827   04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:38.827   04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
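An aside on the `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` line traced at auth.sh@68: it explains why the key3 cycle above attaches with only `--dhchap-key key3`, while the key0-key2 cycles also pass `--dhchap-ctrlr-key`. When the controller-key array entry is empty, the `${parameter:+word}` expansion yields no words at all. A minimal, self-contained sketch of that idiom (the `ckeys` values and the `build_ckey_args` helper are hypothetical, not from auth.sh):

```shell
# Hypothetical illustration of the ${parameter:+word} idiom used at auth.sh@68
# to build the optional --dhchap-ctrlr-key arguments. The secret values below
# are made up; in the run above only the key3 entry is empty, which is why the
# key3 attach carries no controller key.
ckeys=("secret0" "secret1" "secret2" "")

build_ckey_args() {
  local keyid=$1
  # Expands to two words when ckeys[keyid] is non-empty, to nothing otherwise.
  local args=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo "${#args[@]}"
}
```

Calling `build_ckey_args 0` reports two extra arguments; `build_ckey_args 3` reports zero, matching the flag-less key3 attach in the trace.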
00:16:38.827   04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:16:38.827   04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:38.827   04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:38.827   04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:38.827   04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0
00:16:38.827   04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:38.827   04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:38.827   04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:16:38.827   04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:38.827   04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:38.827   04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:38.827   04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:38.827   04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:38.827   04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:38.827   04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:38.827   04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:38.827   04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:39.392  
00:16:39.650    04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:39.650    04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:39.650    04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:39.907   04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:39.907    04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:39.907    04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:39.907    04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:39.907    04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:39.907   04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:39.907  {
00:16:39.907    "cntlid": 129,
00:16:39.907    "qid": 0,
00:16:39.907    "state": "enabled",
00:16:39.907    "thread": "nvmf_tgt_poll_group_000",
00:16:39.907    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:16:39.907    "listen_address": {
00:16:39.907      "trtype": "TCP",
00:16:39.907      "adrfam": "IPv4",
00:16:39.907      "traddr": "10.0.0.2",
00:16:39.907      "trsvcid": "4420"
00:16:39.907    },
00:16:39.907    "peer_address": {
00:16:39.907      "trtype": "TCP",
00:16:39.907      "adrfam": "IPv4",
00:16:39.907      "traddr": "10.0.0.1",
00:16:39.907      "trsvcid": "47534"
00:16:39.907    },
00:16:39.907    "auth": {
00:16:39.907      "state": "completed",
00:16:39.907      "digest": "sha512",
00:16:39.907      "dhgroup": "ffdhe6144"
00:16:39.907    }
00:16:39.907  }
00:16:39.907  ]'
00:16:39.907    04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:39.907   04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:39.907    04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:39.907   04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:39.907    04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:39.907   04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:39.907   04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:39.907   04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:40.164   04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=:
00:16:40.164   04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=:
00:16:41.113   04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:41.113  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:41.113   04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:41.114   04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:41.114   04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:41.114   04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
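The three checks traced at auth.sh@75-77 each run `jq` over the saved `qpairs` JSON and compare the first qpair's `auth.digest`, `auth.dhgroup`, and `auth.state` against the expected values. A dependency-free re-implementation of that verification step (the `check_auth` name is hypothetical; the real script uses `jq`, substring matching stands in for it here):

```shell
# Hypothetical stand-in for the jq-based checks at auth.sh@75-77. Instead of
# jq -r '.[0].auth.digest' etc., it matches the expected fields as substrings
# of the nvmf_subsystem_get_qpairs JSON, so the sketch runs with bash alone.
check_auth() {
  local qpairs=$1 digest=$2 dhgroup=$3
  [[ $qpairs == *"\"digest\": \"$digest\""* ]] &&
    [[ $qpairs == *"\"dhgroup\": \"$dhgroup\""* ]] &&
    [[ $qpairs == *'"state": "completed"'* ]] &&
    echo ok
}
```

Against the qpair shown above, `check_auth "$qpairs" sha512 ffdhe6144` prints `ok`; a mismatched digest or dhgroup produces no output and a non-zero status, which is what makes the autotest fail fast.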
00:16:41.114   04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:41.114   04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:41.114   04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:41.373   04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1
00:16:41.373   04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:41.373   04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:41.373   04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:16:41.373   04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:41.373   04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:41.373   04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:41.373   04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:41.373   04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:41.373   04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:41.373   04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:41.373   04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:41.373   04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:41.940  
00:16:41.940    04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:41.940    04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:41.940    04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:42.199   04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:42.199    04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:42.199    04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:42.199    04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:42.199    04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:42.199   04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:42.199  {
00:16:42.199    "cntlid": 131,
00:16:42.199    "qid": 0,
00:16:42.199    "state": "enabled",
00:16:42.199    "thread": "nvmf_tgt_poll_group_000",
00:16:42.199    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:16:42.199    "listen_address": {
00:16:42.199      "trtype": "TCP",
00:16:42.199      "adrfam": "IPv4",
00:16:42.199      "traddr": "10.0.0.2",
00:16:42.199      "trsvcid": "4420"
00:16:42.199    },
00:16:42.199    "peer_address": {
00:16:42.199      "trtype": "TCP",
00:16:42.199      "adrfam": "IPv4",
00:16:42.199      "traddr": "10.0.0.1",
00:16:42.199      "trsvcid": "47556"
00:16:42.199    },
00:16:42.199    "auth": {
00:16:42.199      "state": "completed",
00:16:42.199      "digest": "sha512",
00:16:42.199      "dhgroup": "ffdhe6144"
00:16:42.199    }
00:16:42.199  }
00:16:42.199  ]'
00:16:42.199    04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:42.199   04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:42.199    04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:42.457   04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:42.457    04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:42.457   04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:42.457   04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:42.457   04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:42.716   04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==:
00:16:42.716   04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==:
00:16:43.651   04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:43.651  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:43.651   04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:43.651   04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:43.651   04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:43.651   04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:43.651   04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:43.651   04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:43.651   04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:43.909   04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2
00:16:43.909   04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:43.909   04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:43.909   04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:16:43.909   04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:43.909   04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:43.909   04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:43.909   04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:43.909   04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:43.909   04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:43.909   04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:43.909   04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:43.909   04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:44.475  
00:16:44.475    04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:44.475    04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:44.475    04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:44.733   04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:44.733    04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:44.733    04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.733    04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:44.733    04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.733   04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:44.733  {
00:16:44.733    "cntlid": 133,
00:16:44.733    "qid": 0,
00:16:44.733    "state": "enabled",
00:16:44.733    "thread": "nvmf_tgt_poll_group_000",
00:16:44.733    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:16:44.733    "listen_address": {
00:16:44.733      "trtype": "TCP",
00:16:44.733      "adrfam": "IPv4",
00:16:44.733      "traddr": "10.0.0.2",
00:16:44.733      "trsvcid": "4420"
00:16:44.733    },
00:16:44.733    "peer_address": {
00:16:44.733      "trtype": "TCP",
00:16:44.733      "adrfam": "IPv4",
00:16:44.733      "traddr": "10.0.0.1",
00:16:44.733      "trsvcid": "47586"
00:16:44.733    },
00:16:44.733    "auth": {
00:16:44.733      "state": "completed",
00:16:44.733      "digest": "sha512",
00:16:44.733      "dhgroup": "ffdhe6144"
00:16:44.733    }
00:16:44.733  }
00:16:44.733  ]'
00:16:44.733    04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:44.733   04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:44.733    04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:44.733   04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:44.733    04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:44.733   04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:44.733   04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:44.733   04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:44.991   04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa:
00:16:44.991   04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa:
00:16:45.924   04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:45.924  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:45.924   04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:45.924   04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:45.924   04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:45.924   04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:45.924   04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:45.925   04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:45.925   04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:46.488   04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3
00:16:46.488   04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:46.488   04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:46.488   04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:16:46.488   04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:46.488   04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:46.489   04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:16:46.489   04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:46.489   04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:46.489   04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:46.489   04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:46.489   04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:46.489   04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:47.057  
00:16:47.057    04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:47.057    04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:47.057    04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:47.314   04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:47.314    04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:47.314    04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:47.314    04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:47.314    04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:47.314   04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:47.314  {
00:16:47.314    "cntlid": 135,
00:16:47.314    "qid": 0,
00:16:47.314    "state": "enabled",
00:16:47.314    "thread": "nvmf_tgt_poll_group_000",
00:16:47.314    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:16:47.314    "listen_address": {
00:16:47.314      "trtype": "TCP",
00:16:47.314      "adrfam": "IPv4",
00:16:47.314      "traddr": "10.0.0.2",
00:16:47.314      "trsvcid": "4420"
00:16:47.314    },
00:16:47.314    "peer_address": {
00:16:47.314      "trtype": "TCP",
00:16:47.314      "adrfam": "IPv4",
00:16:47.314      "traddr": "10.0.0.1",
00:16:47.314      "trsvcid": "47626"
00:16:47.314    },
00:16:47.314    "auth": {
00:16:47.314      "state": "completed",
00:16:47.314      "digest": "sha512",
00:16:47.314      "dhgroup": "ffdhe6144"
00:16:47.314    }
00:16:47.314  }
00:16:47.314  ]'
00:16:47.314    04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:47.314   04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:47.315    04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:47.315   04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:47.315    04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:47.315   04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:47.315   04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:47.315   04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:47.581   04:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:
00:16:47.581   04:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:
00:16:48.668   04:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:48.668  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:48.668   04:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:48.668   04:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:48.668   04:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:48.668   04:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
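The iteration above validates the negotiated auth parameters by feeding the `nvmf_subsystem_get_qpairs` JSON through `jq` (the `target/auth.sh@75`–`@77` checks). A minimal standalone sketch of that verification, using a payload shaped like the qpairs array captured in the trace (values abridged):

```shell
# Sample qpairs payload modeled on the log output above (abridged).
qpairs='[{"cntlid":135,"qid":0,"state":"enabled","auth":{"state":"completed","digest":"sha512","dhgroup":"ffdhe6144"}}]'

# Extract the negotiated auth fields, as auth.sh does with jq -r.
digest=$(echo "$qpairs" | jq -r '.[0].auth.digest')
dhgroup=$(echo "$qpairs" | jq -r '.[0].auth.dhgroup')
state=$(echo "$qpairs" | jq -r '.[0].auth.state')

# Assert the qpair completed authentication with the expected parameters.
[[ $digest == sha512 && $dhgroup == ffdhe6144 && $state == completed ]] && echo "auth OK"
# prints "auth OK"
```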
00:16:48.668   04:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:16:48.668   04:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:48.668   04:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:16:48.668   04:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:16:48.668   04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0
00:16:48.668   04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:48.668   04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:48.668   04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:16:48.668   04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:48.668   04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:48.668   04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:48.668   04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:48.668   04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:48.668   04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:48.668   04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:48.668   04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:48.668   04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:49.600  
00:16:49.600    04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:49.600    04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:49.600    04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:49.858   04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:49.858    04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:49.858    04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:49.858    04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:49.858    04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:49.858   04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:49.858  {
00:16:49.858  "cntlid": 137,
00:16:49.858  "qid": 0,
00:16:49.858  "state": "enabled",
00:16:49.858  "thread": "nvmf_tgt_poll_group_000",
00:16:49.858  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:16:49.858  "listen_address": {
00:16:49.858  "trtype": "TCP",
00:16:49.858  "adrfam": "IPv4",
00:16:49.858  "traddr": "10.0.0.2",
00:16:49.858  "trsvcid": "4420"
00:16:49.858  },
00:16:49.858  "peer_address": {
00:16:49.858  "trtype": "TCP",
00:16:49.858  "adrfam": "IPv4",
00:16:49.858  "traddr": "10.0.0.1",
00:16:49.858  "trsvcid": "40622"
00:16:49.858  },
00:16:49.858  "auth": {
00:16:49.858  "state": "completed",
00:16:49.858  "digest": "sha512",
00:16:49.858  "dhgroup": "ffdhe8192"
00:16:49.858  }
00:16:49.858  }
00:16:49.858  ]'
00:16:49.858    04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:49.858   04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:49.858    04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:49.858   04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:49.858    04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:50.115   04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:50.115   04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:50.115   04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:50.373   04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=:
00:16:50.373   04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=:
00:16:51.305   04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:51.305  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:51.305   04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:51.305   04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:51.305   04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:51.305   04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:51.305   04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:51.305   04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:16:51.305   04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:16:51.563   04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1
00:16:51.563   04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:51.563   04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:51.563   04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:16:51.563   04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:51.563   04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:51.563   04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:51.563   04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:51.563   04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:51.563   04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:51.563   04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:51.563   04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:51.563   04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:52.496  
00:16:52.496    04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:52.496    04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:52.496    04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:52.496   04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:52.496    04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:52.496    04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:52.496    04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:52.496    04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:52.496   04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:52.496  {
00:16:52.496  "cntlid": 139,
00:16:52.496  "qid": 0,
00:16:52.496  "state": "enabled",
00:16:52.496  "thread": "nvmf_tgt_poll_group_000",
00:16:52.496  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:16:52.496  "listen_address": {
00:16:52.496  "trtype": "TCP",
00:16:52.496  "adrfam": "IPv4",
00:16:52.496  "traddr": "10.0.0.2",
00:16:52.496  "trsvcid": "4420"
00:16:52.496  },
00:16:52.496  "peer_address": {
00:16:52.496  "trtype": "TCP",
00:16:52.496  "adrfam": "IPv4",
00:16:52.496  "traddr": "10.0.0.1",
00:16:52.496  "trsvcid": "40646"
00:16:52.496  },
00:16:52.496  "auth": {
00:16:52.496  "state": "completed",
00:16:52.496  "digest": "sha512",
00:16:52.496  "dhgroup": "ffdhe8192"
00:16:52.496  }
00:16:52.496  }
00:16:52.496  ]'
00:16:52.496    04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:52.753   04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:52.753    04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:52.753   04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:52.753    04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:52.753   04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:52.753   04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:52.754   04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:53.011   04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==:
00:16:53.011   04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==:
00:16:53.944   04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:53.944  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:53.944   04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:53.944   04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:53.944   04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:53.944   04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
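The `--dhchap-secret` strings passed to `nvme connect` above follow the NVMe DH-HMAC-CHAP secret representation: colon-separated fields where the second field identifies the hash used to transform the key (`01`/`02`/`03` appear to correspond to SHA-256/SHA-384/SHA-512, `00` to no transform), followed by the base64-encoded key material. A small sketch splitting a secret from the log into its fields (the field meanings are stated here as an interpretation, not taken from the log itself):

```shell
# A DHHC-1 secret copied from the trace above.
secret='DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:'

# Split on ':' — base64 never contains a colon, so this is unambiguous.
IFS=: read -r magic hash b64 _ <<< "$secret"

echo "$magic $hash"
# prints "DHHC-1 03"
```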
00:16:53.944   04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:53.945   04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:16:53.945   04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:16:54.202   04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2
00:16:54.202   04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:54.202   04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:54.202   04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:16:54.202   04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:54.202   04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:54.202   04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:54.202   04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:54.202   04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:54.202   04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:54.202   04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:54.202   04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:54.202   04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:55.133  
00:16:55.133    04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:55.133    04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:55.133    04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:55.389   04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:55.389    04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:55.389    04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:55.389    04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:55.389    04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:55.389   04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:55.389  {
00:16:55.389  "cntlid": 141,
00:16:55.389  "qid": 0,
00:16:55.389  "state": "enabled",
00:16:55.389  "thread": "nvmf_tgt_poll_group_000",
00:16:55.389  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:16:55.389  "listen_address": {
00:16:55.389  "trtype": "TCP",
00:16:55.389  "adrfam": "IPv4",
00:16:55.389  "traddr": "10.0.0.2",
00:16:55.389  "trsvcid": "4420"
00:16:55.389  },
00:16:55.389  "peer_address": {
00:16:55.389  "trtype": "TCP",
00:16:55.389  "adrfam": "IPv4",
00:16:55.389  "traddr": "10.0.0.1",
00:16:55.389  "trsvcid": "40668"
00:16:55.389  },
00:16:55.389  "auth": {
00:16:55.389  "state": "completed",
00:16:55.389  "digest": "sha512",
00:16:55.389  "dhgroup": "ffdhe8192"
00:16:55.389  }
00:16:55.389  }
00:16:55.389  ]'
00:16:55.389    04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:55.389   04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:55.389    04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:55.389   04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:55.389    04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:55.389   04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:55.389   04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:55.389   04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:55.646   04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa:
00:16:55.646   04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa:
00:16:56.576   04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:56.576  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:56.576   04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:56.576   04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:56.576   04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:56.576   04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:56.576   04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:56.576   04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:16:56.576   04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:16:56.832   04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3
00:16:56.832   04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:56.832   04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:56.832   04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:16:56.832   04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:56.832   04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:56.832   04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:16:56.832   04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:56.832   04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:56.832   04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:56.832   04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:56.832   04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:56.832   04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
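Note that for key3 the `nvmf_subsystem_add_host` and `bdev_nvme_attach_controller` calls above carry no `--dhchap-ctrlr-key`: the `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` line at `target/auth.sh@68` expands to an empty array when no controller key is configured for that index. A sketch of that `${var:+…}` idiom with hypothetical placeholder values (only the empty key3 slot mirrors the log):

```shell
# Hypothetical ctrlr-key table; index 3 is empty, matching the trace
# where key3 is added without a --dhchap-ctrlr-key argument.
ckeys=([0]="ck0" [1]="ck1" [2]="ck2" [3]="")

# ${ckeys[i]:+...} expands to the option words only when ckeys[i] is non-empty.
args1=(${ckeys[1]:+--dhchap-ctrlr-key "ckey1"})
args3=(${ckeys[3]:+--dhchap-ctrlr-key "ckey3"})

echo "key1: ${#args1[@]} args, key3: ${#args3[@]} args"
# prints "key1: 2 args, key3: 0 args"
```

This lets the same `connect_authenticate` body serve both unidirectional and bidirectional auth cases without branching.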
00:16:57.760  
00:16:57.760    04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:57.760    04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:57.760    04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:58.018   04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:58.018    04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:58.018    04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:58.018    04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:58.018    04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:58.018   04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:58.018  {
00:16:58.018  "cntlid": 143,
00:16:58.018  "qid": 0,
00:16:58.018  "state": "enabled",
00:16:58.018  "thread": "nvmf_tgt_poll_group_000",
00:16:58.018  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:16:58.018  "listen_address": {
00:16:58.018  "trtype": "TCP",
00:16:58.018  "adrfam": "IPv4",
00:16:58.018  "traddr": "10.0.0.2",
00:16:58.018  "trsvcid": "4420"
00:16:58.018  },
00:16:58.018  "peer_address": {
00:16:58.018  "trtype": "TCP",
00:16:58.018  "adrfam": "IPv4",
00:16:58.018  "traddr": "10.0.0.1",
00:16:58.018  "trsvcid": "43548"
00:16:58.018  },
00:16:58.018  "auth": {
00:16:58.018  "state": "completed",
00:16:58.018  "digest": "sha512",
00:16:58.018  "dhgroup": "ffdhe8192"
00:16:58.018  }
00:16:58.018  }
00:16:58.018  ]'
00:16:58.018    04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:58.018   04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:58.018    04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:58.018   04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:58.018    04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:58.018   04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:58.018   04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:58.018   04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:58.583   04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:
00:16:58.583   04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:
00:16:59.149   04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:59.407  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:59.407   04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:59.407   04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:59.407   04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:59.407   04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:59.407    04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=,
00:16:59.407    04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512
00:16:59.407    04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=,
00:16:59.407    04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:16:59.407   04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:16:59.407   04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
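The subshell lines tagged `auth.sh@129`/`@130` above build the comma-separated digest and dhgroup arguments for `bdev_nvme_set_options` by temporarily setting `IFS=,` and expanding the array with `"$*"`. A standalone sketch of that join (the helper name `join_by_comma` is introduced here for illustration):

```shell
# Rebuild of the comma-join visible at auth.sh@129-130: with IFS=, set
# locally, "$*" concatenates array elements with a comma between them.
digests=(sha256 sha384 sha512)
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
join_by_comma() { local IFS=,; printf %s "$*"; }
join_by_comma "${digests[@]}"    # sha256,sha384,sha512
echo
join_by_comma "${dhgroups[@]}"   # null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
```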
00:16:59.664   04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0
00:16:59.664   04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:59.664   04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:59.664   04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:16:59.664   04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:59.664   04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:59.664   04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:59.664   04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:59.664   04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:59.664   04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:59.664   04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:59.664   04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:59.664   04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:00.597  
00:17:00.597    04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:00.597    04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:00.597    04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:00.597   04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:00.597    04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:00.597    04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:00.597    04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:00.597    04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:00.597   04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:00.597  {
00:17:00.597  "cntlid": 145,
00:17:00.597  "qid": 0,
00:17:00.597  "state": "enabled",
00:17:00.597  "thread": "nvmf_tgt_poll_group_000",
00:17:00.597  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:17:00.597  "listen_address": {
00:17:00.597  "trtype": "TCP",
00:17:00.597  "adrfam": "IPv4",
00:17:00.597  "traddr": "10.0.0.2",
00:17:00.597  "trsvcid": "4420"
00:17:00.597  },
00:17:00.597  "peer_address": {
00:17:00.597  "trtype": "TCP",
00:17:00.597  "adrfam": "IPv4",
00:17:00.597  "traddr": "10.0.0.1",
00:17:00.597  "trsvcid": "43580"
00:17:00.597  },
00:17:00.597  "auth": {
00:17:00.597  "state": "completed",
00:17:00.597  "digest": "sha512",
00:17:00.597  "dhgroup": "ffdhe8192"
00:17:00.597  }
00:17:00.598  }
00:17:00.598  ]'
00:17:00.598    04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:00.598   04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:00.598    04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:00.856   04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:00.856    04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:00.856   04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
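The three `[[ … ]]` checks above (auth.sh@75-77) compare `jq` extractions from the captured qpairs JSON against the expected digest, dhgroup, and auth state. A self-contained sketch of the same extraction, with `sed` standing in for `jq` so it needs no extra tooling, and the JSON reduced to the `auth` object from this run's qpair:

```shell
# Pull .auth.digest/.dhgroup/.state out of the qpair payload shown above.
qpairs='{"auth": {"state": "completed", "digest": "sha512", "dhgroup": "ffdhe8192"}}'
field() { printf '%s\n' "$qpairs" | sed -n "s/.*\"$1\": \"\([^\"]*\)\".*/\1/p"; }
digest=$(field digest)
dhgroup=$(field dhgroup)
state=$(field state)
printf '%s %s %s\n' "$digest" "$dhgroup" "$state"   # sha512 ffdhe8192 completed
```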
00:17:00.856   04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:00.856   04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:01.113   04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=:
00:17:01.113   04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=:
00:17:02.048   04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:02.048  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:02.048   04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:17:02.048   04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:02.049   04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:02.049   04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:02.049   04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1
00:17:02.049   04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:02.049   04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:02.049   04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:02.049   04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2
00:17:02.049   04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:17:02.049   04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2
00:17:02.049   04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:17:02.049   04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:02.049    04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:17:02.049   04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:02.049   04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2
00:17:02.049   04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2
00:17:02.049   04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2
00:17:02.983  request:
00:17:02.983  {
00:17:02.983    "name": "nvme0",
00:17:02.983    "trtype": "tcp",
00:17:02.983    "traddr": "10.0.0.2",
00:17:02.983    "adrfam": "ipv4",
00:17:02.983    "trsvcid": "4420",
00:17:02.983    "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:17:02.983    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:17:02.983    "prchk_reftag": false,
00:17:02.983    "prchk_guard": false,
00:17:02.983    "hdgst": false,
00:17:02.983    "ddgst": false,
00:17:02.983    "dhchap_key": "key2",
00:17:02.983    "allow_unrecognized_csi": false,
00:17:02.983    "method": "bdev_nvme_attach_controller",
00:17:02.983    "req_id": 1
00:17:02.983  }
00:17:02.983  Got JSON-RPC error response
00:17:02.983  response:
00:17:02.983  {
00:17:02.983    "code": -5,
00:17:02.983    "message": "Input/output error"
00:17:02.983  }
00:17:02.983   04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:17:02.983   04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:02.983   04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:02.983   04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
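The `NOT` wrapper traced above (common/autotest_common.sh@652-679) runs a command that is expected to fail — here the `bdev_nvme_attach_controller` with the wrong DH-HCHAP key — captures its exit status in `es`, and passes only when that status is nonzero, while statuses above 128 (signal deaths) are propagated unchanged. A minimal sketch of the idea, not the actual autotest implementation:

```shell
# Negation helper: succeed only when the wrapped command fails, mirroring
# how the trace treats the expected I/O error (es=1) as a pass.
NOT() {
  local es=0
  "$@" || es=$?                     # capture exit status without aborting
  if (( es > 128 )); then           # a signal death is never an expected failure
    return "$es"
  fi
  (( es != 0 ))                     # return 0 only if the command failed
}
NOT false && echo 'expected failure observed'
```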
00:17:02.983   04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:17:02.983   04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:02.983   04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:02.983   04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:02.983   04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:02.983   04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:02.983   04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:02.983   04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:02.983   04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:17:02.983   04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:17:02.983   04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:17:02.983   04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:17:02.983   04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:02.983    04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:17:02.983   04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:02.983   04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:17:02.983   04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:17:02.983   04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:17:03.566  request:
00:17:03.566  {
00:17:03.566    "name": "nvme0",
00:17:03.566    "trtype": "tcp",
00:17:03.566    "traddr": "10.0.0.2",
00:17:03.566    "adrfam": "ipv4",
00:17:03.566    "trsvcid": "4420",
00:17:03.566    "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:17:03.566    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:17:03.566    "prchk_reftag": false,
00:17:03.566    "prchk_guard": false,
00:17:03.566    "hdgst": false,
00:17:03.566    "ddgst": false,
00:17:03.566    "dhchap_key": "key1",
00:17:03.566    "dhchap_ctrlr_key": "ckey2",
00:17:03.566    "allow_unrecognized_csi": false,
00:17:03.566    "method": "bdev_nvme_attach_controller",
00:17:03.566    "req_id": 1
00:17:03.566  }
00:17:03.566  Got JSON-RPC error response
00:17:03.566  response:
00:17:03.566  {
00:17:03.566    "code": -5,
00:17:03.566    "message": "Input/output error"
00:17:03.566  }
00:17:03.566   04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:17:03.566   04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:03.566   04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:03.566   04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:03.566   04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:17:03.566   04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:03.566   04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:03.566   04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:03.566   04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1
00:17:03.566   04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:03.566   04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:03.566   04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:03.566   04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:03.566   04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:17:03.566   04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:03.566   04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:17:03.566   04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:03.566    04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:17:03.566   04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:03.566   04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:03.566   04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:03.566   04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:04.498  request:
00:17:04.498  {
00:17:04.498    "name": "nvme0",
00:17:04.498    "trtype": "tcp",
00:17:04.498    "traddr": "10.0.0.2",
00:17:04.498    "adrfam": "ipv4",
00:17:04.498    "trsvcid": "4420",
00:17:04.498    "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:17:04.498    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:17:04.498    "prchk_reftag": false,
00:17:04.498    "prchk_guard": false,
00:17:04.498    "hdgst": false,
00:17:04.498    "ddgst": false,
00:17:04.498    "dhchap_key": "key1",
00:17:04.498    "dhchap_ctrlr_key": "ckey1",
00:17:04.498    "allow_unrecognized_csi": false,
00:17:04.498    "method": "bdev_nvme_attach_controller",
00:17:04.498    "req_id": 1
00:17:04.498  }
00:17:04.498  Got JSON-RPC error response
00:17:04.498  response:
00:17:04.498  {
00:17:04.498    "code": -5,
00:17:04.498    "message": "Input/output error"
00:17:04.498  }
00:17:04.498   04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:17:04.498   04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:04.498   04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:04.498   04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:04.498   04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:17:04.498   04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:04.498   04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:04.498   04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:04.498   04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 217205
00:17:04.498   04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 217205 ']'
00:17:04.498   04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 217205
00:17:04.498    04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname
00:17:04.498   04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:04.498    04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 217205
00:17:04.498   04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:04.498   04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:04.498   04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 217205'
00:17:04.498  killing process with pid 217205
00:17:04.498   04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 217205
00:17:04.498   04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 217205
00:17:04.755   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth
00:17:04.755   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:17:04.755   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:17:04.755   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:04.755   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=240449
00:17:04.755   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth
00:17:04.755   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 240449
00:17:04.755   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 240449 ']'
00:17:04.755   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:04.755   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:04.755   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:04.755   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:04.755   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:05.013   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:05.013   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:17:05.013   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:17:05.013   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:05.013   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:05.013   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:05.013   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT
00:17:05.013   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 240449
00:17:05.013   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 240449 ']'
00:17:05.013   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:05.013   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:05.013   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:05.013  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:05.013   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:05.013   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:05.271   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:05.271   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:17:05.271   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd
00:17:05.271   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:05.271   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:05.528  null0
00:17:05.528   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:05.528   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:17:05.528   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.PM3
00:17:05.528   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:05.528   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:05.528   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:05.528   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.RCN ]]
00:17:05.528   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.RCN
00:17:05.528   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:05.528   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:05.528   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:05.528   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:17:05.528   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.tVR
00:17:05.528   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:05.528   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:05.528   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:05.528   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.1NG ]]
00:17:05.528   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1NG
00:17:05.528   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:05.528   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:05.528   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:05.528   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:17:05.528   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.r4l
00:17:05.528   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:05.528   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:05.528   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:05.528   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Fwb ]]
00:17:05.528   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Fwb
00:17:05.528   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:05.528   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:05.528   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:05.528   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:17:05.528   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.y9n
00:17:05.529   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:05.529   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:05.529   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:05.529   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]]
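The loop traced at `auth.sh@174-176` above registers each generated key file with the keyring, plus its controller counterpart when one exists; key3 has no ckey, hence the final `[[ -n '' ]]` miss. A dry-run sketch of that loop using the throwaway temp-file names from this run, printing the RPC invocations instead of calling `rpc.py`:

```shell
# Dry-run of the keyring registration sequence shown above: 4 host keys,
# 3 controller keys (ckey3 is empty), so 7 keyring_file_add_key RPCs total.
keys=(/tmp/spdk.key-null.PM3 /tmp/spdk.key-sha256.tVR /tmp/spdk.key-sha384.r4l /tmp/spdk.key-sha512.y9n)
ckeys=(/tmp/spdk.key-sha512.RCN /tmp/spdk.key-sha384.1NG /tmp/spdk.key-sha256.Fwb '')
rpcs=0
for i in "${!keys[@]}"; do
  echo "keyring_file_add_key key$i ${keys[$i]}"
  rpcs=$((rpcs + 1))
  if [[ -n ${ckeys[$i]} ]]; then
    echo "keyring_file_add_key ckey$i ${ckeys[$i]}"
    rpcs=$((rpcs + 1))
  fi
done
echo "$rpcs"   # 7
```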
00:17:05.529   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3
00:17:05.529   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:05.529   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:05.529   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:17:05.529   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:05.529   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:05.529   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:17:05.529   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:05.529   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:05.529   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:05.529   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:05.529   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:05.529   04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:06.908  nvme0n1
00:17:06.908    04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:06.908    04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:06.908    04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:07.165   04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:07.165    04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:07.165    04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:07.165    04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:07.165    04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:07.165   04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:07.165  {
00:17:07.165  "cntlid": 1,
00:17:07.165  "qid": 0,
00:17:07.165  "state": "enabled",
00:17:07.165  "thread": "nvmf_tgt_poll_group_000",
00:17:07.165  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:17:07.165  "listen_address": {
00:17:07.165  "trtype": "TCP",
00:17:07.165  "adrfam": "IPv4",
00:17:07.165  "traddr": "10.0.0.2",
00:17:07.165  "trsvcid": "4420"
00:17:07.165  },
00:17:07.165  "peer_address": {
00:17:07.165  "trtype": "TCP",
00:17:07.165  "adrfam": "IPv4",
00:17:07.165  "traddr": "10.0.0.1",
00:17:07.165  "trsvcid": "43638"
00:17:07.165  },
00:17:07.165  "auth": {
00:17:07.165  "state": "completed",
00:17:07.165  "digest": "sha512",
00:17:07.165  "dhgroup": "ffdhe8192"
00:17:07.165  }
00:17:07.165  }
00:17:07.165  ]'
00:17:07.165    04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:07.165   04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:07.165    04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:07.165   04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:07.165    04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:07.165   04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:07.165   04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:07.165   04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:07.422   04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:
00:17:07.422   04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:
00:17:08.353   04:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:08.353  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:08.353   04:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:17:08.353   04:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:08.353   04:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:08.353   04:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:08.353   04:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:17:08.353   04:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:08.353   04:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:08.353   04:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:08.353   04:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256
00:17:08.353   04:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
00:17:08.610   04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:17:08.610   04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:17:08.610   04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:17:08.610   04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:17:08.610   04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:08.610    04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:17:08.610   04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:08.610   04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:08.610   04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:08.610   04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:08.866  request:
00:17:08.866  {
00:17:08.866    "name": "nvme0",
00:17:08.866    "trtype": "tcp",
00:17:08.866    "traddr": "10.0.0.2",
00:17:08.866    "adrfam": "ipv4",
00:17:08.866    "trsvcid": "4420",
00:17:08.866    "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:17:08.866    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:17:08.866    "prchk_reftag": false,
00:17:08.866    "prchk_guard": false,
00:17:08.866    "hdgst": false,
00:17:08.866    "ddgst": false,
00:17:08.866    "dhchap_key": "key3",
00:17:08.866    "allow_unrecognized_csi": false,
00:17:08.866    "method": "bdev_nvme_attach_controller",
00:17:08.866    "req_id": 1
00:17:08.866  }
00:17:08.867  Got JSON-RPC error response
00:17:08.867  response:
00:17:08.867  {
00:17:08.867    "code": -5,
00:17:08.867    "message": "Input/output error"
00:17:08.867  }
00:17:09.123   04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:17:09.123   04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:09.123   04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:09.123   04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:09.123    04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=,
00:17:09.123    04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512
00:17:09.123   04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:17:09.123   04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:17:09.381   04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:17:09.381   04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:17:09.381   04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:17:09.381   04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:17:09.381   04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:09.381    04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:17:09.381   04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:09.381   04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:09.381   04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:09.381   04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:09.638  request:
00:17:09.638  {
00:17:09.638    "name": "nvme0",
00:17:09.638    "trtype": "tcp",
00:17:09.638    "traddr": "10.0.0.2",
00:17:09.639    "adrfam": "ipv4",
00:17:09.639    "trsvcid": "4420",
00:17:09.639    "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:17:09.639    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:17:09.639    "prchk_reftag": false,
00:17:09.639    "prchk_guard": false,
00:17:09.639    "hdgst": false,
00:17:09.639    "ddgst": false,
00:17:09.639    "dhchap_key": "key3",
00:17:09.639    "allow_unrecognized_csi": false,
00:17:09.639    "method": "bdev_nvme_attach_controller",
00:17:09.639    "req_id": 1
00:17:09.639  }
00:17:09.639  Got JSON-RPC error response
00:17:09.639  response:
00:17:09.639  {
00:17:09.639    "code": -5,
00:17:09.639    "message": "Input/output error"
00:17:09.639  }
00:17:09.639   04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:17:09.639   04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:09.639   04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:09.639   04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:09.639    04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:17:09.639    04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512
00:17:09.639    04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:17:09.639    04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:17:09.639   04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:17:09.639   04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:17:09.896   04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:17:09.896   04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:09.896   04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:09.896   04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:09.896   04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:17:09.896   04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:09.896   04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:09.896   04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:09.896   04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:17:09.896   04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:17:09.896   04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:17:09.896   04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:17:09.896   04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:09.896    04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:17:09.896   04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:09.896   04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:17:09.896   04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:17:09.896   04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:17:10.461  request:
00:17:10.461  {
00:17:10.461    "name": "nvme0",
00:17:10.461    "trtype": "tcp",
00:17:10.461    "traddr": "10.0.0.2",
00:17:10.461    "adrfam": "ipv4",
00:17:10.461    "trsvcid": "4420",
00:17:10.461    "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:17:10.461    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:17:10.461    "prchk_reftag": false,
00:17:10.461    "prchk_guard": false,
00:17:10.461    "hdgst": false,
00:17:10.461    "ddgst": false,
00:17:10.461    "dhchap_key": "key0",
00:17:10.461    "dhchap_ctrlr_key": "key1",
00:17:10.461    "allow_unrecognized_csi": false,
00:17:10.461    "method": "bdev_nvme_attach_controller",
00:17:10.461    "req_id": 1
00:17:10.461  }
00:17:10.461  Got JSON-RPC error response
00:17:10.461  response:
00:17:10.461  {
00:17:10.461    "code": -5,
00:17:10.461    "message": "Input/output error"
00:17:10.461  }
00:17:10.461   04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:17:10.461   04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:10.461   04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:10.461   04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:10.461   04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0
00:17:10.461   04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:17:10.461   04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:17:10.719  nvme0n1
00:17:10.719    04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers
00:17:10.719    04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name'
00:17:10.719    04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:10.976   04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:10.976   04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:10.976   04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:11.234   04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1
00:17:11.234   04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:11.234   04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:11.234   04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:11.234   04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1
00:17:11.234   04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:17:11.234   04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:17:12.609  nvme0n1
00:17:12.609    04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers
00:17:12.609    04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name'
00:17:12.609    04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:12.868   04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:12.868   04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3
00:17:12.868   04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:12.868   04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:12.868   04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:12.868    04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name'
00:17:12.868    04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers
00:17:12.868    04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:13.126   04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:13.126   04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:
00:17:13.126   04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=:
00:17:14.058    04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr
00:17:14.058    04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev
00:17:14.058    04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme*
00:17:14.058    04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]]
00:17:14.058    04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0
00:17:14.058    04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break
00:17:14.058   04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0
00:17:14.058   04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:14.058   04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:14.315   04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1
00:17:14.316   04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:17:14.316   04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1
00:17:14.316   04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:17:14.316   04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:14.316    04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:17:14.316   04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:14.316   04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1
00:17:14.316   04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:17:14.316   04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:17:15.288  request:
00:17:15.288  {
00:17:15.288    "name": "nvme0",
00:17:15.288    "trtype": "tcp",
00:17:15.288    "traddr": "10.0.0.2",
00:17:15.288    "adrfam": "ipv4",
00:17:15.288    "trsvcid": "4420",
00:17:15.288    "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:17:15.288    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:17:15.288    "prchk_reftag": false,
00:17:15.288    "prchk_guard": false,
00:17:15.288    "hdgst": false,
00:17:15.288    "ddgst": false,
00:17:15.288    "dhchap_key": "key1",
00:17:15.288    "allow_unrecognized_csi": false,
00:17:15.288    "method": "bdev_nvme_attach_controller",
00:17:15.288    "req_id": 1
00:17:15.288  }
00:17:15.288  Got JSON-RPC error response
00:17:15.288  response:
00:17:15.288  {
00:17:15.288    "code": -5,
00:17:15.288    "message": "Input/output error"
00:17:15.288  }
00:17:15.288   04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:17:15.288   04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:15.288   04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:15.288   04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:15.288   04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:17:15.288   04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:17:15.288   04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:17:16.659  nvme0n1
00:17:16.659    04:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers
00:17:16.659    04:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:16.659    04:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name'
00:17:16.915   04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:16.915   04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:16.915   04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:17.172   04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:17:17.172   04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:17.172   04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:17.172   04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:17.172   04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0
00:17:17.172   04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:17:17.172   04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:17:17.430  nvme0n1
00:17:17.430    04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers
00:17:17.430    04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name'
00:17:17.430    04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:17.687   04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:17.687   04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:17.687   04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:17.945   04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3
00:17:17.945   04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:17.945   04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:17.945   04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:17.945   04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: '' 2s
00:17:17.945   04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:17:17.945   04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:17:17.945   04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy:
00:17:17.945   04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=
00:17:17.945   04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:17:17.945   04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:17:17.945   04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: ]]
00:17:17.945   04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy:
00:17:17.945   04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]]
00:17:17.945   04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:17:17.945   04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:17:20.471   04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1
00:17:20.471   04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0
00:17:20.471   04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:17:20.471   04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1
00:17:20.471   04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:17:20.471   04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1
00:17:20.471   04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0
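The `waitforblk nvme0n1` sequence above polls `lsblk -l -o NAME | grep -q -w` until the namespace appears after the key update. A hedged sketch of that retry loop follows; the `list_cmd` parameter is an addition of this sketch (the real helper in `common/autotest_common.sh` is hard-wired to `lsblk`) so the loop can be exercised without an actual block device:

```shell
#!/usr/bin/env bash
# Poll a listing command until the device name appears as an exact word,
# giving up after a bounded number of attempts. waitforblk in the SPDK
# harness does the same thing against `lsblk -l -o NAME`.
waitforblk_sketch() {
    local name=$1 list_cmd=${2:-"lsblk -l -o NAME"} tries=${3:-50} i=0
    while ! $list_cmd 2>/dev/null | grep -q -w "$name"; do
        ((++i >= tries)) && return 1   # device never showed up
        sleep 0.1
    done
    return 0
}

# Exercise the loop with a stand-in listing command instead of lsblk:
waitforblk_sketch nvme0n1 "echo nvme0n1" && echo "device present"
```

The `grep -q -w` matters: `-w` prevents `nvme0n1` from matching a longer name such as a partition entry.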
00:17:20.471   04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2
00:17:20.471   04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:20.471   04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:20.471   04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:20.471   04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: 2s
00:17:20.471   04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:17:20.471   04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:17:20.471   04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=
00:17:20.471   04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==:
00:17:20.471   04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:17:20.471   04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:17:20.471   04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]]
00:17:20.471   04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: ]]
00:17:20.471   04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==:
00:17:20.471   04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:17:20.471   04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:17:22.371   04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1
00:17:22.371   04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0
00:17:22.371   04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:17:22.371   04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1
00:17:22.371   04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:17:22.371   04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1
00:17:22.371   04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0
00:17:22.371   04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:22.371  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:22.371   04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1
00:17:22.371   04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:22.371   04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:22.371   04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:22.371   04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:17:22.371   04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:17:22.371   04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:17:23.743  nvme0n1
00:17:23.743   04:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3
00:17:23.743   04:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:23.743   04:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:23.743   04:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:23.743   04:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:17:23.743   04:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:17:24.306    04:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers
00:17:24.306    04:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name'
00:17:24.306    04:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:24.562   04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:24.562   04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:17:24.562   04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:24.562   04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:24.562   04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:24.562   04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0
00:17:24.562   04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0
00:17:24.820    04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers
00:17:24.820    04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:24.820    04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name'
00:17:25.077   04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:25.077   04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3
00:17:25.077   04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:25.077   04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:25.077   04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:25.077   04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:17:25.077   04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:17:25.077   04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:17:25.077   04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc
00:17:25.077   04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:25.077    04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc
00:17:25.077   04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:25.077   04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:17:25.077   04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:17:26.007  request:
00:17:26.007  {
00:17:26.007    "name": "nvme0",
00:17:26.007    "dhchap_key": "key1",
00:17:26.007    "dhchap_ctrlr_key": "key3",
00:17:26.007    "method": "bdev_nvme_set_keys",
00:17:26.007    "req_id": 1
00:17:26.007  }
00:17:26.007  Got JSON-RPC error response
00:17:26.007  response:
00:17:26.007  {
00:17:26.007    "code": -13,
00:17:26.007    "message": "Permission denied"
00:17:26.007  }
00:17:26.007   04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:17:26.007   04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:26.007   04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:26.007   04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
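The `NOT`/`valid_exec_arg` dance above inverts the RPC's exit status: `bdev_nvme_set_keys ... --dhchap-key key1` is *expected* to fail with `-13` (Permission denied) because key1 is no longer configured for this host, and the harness counts that failure as a pass. A simplified re-implementation of the inversion helper (the real one in `common/autotest_common.sh` also validates the argument type and special-cases exit statuses above 128, as the `(( es > 128 ))` line shows):

```shell
#!/usr/bin/env bash
# NOT cmd [args...]: succeed only when the wrapped command fails.
NOT() {
    local es=0
    "$@" || es=$?
    # Invert the result: a non-zero exit status becomes success.
    (( es != 0 ))
}

NOT false && echo "expected failure observed"
NOT true  || echo "unexpected success flagged"
```

Under this convention a negative test like the permission check reads naturally: the test passes precisely when the RPC is rejected.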
00:17:26.007    04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:17:26.007    04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:17:26.007    04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:26.263   04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 ))
00:17:26.263   04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s
00:17:27.193    04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:17:27.193    04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:17:27.193    04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:27.450   04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 ))
00:17:27.450   04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1
00:17:27.450   04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:27.450   04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:27.450   04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:27.450   04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:17:27.450   04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:17:27.450   04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:17:28.821  nvme0n1
00:17:28.821   04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3
00:17:28.821   04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:28.821   04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:28.821   04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:28.821   04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:17:28.821   04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:17:28.821   04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:17:28.821   04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc
00:17:28.821   04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:28.821    04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc
00:17:28.821   04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:28.821   04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:17:28.821   04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:17:29.754  request:
00:17:29.754  {
00:17:29.754    "name": "nvme0",
00:17:29.754    "dhchap_key": "key2",
00:17:29.754    "dhchap_ctrlr_key": "key0",
00:17:29.754    "method": "bdev_nvme_set_keys",
00:17:29.754    "req_id": 1
00:17:29.754  }
00:17:29.754  Got JSON-RPC error response
00:17:29.754  response:
00:17:29.754  {
00:17:29.754    "code": -13,
00:17:29.754    "message": "Permission denied"
00:17:29.754  }
00:17:29.754   04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:17:29.754   04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:29.754   04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:29.754   04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:29.754    04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:17:29.754    04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:29.754    04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length
00:17:30.012   04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 ))
00:17:30.012   04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s
00:17:30.944    04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:17:30.944    04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length
00:17:30.944    04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:31.203   04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 ))
00:17:31.203   04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT
00:17:31.203   04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup
00:17:31.203   04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 217228
00:17:31.203   04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 217228 ']'
00:17:31.203   04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 217228
00:17:31.203    04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname
00:17:31.203   04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:31.203    04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 217228
00:17:31.461   04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:17:31.461   04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:17:31.461   04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 217228'
00:17:31.461  killing process with pid 217228
00:17:31.461   04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 217228
00:17:31.461   04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 217228
00:17:31.718   04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini
00:17:31.718   04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:17:31.718   04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync
00:17:31.718   04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:17:31.719   04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e
00:17:31.719   04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:17:31.719   04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:17:31.719  rmmod nvme_tcp
00:17:31.719  rmmod nvme_fabrics
00:17:31.719  rmmod nvme_keyring
00:17:31.719   04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:17:31.719   04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e
00:17:31.719   04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0
00:17:31.719   04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 240449 ']'
00:17:31.719   04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 240449
00:17:31.719   04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 240449 ']'
00:17:31.719   04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 240449
00:17:31.719    04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname
00:17:31.719   04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:31.719    04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 240449
00:17:31.719   04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:31.719   04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:31.719   04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 240449'
00:17:31.719  killing process with pid 240449
00:17:31.719   04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 240449
00:17:31.719   04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 240449
00:17:31.977   04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:17:31.977   04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:17:31.977   04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:17:31.977   04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr
00:17:31.977   04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save
00:17:31.977   04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:17:31.977   04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore
00:17:31.977   04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:17:31.977   04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:17:31.977   04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:31.977   04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:31.977    04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:34.523   04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:17:34.523   04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.PM3 /tmp/spdk.key-sha256.tVR /tmp/spdk.key-sha384.r4l /tmp/spdk.key-sha512.y9n /tmp/spdk.key-sha512.RCN /tmp/spdk.key-sha384.1NG /tmp/spdk.key-sha256.Fwb '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log
00:17:34.523  
00:17:34.523  real	3m33.435s
00:17:34.523  user	8m19.309s
00:17:34.523  sys	0m28.198s
00:17:34.523   04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:34.523   04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:34.523  ************************************
00:17:34.523  END TEST nvmf_auth_target
00:17:34.523  ************************************
00:17:34.523   04:08:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']'
00:17:34.523   04:08:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
00:17:34.523   04:08:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:17:34.523   04:08:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:34.523   04:08:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:17:34.523  ************************************
00:17:34.523  START TEST nvmf_bdevio_no_huge
00:17:34.523  ************************************
00:17:34.523   04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
00:17:34.523  * Looking for test storage...
00:17:34.523  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:17:34.523    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:17:34.523     04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version
00:17:34.523     04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:17:34.523    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:17:34.523    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:34.523    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:34.523    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:34.523    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-:
00:17:34.523    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1
00:17:34.523    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-:
00:17:34.523    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2
00:17:34.523    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<'
00:17:34.523    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2
00:17:34.523    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1
00:17:34.523    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:34.523    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in
00:17:34.523    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1
00:17:34.523    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:34.523    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:34.523     04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1
00:17:34.523     04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1
00:17:34.523     04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:34.523     04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1
00:17:34.523    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1
00:17:34.523     04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2
00:17:34.523     04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2
00:17:34.523     04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:34.523     04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2
00:17:34.523    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2
00:17:34.523    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:34.523    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:34.523    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0
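The trace above is scripts/common.sh's `cmp_versions` splitting each version string on `.-:` and comparing the fields numerically, concluding that the installed lcov 1.15 is older than 2. A standalone sketch of the same comparison logic (the function name `ver_lt` is illustrative, not SPDK's actual helper):

```shell
#!/usr/bin/env bash
# Sketch of dotted-version comparison in the spirit of the cmp_versions
# trace above. ver_lt is a hypothetical name, not part of SPDK's scripts.
ver_lt() {
    local IFS=.-:              # split on the same separators the trace uses
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    local i a b
    for (( i = 0; i < n; i++ )); do
        a=${v1[i]:-0} b=${v2[i]:-0}    # pad the shorter version with zeros
        if (( a > b )); then return 1; fi
        if (( a < b )); then return 0; fi
    done
    return 1                           # equal -> not less-than
}

if ver_lt 1.15 2; then echo "1.15 < 2"; fi
```

The `:-0` default mirrors how the trace handles unequal field counts (`ver1_l=2`, `ver2_l=1`), so `1.15` vs `2` is decided by the first field alone.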
00:17:34.523    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:34.523    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:17:34.523  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:34.523  		--rc genhtml_branch_coverage=1
00:17:34.524  		--rc genhtml_function_coverage=1
00:17:34.524  		--rc genhtml_legend=1
00:17:34.524  		--rc geninfo_all_blocks=1
00:17:34.524  		--rc geninfo_unexecuted_blocks=1
00:17:34.524  		
00:17:34.524  		'
00:17:34.524    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:17:34.524  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:34.524  		--rc genhtml_branch_coverage=1
00:17:34.524  		--rc genhtml_function_coverage=1
00:17:34.524  		--rc genhtml_legend=1
00:17:34.524  		--rc geninfo_all_blocks=1
00:17:34.524  		--rc geninfo_unexecuted_blocks=1
00:17:34.524  		
00:17:34.524  		'
00:17:34.524    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:17:34.524  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:34.524  		--rc genhtml_branch_coverage=1
00:17:34.524  		--rc genhtml_function_coverage=1
00:17:34.524  		--rc genhtml_legend=1
00:17:34.524  		--rc geninfo_all_blocks=1
00:17:34.524  		--rc geninfo_unexecuted_blocks=1
00:17:34.524  		
00:17:34.524  		'
00:17:34.524    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:17:34.524  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:34.524  		--rc genhtml_branch_coverage=1
00:17:34.524  		--rc genhtml_function_coverage=1
00:17:34.524  		--rc genhtml_legend=1
00:17:34.524  		--rc geninfo_all_blocks=1
00:17:34.524  		--rc geninfo_unexecuted_blocks=1
00:17:34.524  		
00:17:34.524  		'
00:17:34.524   04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:17:34.524     04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s
00:17:34.524    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:17:34.524    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:17:34.524    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:17:34.524    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:17:34.524    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:17:34.524    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:17:34.524    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:17:34.524    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:17:34.524    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:17:34.524     04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:17:34.524    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:17:34.524    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:17:34.524    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:17:34.524    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:17:34.524    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:17:34.524    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:17:34.524    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:17:34.524     04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob
00:17:34.524     04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:17:34.524     04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:17:34.524     04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:17:34.524      04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:34.524      04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:34.524      04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:34.524      04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH
00:17:34.524      04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:34.524    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0
00:17:34.524    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:17:34.524    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:17:34.524    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:17:34.524    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:17:34.524    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:17:34.524    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:17:34.524  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:17:34.524    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:17:34.524    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:17:34.524    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0
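The `[: : integer expression expected` message captured above is a genuine shell error from nvmf/common.sh line 33: the test expands to `'[' '' -eq 1 ']'`, and `-eq` requires integer operands, so an empty variable makes `[` fail with status 2. The run continues because the failing test simply takes the false branch. A small sketch of the failure mode and two defensive patterns (the variable name `flag` is illustrative, not the one in nvmf/common.sh):

```shell
#!/usr/bin/env bash
# Reproduce and then avoid the "[: : integer expression expected" error
# seen in the log. 'flag' is an illustrative name.
flag=""

# The failing shape: -eq needs an integer, "" is not one, [ exits with 2.
if [ "$flag" -eq 1 ] 2>/dev/null; then
    echo "flag set"
fi   # branch skipped because the test itself errored

# Defensive variants: default the value, or compare as a string.
if [ "${flag:-0}" -eq 1 ]; then echo "flag set"; else echo "flag unset"; fi
if [[ "$flag" == 1 ]]; then echo "flag set"; else echo "flag unset"; fi
```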
00:17:34.524   04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:17:34.524   04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:17:34.524   04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit
00:17:34.524   04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:17:34.524   04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:17:34.524   04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs
00:17:34.524   04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no
00:17:34.524   04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns
00:17:34.524   04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:34.524   04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:34.524    04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:34.524   04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:17:34.524   04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:17:34.524   04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable
00:17:34.524   04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:17:36.422   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:17:36.422   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=()
00:17:36.422   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs
00:17:36.422   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=()
00:17:36.422   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:17:36.422   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=()
00:17:36.422   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers
00:17:36.422   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=()
00:17:36.422   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs
00:17:36.422   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=()
00:17:36.422   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810
00:17:36.422   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=()
00:17:36.422   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722
00:17:36.422   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=()
00:17:36.422   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx
00:17:36.422   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:17:36.422   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:17:36.422   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:17:36.422   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:17:36.422   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:17:36.422   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:17:36.422   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:17:36.422   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:17:36.422   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:17:36.422   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:17:36.422   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:17:36.422   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:17:36.422   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:17:36.422   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:17:36.422   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:17:36.422   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:17:36.422   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:17:36.422   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:17:36.423  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:17:36.423  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]]
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:17:36.423  Found net devices under 0000:0a:00.0: cvl_0_0
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]]
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:17:36.423  Found net devices under 0000:0a:00.1: cvl_0_1
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:17:36.423   04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:17:36.681   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:17:36.681   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:17:36.681   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:17:36.681   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:17:36.681  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:36.681  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms
00:17:36.681  
00:17:36.681  --- 10.0.0.2 ping statistics ---
00:17:36.681  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:36.681  rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms
00:17:36.681   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:17:36.681  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:36.681  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms
00:17:36.681  
00:17:36.681  --- 10.0.0.1 ping statistics ---
00:17:36.681  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:36.681  rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms
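Lines @267 through @291 of nvmf/common.sh above carry out the TCP test-bed setup: flush addresses, create a network namespace, move the target NIC into it, assign 10.0.0.1/10.0.0.2, bring the links up, open the firewall port, and ping in both directions. A dry-run sketch of that sequence (it only echoes the commands, since the real ones need root and the `cvl_0_*` interfaces present on the log's test machine; names and IPs are copied from the trace):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup traced above. Echoes each command
# instead of executing it; a live run needs root privileges.
run() { echo "ip $*"; }    # swap for 'ip "$@"' on a real setup

target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk

run -4 addr flush "$target_if"
run -4 addr flush "$initiator_if"
run netns add "$ns"
run link set "$target_if" netns "$ns"            # target NIC into the ns
run addr add 10.0.0.1/24 dev "$initiator_if"     # initiator side (host)
run netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
run link set "$initiator_if" up
run netns exec "$ns" ip link set "$target_if" up
run netns exec "$ns" ip link set lo up
echo "iptables -I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT"
echo "ping -c 1 10.0.0.2"                        # host -> namespace
echo "ip netns exec $ns ping -c 1 10.0.0.1"      # namespace -> host
```

Isolating the target NIC in its own namespace is what lets a single host act as both NVMe-oF target (10.0.0.2, inside the namespace) and initiator (10.0.0.1, on the host side) over a real physical link.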
00:17:36.681   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:36.681   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0
00:17:36.681   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:17:36.681   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:36.681   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:17:36.681   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:17:36.681   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:36.681   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:17:36.681   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:17:36.681   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:17:36.681   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:17:36.681   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable
00:17:36.681   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:17:36.681   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=245695
00:17:36.681   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
00:17:36.681   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 245695
00:17:36.681   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 245695 ']'
00:17:36.681   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:36.681   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:36.681   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:36.681  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:36.681   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:36.681   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:17:36.681  [2024-12-09 04:08:05.185001] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:17:36.681  [2024-12-09 04:08:05.185097] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ]
00:17:36.939  [2024-12-09 04:08:05.265362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:17:36.939  [2024-12-09 04:08:05.320102] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:36.939  [2024-12-09 04:08:05.320169] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:36.939  [2024-12-09 04:08:05.320192] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:17:36.939  [2024-12-09 04:08:05.320202] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:17:36.939  [2024-12-09 04:08:05.320213] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:17:36.939  [2024-12-09 04:08:05.321186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:17:36.939  [2024-12-09 04:08:05.321248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:17:36.939  [2024-12-09 04:08:05.321315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:17:36.939  [2024-12-09 04:08:05.321321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:17:36.939   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:36.939   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0
00:17:36.939   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:17:36.939   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:36.939   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:17:36.939   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:36.939   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:17:36.939   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.939   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:17:36.939  [2024-12-09 04:08:05.473239] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:36.939   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.939   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:17:36.939   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.939   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:17:36.939  Malloc0
00:17:36.939   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.939   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:17:36.939   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.939   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:17:36.939   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.939   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:17:36.939   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.939   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:17:36.939   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.939   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:36.939   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.939   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:17:36.939  [2024-12-09 04:08:05.511682] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:37.196   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:37.196   04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024
00:17:37.196    04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json
00:17:37.196    04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=()
00:17:37.196    04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config
00:17:37.196    04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:17:37.196    04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:17:37.196  {
00:17:37.196    "params": {
00:17:37.196      "name": "Nvme$subsystem",
00:17:37.196      "trtype": "$TEST_TRANSPORT",
00:17:37.196      "traddr": "$NVMF_FIRST_TARGET_IP",
00:17:37.196      "adrfam": "ipv4",
00:17:37.196      "trsvcid": "$NVMF_PORT",
00:17:37.196      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:17:37.196      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:17:37.196      "hdgst": ${hdgst:-false},
00:17:37.196      "ddgst": ${ddgst:-false}
00:17:37.196    },
00:17:37.196    "method": "bdev_nvme_attach_controller"
00:17:37.196  }
00:17:37.196  EOF
00:17:37.196  )")
00:17:37.196     04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat
00:17:37.196    04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq .
00:17:37.196     04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=,
00:17:37.197     04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:17:37.197    "params": {
00:17:37.197      "name": "Nvme1",
00:17:37.197      "trtype": "tcp",
00:17:37.197      "traddr": "10.0.0.2",
00:17:37.197      "adrfam": "ipv4",
00:17:37.197      "trsvcid": "4420",
00:17:37.197      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:17:37.197      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:17:37.197      "hdgst": false,
00:17:37.197      "ddgst": false
00:17:37.197    },
00:17:37.197    "method": "bdev_nvme_attach_controller"
00:17:37.197  }'
00:17:37.197  [2024-12-09 04:08:05.560847] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:17:37.197  [2024-12-09 04:08:05.560923] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid245803 ]
00:17:37.197  [2024-12-09 04:08:05.632521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:17:37.197  [2024-12-09 04:08:05.698305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:17:37.197  [2024-12-09 04:08:05.698332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:17:37.197  [2024-12-09 04:08:05.698336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:37.761  I/O targets:
00:17:37.761    Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:17:37.761  
00:17:37.761  
00:17:37.761       CUnit - A unit testing framework for C - Version 2.1-3
00:17:37.761       http://cunit.sourceforge.net/
00:17:37.761  
00:17:37.761  
00:17:37.761  Suite: bdevio tests on: Nvme1n1
00:17:37.761    Test: blockdev write read block ...passed
00:17:37.761    Test: blockdev write zeroes read block ...passed
00:17:37.761    Test: blockdev write zeroes read no split ...passed
00:17:37.761    Test: blockdev write zeroes read split ...passed
00:17:37.761    Test: blockdev write zeroes read split partial ...passed
00:17:37.761    Test: blockdev reset ...[2024-12-09 04:08:06.175364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:17:37.761  [2024-12-09 04:08:06.175480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c62b0 (9): Bad file descriptor
00:17:37.761  [2024-12-09 04:08:06.195896] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful.
00:17:37.761  passed
00:17:37.761    Test: blockdev write read 8 blocks ...passed
00:17:37.761    Test: blockdev write read size > 128k ...passed
00:17:37.761    Test: blockdev write read invalid size ...passed
00:17:37.761    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:17:37.761    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:17:37.761    Test: blockdev write read max offset ...passed
00:17:38.019    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:17:38.019    Test: blockdev writev readv 8 blocks ...passed
00:17:38.019    Test: blockdev writev readv 30 x 1block ...passed
00:17:38.019    Test: blockdev writev readv block ...passed
00:17:38.019    Test: blockdev writev readv size > 128k ...passed
00:17:38.019    Test: blockdev writev readv size > 128k in two iovs ...passed
00:17:38.019    Test: blockdev comparev and writev ...[2024-12-09 04:08:06.409404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:17:38.019  [2024-12-09 04:08:06.409441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:17:38.019  [2024-12-09 04:08:06.409465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:17:38.019  [2024-12-09 04:08:06.409483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:17:38.019  [2024-12-09 04:08:06.409813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:17:38.019  [2024-12-09 04:08:06.409837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:17:38.019  [2024-12-09 04:08:06.409860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:17:38.019  [2024-12-09 04:08:06.409876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:17:38.019  [2024-12-09 04:08:06.410201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:17:38.019  [2024-12-09 04:08:06.410226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:17:38.019  [2024-12-09 04:08:06.410248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:17:38.019  [2024-12-09 04:08:06.410264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:17:38.019  [2024-12-09 04:08:06.410606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:17:38.019  [2024-12-09 04:08:06.410631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:17:38.019  [2024-12-09 04:08:06.410652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:17:38.019  [2024-12-09 04:08:06.410668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:17:38.019  passed
00:17:38.019    Test: blockdev nvme passthru rw ...passed
00:17:38.019    Test: blockdev nvme passthru vendor specific ...[2024-12-09 04:08:06.493502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:17:38.020  [2024-12-09 04:08:06.493531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:17:38.020  [2024-12-09 04:08:06.493670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:17:38.020  [2024-12-09 04:08:06.493693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:17:38.020  [2024-12-09 04:08:06.493820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:17:38.020  [2024-12-09 04:08:06.493844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:17:38.020  [2024-12-09 04:08:06.493982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:17:38.020  [2024-12-09 04:08:06.494006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:17:38.020  passed
00:17:38.020    Test: blockdev nvme admin passthru ...passed
00:17:38.020    Test: blockdev copy ...passed
00:17:38.020  
00:17:38.020  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:17:38.020                suites      1      1    n/a      0        0
00:17:38.020                 tests     23     23     23      0        0
00:17:38.020               asserts    152    152    152      0      n/a
00:17:38.020  
00:17:38.020  Elapsed time =    0.986 seconds
00:17:38.585   04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:38.586   04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:38.586   04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:17:38.586   04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:38.586   04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:17:38.586   04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini
00:17:38.586   04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup
00:17:38.586   04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync
00:17:38.586   04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:17:38.586   04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e
00:17:38.586   04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20}
00:17:38.586   04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:17:38.586  rmmod nvme_tcp
00:17:38.586  rmmod nvme_fabrics
00:17:38.586  rmmod nvme_keyring
00:17:38.586   04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:17:38.586   04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e
00:17:38.586   04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0
00:17:38.586   04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 245695 ']'
00:17:38.586   04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 245695
00:17:38.586   04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 245695 ']'
00:17:38.586   04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 245695
00:17:38.586    04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname
00:17:38.586   04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:38.586    04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 245695
00:17:38.586   04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3
00:17:38.586   04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']'
00:17:38.586   04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 245695'
00:17:38.586  killing process with pid 245695
00:17:38.586   04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 245695
00:17:38.586   04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 245695
00:17:38.844   04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:17:38.844   04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:17:38.844   04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:17:38.844   04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr
00:17:38.844   04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save
00:17:38.844   04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:17:38.844   04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore
00:17:38.844   04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:17:38.844   04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns
00:17:38.844   04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:38.844   04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:38.844    04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:41.378   04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:17:41.378  
00:17:41.378  real	0m6.822s
00:17:41.378  user	0m11.223s
00:17:41.378  sys	0m2.633s
00:17:41.378   04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:41.378   04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:17:41.378  ************************************
00:17:41.378  END TEST nvmf_bdevio_no_huge
00:17:41.378  ************************************
00:17:41.378   04:08:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp
00:17:41.378   04:08:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:17:41.378   04:08:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:41.378   04:08:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:17:41.378  ************************************
00:17:41.378  START TEST nvmf_tls
00:17:41.378  ************************************
00:17:41.378   04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp
00:17:41.378  * Looking for test storage...
00:17:41.378  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:17:41.378    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:17:41.378     04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version
00:17:41.378     04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:17:41.378    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:17:41.378    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:41.378    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:41.378    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:41.378    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-:
00:17:41.378    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1
00:17:41.378    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-:
00:17:41.378    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2
00:17:41.378    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<'
00:17:41.378    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2
00:17:41.378    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1
00:17:41.378    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:41.378    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in
00:17:41.378    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1
00:17:41.378    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:41.378    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:41.378     04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1
00:17:41.378     04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1
00:17:41.378     04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:41.378     04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1
00:17:41.378    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1
00:17:41.378     04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2
00:17:41.378     04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2
00:17:41.378     04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:41.378     04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2
00:17:41.378    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2
00:17:41.378    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:41.378    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:41.378    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0
00:17:41.378    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:41.378    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:17:41.378  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:41.378  		--rc genhtml_branch_coverage=1
00:17:41.378  		--rc genhtml_function_coverage=1
00:17:41.378  		--rc genhtml_legend=1
00:17:41.378  		--rc geninfo_all_blocks=1
00:17:41.378  		--rc geninfo_unexecuted_blocks=1
00:17:41.378  		
00:17:41.378  		'
00:17:41.378    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:17:41.378  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:41.378  		--rc genhtml_branch_coverage=1
00:17:41.378  		--rc genhtml_function_coverage=1
00:17:41.378  		--rc genhtml_legend=1
00:17:41.378  		--rc geninfo_all_blocks=1
00:17:41.378  		--rc geninfo_unexecuted_blocks=1
00:17:41.378  		
00:17:41.378  		'
00:17:41.378    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:17:41.378  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:41.379  		--rc genhtml_branch_coverage=1
00:17:41.379  		--rc genhtml_function_coverage=1
00:17:41.379  		--rc genhtml_legend=1
00:17:41.379  		--rc geninfo_all_blocks=1
00:17:41.379  		--rc geninfo_unexecuted_blocks=1
00:17:41.379  		
00:17:41.379  		'
00:17:41.379    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:17:41.379  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:41.379  		--rc genhtml_branch_coverage=1
00:17:41.379  		--rc genhtml_function_coverage=1
00:17:41.379  		--rc genhtml_legend=1
00:17:41.379  		--rc geninfo_all_blocks=1
00:17:41.379  		--rc geninfo_unexecuted_blocks=1
00:17:41.379  		
00:17:41.379  		'
00:17:41.379   04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:17:41.379     04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s
00:17:41.379    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:17:41.379    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:17:41.379    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:17:41.379    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:17:41.379    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:17:41.379    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:17:41.379    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:17:41.379    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:17:41.379    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:17:41.379     04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:17:41.379    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:17:41.379    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:17:41.379    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:17:41.379    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:17:41.379    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:17:41.379    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:17:41.379    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:17:41.379     04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob
00:17:41.379     04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:17:41.379     04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:17:41.379     04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:17:41.379      04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:41.379      04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:41.379      04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:41.379      04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH
00:17:41.379      04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:41.379    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0
00:17:41.379    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:17:41.379    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:17:41.379    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:17:41.379    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:17:41.379    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:17:41.379    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:17:41.379  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:17:41.379    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:17:41.379    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:17:41.379    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0
00:17:41.379   04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:17:41.379   04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit
00:17:41.379   04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:17:41.379   04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:17:41.379   04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs
00:17:41.379   04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no
00:17:41.379   04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns
00:17:41.379   04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:41.379   04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:41.379    04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:41.379   04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:17:41.379   04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:17:41.379   04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable
00:17:41.379   04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:17:43.280   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=()
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=()
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=()
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=()
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=()
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=()
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=()
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:17:43.281  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:17:43.281  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]]
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:17:43.281  Found net devices under 0000:0a:00.0: cvl_0_0
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]]
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:17:43.281  Found net devices under 0000:0a:00.1: cvl_0_1
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:17:43.281   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:17:43.539   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:17:43.539   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:17:43.539   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:17:43.539   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:17:43.539  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:43.539  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms
00:17:43.539  
00:17:43.539  --- 10.0.0.2 ping statistics ---
00:17:43.539  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:43.539  rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms
00:17:43.539   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:17:43.539  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:43.539  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms
00:17:43.539  
00:17:43.539  --- 10.0.0.1 ping statistics ---
00:17:43.539  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:43.539  rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms
00:17:43.539   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:43.539   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0
00:17:43.539   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:17:43.539   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:43.539   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:17:43.539   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:17:43.539   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:43.539   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:17:43.539   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:17:43.539   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc
00:17:43.539   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:17:43.539   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:17:43.539   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:17:43.539   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=247923
00:17:43.539   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc
00:17:43.539   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 247923
00:17:43.539   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 247923 ']'
00:17:43.539   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:43.539   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:43.539   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:43.539  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:43.539   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:43.539   04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:17:43.539  [2024-12-09 04:08:11.962404] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:17:43.539  [2024-12-09 04:08:11.962499] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:43.539  [2024-12-09 04:08:12.034790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:43.539  [2024-12-09 04:08:12.088108] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:43.539  [2024-12-09 04:08:12.088167] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:43.539  [2024-12-09 04:08:12.088189] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:17:43.539  [2024-12-09 04:08:12.088200] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:17:43.540  [2024-12-09 04:08:12.088209] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:17:43.540  [2024-12-09 04:08:12.088829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:17:43.797   04:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:43.797   04:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:17:43.797   04:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:17:43.797   04:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:43.797   04:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:17:43.797   04:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:43.797   04:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']'
00:17:43.797   04:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl
00:17:44.055  true
00:17:44.055    04:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:17:44.055    04:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version
00:17:44.312   04:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0
00:17:44.312   04:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]]
00:17:44.312   04:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
00:17:44.570    04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:17:44.570    04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version
00:17:44.827   04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13
00:17:44.827   04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]]
00:17:44.827   04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7
00:17:45.084    04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:17:45.084    04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version
00:17:45.341   04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7
00:17:45.341   04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]]
00:17:45.341    04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:17:45.341    04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls
00:17:45.599   04:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false
00:17:45.599   04:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]]
00:17:45.599   04:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls
00:17:45.856    04:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:17:45.856    04:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls
00:17:46.113   04:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true
00:17:46.113   04:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]]
00:17:46.113   04:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls
00:17:46.371    04:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:17:46.371    04:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls
00:17:46.936   04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false
00:17:46.936   04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]]
00:17:46.936    04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1
00:17:46.936    04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1
00:17:46.937    04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest
00:17:46.937    04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1
00:17:46.937    04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff
00:17:46.937    04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1
00:17:46.937    04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python -
00:17:46.937   04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
00:17:46.937    04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1
00:17:46.937    04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1
00:17:46.937    04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest
00:17:46.937    04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1
00:17:46.937    04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100
00:17:46.937    04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1
00:17:46.937    04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python -
00:17:46.937   04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:
00:17:46.937    04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp
00:17:46.937   04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.ApVRvtgogO
00:17:46.937    04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp
00:17:46.937   04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.lhfP9KGslT
00:17:46.937   04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
00:17:46.937   04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:
00:17:46.937   04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.ApVRvtgogO
00:17:46.937   04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.lhfP9KGslT
00:17:46.937   04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
00:17:47.194   04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init
00:17:47.451   04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.ApVRvtgogO
00:17:47.451   04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ApVRvtgogO
00:17:47.451   04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:17:47.708  [2024-12-09 04:08:16.248562] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:47.708   04:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:17:48.272   04:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:17:48.272  [2024-12-09 04:08:16.846158] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:17:48.272  [2024-12-09 04:08:16.846492] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:48.529   04:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:17:48.787  malloc0
00:17:48.787   04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:17:49.044   04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ApVRvtgogO
00:17:49.300   04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
00:17:49.558   04:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.ApVRvtgogO
00:18:01.743  Initializing NVMe Controllers
00:18:01.743  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:18:01.743  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:18:01.743  Initialization complete. Launching workers.
00:18:01.743  ========================================================
00:18:01.743                                                                                                               Latency(us)
00:18:01.743  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:18:01.743  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:    8719.79      34.06    7341.62    1130.13    8902.20
00:18:01.743  ========================================================
00:18:01.743  Total                                                                    :    8719.79      34.06    7341.62    1130.13    8902.20
00:18:01.743  
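As a sanity check on the spdk_nvme_perf summary above, the MiB/s column follows directly from the IOPS column at the fixed 4096-byte I/O size passed via `-o 4096`:

```python
def iops_to_mibps(iops: float, io_size_bytes: int = 4096) -> float:
    # MiB/s = IOPS * bytes-per-IO / 2^20; with 4096-byte I/Os this
    # reduces to IOPS / 256.
    return iops * io_size_bytes / (1 << 20)

print(round(iops_to_mibps(8719.79), 2))  # matches the 34.06 MiB/s column
```

The same relation holds for the bdevperf TLSTESTn1 table further down (3409.50 IOPS at 4096 bytes gives 13.32 MiB/s).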
00:18:01.743   04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ApVRvtgogO
00:18:01.743   04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:18:01.743   04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:18:01.743   04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:18:01.743   04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ApVRvtgogO
00:18:01.743   04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:18:01.743   04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=249828
00:18:01.743   04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:18:01.743   04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 249828 /var/tmp/bdevperf.sock
00:18:01.743   04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:18:01.743   04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 249828 ']'
00:18:01.743   04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:01.743   04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:01.743   04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:01.743  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:01.743   04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:01.743   04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:01.743  [2024-12-09 04:08:28.183337] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:18:01.743  [2024-12-09 04:08:28.183424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid249828 ]
00:18:01.743  [2024-12-09 04:08:28.256787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:01.743  [2024-12-09 04:08:28.317932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:18:01.743   04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:01.743   04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:18:01.743   04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ApVRvtgogO
00:18:01.743   04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
00:18:01.743  [2024-12-09 04:08:28.979424] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:18:01.743  TLSTESTn1
00:18:01.743   04:08:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:18:01.743  Running I/O for 10 seconds...
00:18:02.676       3112.00 IOPS,    12.16 MiB/s
[2024-12-09T03:08:32.187Z]      3296.50 IOPS,    12.88 MiB/s
[2024-12-09T03:08:33.560Z]      3389.33 IOPS,    13.24 MiB/s
[2024-12-09T03:08:34.494Z]      3397.25 IOPS,    13.27 MiB/s
[2024-12-09T03:08:35.432Z]      3394.40 IOPS,    13.26 MiB/s
[2024-12-09T03:08:36.375Z]      3390.67 IOPS,    13.24 MiB/s
[2024-12-09T03:08:37.306Z]      3390.57 IOPS,    13.24 MiB/s
[2024-12-09T03:08:38.237Z]      3414.00 IOPS,    13.34 MiB/s
[2024-12-09T03:08:39.608Z]      3404.11 IOPS,    13.30 MiB/s
[2024-12-09T03:08:39.608Z]      3403.10 IOPS,    13.29 MiB/s
00:18:11.032                                                                                                  Latency(us)
00:18:11.032  
[2024-12-09T03:08:39.608Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:18:11.032  Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:11.032  	 Verification LBA range: start 0x0 length 0x2000
00:18:11.032  	 TLSTESTn1           :      10.02    3409.50      13.32       0.00     0.00   37481.12    6068.15   36117.62
00:18:11.032  
[2024-12-09T03:08:39.608Z]  ===================================================================================================================
00:18:11.032  
[2024-12-09T03:08:39.608Z]  Total                       :               3409.50      13.32       0.00     0.00   37481.12    6068.15   36117.62
00:18:11.032  {
00:18:11.032    "results": [
00:18:11.032      {
00:18:11.032        "job": "TLSTESTn1",
00:18:11.032        "core_mask": "0x4",
00:18:11.032        "workload": "verify",
00:18:11.032        "status": "finished",
00:18:11.032        "verify_range": {
00:18:11.032          "start": 0,
00:18:11.032          "length": 8192
00:18:11.032        },
00:18:11.032        "queue_depth": 128,
00:18:11.032        "io_size": 4096,
00:18:11.032        "runtime": 10.018171,
00:18:11.032        "iops": 3409.5045892109447,
00:18:11.032        "mibps": 13.318377301605253,
00:18:11.032        "io_failed": 0,
00:18:11.032        "io_timeout": 0,
00:18:11.032        "avg_latency_us": 37481.11611675499,
00:18:11.032        "min_latency_us": 6068.148148148148,
00:18:11.032        "max_latency_us": 36117.61777777778
00:18:11.032      }
00:18:11.032    ],
00:18:11.032    "core_count": 1
00:18:11.032  }
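As an aside, the `perform_tests` result blob above can be summarized directly from its JSON fields. A minimal sketch, with the blob inlined from the log (in practice it would be captured from bdevperf.py's stdout):

```shell
# Summarize a bdevperf perform_tests result blob using python3.
# The JSON here is copied from the log output above.
python3 - <<'EOF'
import json
blob = '''{"results": [{"job": "TLSTESTn1", "iops": 3409.5045892109447,
            "mibps": 13.318377301605253, "io_failed": 0}], "core_count": 1}'''
for r in json.loads(blob)["results"]:
    # Round IOPS to whole ops and throughput to two decimals, as the table does.
    print(f'{r["job"]}: {r["iops"]:.0f} IOPS, {r["mibps"]:.2f} MiB/s, failed={r["io_failed"]}')
EOF
```

This prints one line per job, matching the figures in the summary table above (3409.50 IOPS, 13.32 MiB/s).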
00:18:11.032   04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:18:11.032   04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 249828
00:18:11.033   04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 249828 ']'
00:18:11.033   04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 249828
00:18:11.033    04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:18:11.033   04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:11.033    04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 249828
00:18:11.033   04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:18:11.033   04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:18:11.033   04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 249828'
00:18:11.033  killing process with pid 249828
00:18:11.033   04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 249828
00:18:11.033  Received shutdown signal, test time was about 10.000000 seconds
00:18:11.033  
00:18:11.033                                                                                                  Latency(us)
00:18:11.033  
[2024-12-09T03:08:39.609Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:18:11.033  
[2024-12-09T03:08:39.609Z]  ===================================================================================================================
00:18:11.033  
[2024-12-09T03:08:39.609Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:18:11.033   04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 249828
00:18:11.033   04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lhfP9KGslT
00:18:11.033   04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0
00:18:11.033   04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lhfP9KGslT
00:18:11.033   04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf
00:18:11.033   04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:11.033    04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf
00:18:11.033   04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:11.033   04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lhfP9KGslT
00:18:11.033   04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:18:11.033   04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:18:11.033   04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:18:11.033   04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.lhfP9KGslT
00:18:11.033   04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:18:11.033   04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=251146
00:18:11.033   04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:18:11.033   04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:18:11.033   04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 251146 /var/tmp/bdevperf.sock
00:18:11.033   04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 251146 ']'
00:18:11.033   04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:11.033   04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:11.033   04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:11.033  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:11.033   04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:11.033   04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:11.033  [2024-12-09 04:08:39.549155] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:18:11.033  [2024-12-09 04:08:39.549243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid251146 ]
00:18:11.291  [2024-12-09 04:08:39.620804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:11.291  [2024-12-09 04:08:39.678406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:18:11.291   04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:11.291   04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:18:11.291   04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lhfP9KGslT
00:18:11.549   04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
00:18:11.806  [2024-12-09 04:08:40.334170] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:18:11.806  [2024-12-09 04:08:40.341284] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:18:11.806  [2024-12-09 04:08:40.341581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1033f30 (107): Transport endpoint is not connected
00:18:11.806  [2024-12-09 04:08:40.342570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1033f30 (9): Bad file descriptor
00:18:11.806  [2024-12-09 04:08:40.343569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state
00:18:11.806  [2024-12-09 04:08:40.343604] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:18:11.806  [2024-12-09 04:08:40.343626] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted
00:18:11.806  [2024-12-09 04:08:40.343643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state.
00:18:11.806  request:
00:18:11.806  {
00:18:11.806    "name": "TLSTEST",
00:18:11.806    "trtype": "tcp",
00:18:11.806    "traddr": "10.0.0.2",
00:18:11.806    "adrfam": "ipv4",
00:18:11.806    "trsvcid": "4420",
00:18:11.806    "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:18:11.806    "hostnqn": "nqn.2016-06.io.spdk:host1",
00:18:11.806    "prchk_reftag": false,
00:18:11.806    "prchk_guard": false,
00:18:11.806    "hdgst": false,
00:18:11.806    "ddgst": false,
00:18:11.806    "psk": "key0",
00:18:11.806    "allow_unrecognized_csi": false,
00:18:11.806    "method": "bdev_nvme_attach_controller",
00:18:11.806    "req_id": 1
00:18:11.806  }
00:18:11.806  Got JSON-RPC error response
00:18:11.806  response:
00:18:11.806  {
00:18:11.806    "code": -5,
00:18:11.806    "message": "Input/output error"
00:18:11.806  }
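The negative test above expects `bdev_nvme_attach_controller` to fail with JSON-RPC code -5 (Input/output error) when the target cannot complete the TLS handshake. A minimal sketch of checking for that code (`is_io_error` is a hypothetical helper, not part of the SPDK tree):

```shell
# Classify a JSON-RPC error response like the one above by its "code" field.
# Exit status 0 means the response carries code -5 (Input/output error).
is_io_error() {
    python3 -c 'import json,sys; sys.exit(0 if json.load(sys.stdin).get("code") == -5 else 1)'
}

printf '{"code": -5, "message": "Input/output error"}' | is_io_error \
    && echo "attach failed as expected"
```

This mirrors what the `NOT`/`run_bdevperf` wrapper asserts implicitly: rpc.py exits nonzero on an error response, so the expected-failure path returns success.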
00:18:11.806   04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 251146
00:18:11.806   04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 251146 ']'
00:18:11.806   04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 251146
00:18:11.806    04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:18:11.806   04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:11.806    04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 251146
00:18:12.064   04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:18:12.064   04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:18:12.064   04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 251146'
00:18:12.064  killing process with pid 251146
00:18:12.064   04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 251146
00:18:12.064  Received shutdown signal, test time was about 10.000000 seconds
00:18:12.064  
00:18:12.064                                                                                                  Latency(us)
00:18:12.064  
[2024-12-09T03:08:40.640Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:18:12.064  
[2024-12-09T03:08:40.640Z]  ===================================================================================================================
00:18:12.064  
[2024-12-09T03:08:40.640Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:18:12.064   04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 251146
00:18:12.064   04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1
00:18:12.064   04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1
00:18:12.064   04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:12.064   04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:12.064   04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:12.064   04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ApVRvtgogO
00:18:12.064   04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0
00:18:12.064   04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ApVRvtgogO
00:18:12.064   04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf
00:18:12.064   04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:12.064    04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf
00:18:12.064   04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:12.064   04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ApVRvtgogO
00:18:12.064   04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:18:12.064   04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:18:12.064   04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2
00:18:12.064   04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ApVRvtgogO
00:18:12.064   04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:18:12.064   04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=251288
00:18:12.064   04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:18:12.064   04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:18:12.064   04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 251288 /var/tmp/bdevperf.sock
00:18:12.064   04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 251288 ']'
00:18:12.064   04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:12.064   04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:12.064   04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:12.065  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:12.065   04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:12.065   04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:12.322  [2024-12-09 04:08:40.671670] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:18:12.322  [2024-12-09 04:08:40.671756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid251288 ]
00:18:12.322  [2024-12-09 04:08:40.744061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:12.322  [2024-12-09 04:08:40.804526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:18:12.580   04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:12.580   04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:18:12.580   04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ApVRvtgogO
00:18:12.837   04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0
00:18:13.096  [2024-12-09 04:08:41.435210] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:18:13.096  [2024-12-09 04:08:41.444070] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
00:18:13.096  [2024-12-09 04:08:41.444104] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
00:18:13.096  [2024-12-09 04:08:41.444141] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:18:13.096  [2024-12-09 04:08:41.444438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc4ff30 (107): Transport endpoint is not connected
00:18:13.096  [2024-12-09 04:08:41.445427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc4ff30 (9): Bad file descriptor
00:18:13.096  [2024-12-09 04:08:41.446426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state
00:18:13.096  [2024-12-09 04:08:41.446447] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:18:13.096  [2024-12-09 04:08:41.446461] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted
00:18:13.096  [2024-12-09 04:08:41.446478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state.
00:18:13.096  request:
00:18:13.096  {
00:18:13.096    "name": "TLSTEST",
00:18:13.096    "trtype": "tcp",
00:18:13.096    "traddr": "10.0.0.2",
00:18:13.096    "adrfam": "ipv4",
00:18:13.096    "trsvcid": "4420",
00:18:13.096    "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:18:13.096    "hostnqn": "nqn.2016-06.io.spdk:host2",
00:18:13.096    "prchk_reftag": false,
00:18:13.096    "prchk_guard": false,
00:18:13.096    "hdgst": false,
00:18:13.096    "ddgst": false,
00:18:13.096    "psk": "key0",
00:18:13.096    "allow_unrecognized_csi": false,
00:18:13.096    "method": "bdev_nvme_attach_controller",
00:18:13.096    "req_id": 1
00:18:13.096  }
00:18:13.096  Got JSON-RPC error response
00:18:13.096  response:
00:18:13.096  {
00:18:13.096    "code": -5,
00:18:13.096    "message": "Input/output error"
00:18:13.096  }
00:18:13.096   04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 251288
00:18:13.096   04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 251288 ']'
00:18:13.096   04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 251288
00:18:13.096    04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:18:13.096   04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:13.096    04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 251288
00:18:13.096   04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:18:13.096   04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:18:13.096   04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 251288'
00:18:13.096  killing process with pid 251288
00:18:13.096   04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 251288
00:18:13.096  Received shutdown signal, test time was about 10.000000 seconds
00:18:13.096  
00:18:13.096                                                                                                  Latency(us)
00:18:13.096  
[2024-12-09T03:08:41.672Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:18:13.096  
[2024-12-09T03:08:41.672Z]  ===================================================================================================================
00:18:13.096  
[2024-12-09T03:08:41.672Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:18:13.096   04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 251288
00:18:13.355   04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1
00:18:13.355   04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1
00:18:13.355   04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:13.355   04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:13.355   04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:13.356   04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ApVRvtgogO
00:18:13.356   04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0
00:18:13.356   04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ApVRvtgogO
00:18:13.356   04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf
00:18:13.356   04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:13.356    04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf
00:18:13.356   04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:13.356   04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ApVRvtgogO
00:18:13.356   04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:18:13.356   04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2
00:18:13.356   04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:18:13.356   04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ApVRvtgogO
00:18:13.356   04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:18:13.356   04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=251428
00:18:13.356   04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:18:13.356   04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:18:13.356   04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 251428 /var/tmp/bdevperf.sock
00:18:13.356   04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 251428 ']'
00:18:13.356   04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:13.356   04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:13.356   04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:13.356  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:13.356   04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:13.356   04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:13.356  [2024-12-09 04:08:41.778806] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:18:13.356  [2024-12-09 04:08:41.778892] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid251428 ]
00:18:13.356  [2024-12-09 04:08:41.850925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:13.356  [2024-12-09 04:08:41.908513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:18:13.614   04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:13.614   04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:18:13.614   04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ApVRvtgogO
00:18:13.872   04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0
00:18:14.129  [2024-12-09 04:08:42.542699] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:18:14.129  [2024-12-09 04:08:42.548309] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2
00:18:14.130  [2024-12-09 04:08:42.548360] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2
00:18:14.130  [2024-12-09 04:08:42.548412] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:18:14.130  [2024-12-09 04:08:42.548906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a9f30 (107): Transport endpoint is not connected
00:18:14.130  [2024-12-09 04:08:42.549896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a9f30 (9): Bad file descriptor
00:18:14.130  [2024-12-09 04:08:42.550895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state
00:18:14.130  [2024-12-09 04:08:42.550915] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:18:14.130  [2024-12-09 04:08:42.550938] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted
00:18:14.130  [2024-12-09 04:08:42.550956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state.
00:18:14.130  request:
00:18:14.130  {
00:18:14.130    "name": "TLSTEST",
00:18:14.130    "trtype": "tcp",
00:18:14.130    "traddr": "10.0.0.2",
00:18:14.130    "adrfam": "ipv4",
00:18:14.130    "trsvcid": "4420",
00:18:14.130    "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:18:14.130    "hostnqn": "nqn.2016-06.io.spdk:host1",
00:18:14.130    "prchk_reftag": false,
00:18:14.130    "prchk_guard": false,
00:18:14.130    "hdgst": false,
00:18:14.130    "ddgst": false,
00:18:14.130    "psk": "key0",
00:18:14.130    "allow_unrecognized_csi": false,
00:18:14.130    "method": "bdev_nvme_attach_controller",
00:18:14.130    "req_id": 1
00:18:14.130  }
00:18:14.130  Got JSON-RPC error response
00:18:14.130  response:
00:18:14.130  {
00:18:14.130    "code": -5,
00:18:14.130    "message": "Input/output error"
00:18:14.130  }
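The `tcp_sock_get_key` errors in the three failed runs all print the identity string the target searched its keyring for: the literal prefix `NVMe0R01` followed by the host NQN and then the subsystem NQN. A small sketch reconstructing the identity from this run's values (the interpretation of the prefix fields is an assumption; the string itself is copied from the log):

```shell
# Reconstruct the PSK identity the target failed to find in the run above.
# hostnqn/subnqn values are taken verbatim from the log; "NVMe0R01" is the
# prefix printed by tcp.c (assumed to encode protocol version and hash choice).
hostnqn="nqn.2016-06.io.spdk:host1"
subnqn="nqn.2016-06.io.spdk:cnode2"
identity="NVMe0R01 ${hostnqn} ${subnqn}"
echo "$identity"
```

Because the key was registered for host1/cnode1, lookups for host2/cnode1 and host1/cnode2 both miss, which is exactly the pair of `Could not find PSK for identity` failures logged above.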
00:18:14.130   04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 251428
00:18:14.130   04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 251428 ']'
00:18:14.130   04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 251428
00:18:14.130    04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:18:14.130   04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:14.130    04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 251428
00:18:14.130   04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:18:14.130   04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:18:14.130   04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 251428'
00:18:14.130  killing process with pid 251428
00:18:14.130   04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 251428
00:18:14.130  Received shutdown signal, test time was about 10.000000 seconds
00:18:14.130  
00:18:14.130                                                                                                  Latency(us)
00:18:14.130  
[2024-12-09T03:08:42.706Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:18:14.130  
[2024-12-09T03:08:42.706Z]  ===================================================================================================================
00:18:14.130  
[2024-12-09T03:08:42.706Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:18:14.130   04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 251428
00:18:14.388   04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1
00:18:14.388   04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1
00:18:14.388   04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:14.388   04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:14.388   04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:14.388   04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:18:14.388   04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0
00:18:14.388   04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:18:14.388   04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf
00:18:14.388   04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:14.388    04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf
00:18:14.388   04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:14.388   04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:18:14.388   04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:18:14.388   04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:18:14.388   04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:18:14.388   04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=
00:18:14.388   04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:18:14.388   04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=251569
00:18:14.388   04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:18:14.388   04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:18:14.388   04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 251569 /var/tmp/bdevperf.sock
00:18:14.388   04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 251569 ']'
00:18:14.388   04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:14.388   04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:14.388   04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:14.388  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:14.388   04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:14.388   04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:14.388  [2024-12-09 04:08:42.848710] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:18:14.388  [2024-12-09 04:08:42.848798] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid251569 ]
00:18:14.388  [2024-12-09 04:08:42.916343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:14.646  [2024-12-09 04:08:42.976311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:18:14.646   04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:14.646   04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:18:14.646   04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''
00:18:14.903  [2024-12-09 04:08:43.332977] keyring.c:  24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 
00:18:14.903  [2024-12-09 04:08:43.333025] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring
00:18:14.903  request:
00:18:14.903  {
00:18:14.903    "name": "key0",
00:18:14.903    "path": "",
00:18:14.903    "method": "keyring_file_add_key",
00:18:14.903    "req_id": 1
00:18:14.903  }
00:18:14.903  Got JSON-RPC error response
00:18:14.903  response:
00:18:14.903  {
00:18:14.903    "code": -1,
00:18:14.903    "message": "Operation not permitted"
00:18:14.903  }
00:18:14.903   04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
00:18:15.162  [2024-12-09 04:08:43.597804] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:18:15.162  [2024-12-09 04:08:43.597868] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0
00:18:15.162  request:
00:18:15.162  {
00:18:15.162    "name": "TLSTEST",
00:18:15.162    "trtype": "tcp",
00:18:15.162    "traddr": "10.0.0.2",
00:18:15.162    "adrfam": "ipv4",
00:18:15.162    "trsvcid": "4420",
00:18:15.162    "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:18:15.162    "hostnqn": "nqn.2016-06.io.spdk:host1",
00:18:15.162    "prchk_reftag": false,
00:18:15.162    "prchk_guard": false,
00:18:15.162    "hdgst": false,
00:18:15.162    "ddgst": false,
00:18:15.162    "psk": "key0",
00:18:15.162    "allow_unrecognized_csi": false,
00:18:15.162    "method": "bdev_nvme_attach_controller",
00:18:15.162    "req_id": 1
00:18:15.162  }
00:18:15.162  Got JSON-RPC error response
00:18:15.162  response:
00:18:15.162  {
00:18:15.162    "code": -126,
00:18:15.162    "message": "Required key not available"
00:18:15.162  }
00:18:15.162   04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 251569
00:18:15.162   04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 251569 ']'
00:18:15.162   04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 251569
00:18:15.162    04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:18:15.162   04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:15.162    04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 251569
00:18:15.162   04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:18:15.162   04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:18:15.162   04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 251569'
00:18:15.162  killing process with pid 251569
00:18:15.162   04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 251569
00:18:15.162  Received shutdown signal, test time was about 10.000000 seconds
00:18:15.162  
00:18:15.162                                                                                                  Latency(us)
00:18:15.162  
[2024-12-09T03:08:43.738Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:18:15.162  
[2024-12-09T03:08:43.738Z]  ===================================================================================================================
00:18:15.162  
[2024-12-09T03:08:43.738Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:18:15.162   04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 251569
00:18:15.420   04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1
00:18:15.420   04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1
00:18:15.420   04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:15.420   04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:15.420   04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:15.420   04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 247923
00:18:15.420   04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 247923 ']'
00:18:15.420   04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 247923
00:18:15.420    04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:18:15.420   04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:15.420    04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 247923
00:18:15.420   04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:18:15.420   04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:18:15.420   04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 247923'
00:18:15.420  killing process with pid 247923
00:18:15.420   04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 247923
00:18:15.420   04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 247923
00:18:15.678    04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2
00:18:15.678    04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2
00:18:15.678    04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest
00:18:15.678    04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1
00:18:15.678    04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677
00:18:15.678    04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2
00:18:15.678    04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python -
00:18:15.678   04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
00:18:15.678    04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp
00:18:15.678   04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.fbnd8laNn2
00:18:15.678   04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
00:18:15.678   04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.fbnd8laNn2
00:18:15.678   04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2
00:18:15.678   04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:18:15.678   04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:18:15.678   04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:15.678   04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=251842
00:18:15.678   04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:18:15.678   04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 251842
00:18:15.678   04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 251842 ']'
00:18:15.678   04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:15.678   04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:15.678   04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:15.678  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:15.678   04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:15.678   04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:15.678  [2024-12-09 04:08:44.252139] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:18:15.678  [2024-12-09 04:08:44.252244] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:15.935  [2024-12-09 04:08:44.323514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:15.935  [2024-12-09 04:08:44.378716] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:15.935  [2024-12-09 04:08:44.378773] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:15.935  [2024-12-09 04:08:44.378810] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:18:15.935  [2024-12-09 04:08:44.378823] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:18:15.935  [2024-12-09 04:08:44.378832] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:18:15.935  [2024-12-09 04:08:44.379406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:18:15.935   04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:15.935   04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:18:15.936   04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:18:15.936   04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable
00:18:15.936   04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:16.193   04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:16.194   04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.fbnd8laNn2
00:18:16.194   04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.fbnd8laNn2
00:18:16.194   04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:18:16.451  [2024-12-09 04:08:44.772043] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:16.451   04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:18:16.711   04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:18:16.967  [2024-12-09 04:08:45.317517] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:18:16.967  [2024-12-09 04:08:45.317821] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:16.968   04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:18:17.226  malloc0
00:18:17.226   04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:18:17.483   04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.fbnd8laNn2
00:18:17.741   04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
00:18:17.999   04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fbnd8laNn2
00:18:17.999   04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:18:17.999   04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:18:17.999   04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:18:17.999   04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.fbnd8laNn2
00:18:17.999   04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:18:17.999   04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=252126
00:18:17.999   04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:18:17.999   04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:18:17.999   04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 252126 /var/tmp/bdevperf.sock
00:18:17.999   04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 252126 ']'
00:18:17.999   04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:17.999   04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:17.999   04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:17.999  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:17.999   04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:17.999   04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:17.999  [2024-12-09 04:08:46.522050] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:18:17.999  [2024-12-09 04:08:46.522127] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid252126 ]
00:18:18.257  [2024-12-09 04:08:46.588928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:18.258  [2024-12-09 04:08:46.645453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:18:18.258   04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:18.258   04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:18:18.258   04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fbnd8laNn2
00:18:18.514   04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
00:18:18.772  [2024-12-09 04:08:47.326823] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:18:19.030  TLSTESTn1
00:18:19.030   04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:18:19.030  Running I/O for 10 seconds...
00:18:21.335       3374.00 IOPS,    13.18 MiB/s
[2024-12-09T03:08:50.844Z]      3400.50 IOPS,    13.28 MiB/s
[2024-12-09T03:08:51.778Z]      3417.67 IOPS,    13.35 MiB/s
[2024-12-09T03:08:52.711Z]      3451.75 IOPS,    13.48 MiB/s
[2024-12-09T03:08:53.643Z]      3440.60 IOPS,    13.44 MiB/s
[2024-12-09T03:08:54.577Z]      3431.17 IOPS,    13.40 MiB/s
[2024-12-09T03:08:55.948Z]      3432.29 IOPS,    13.41 MiB/s
[2024-12-09T03:08:56.878Z]      3383.38 IOPS,    13.22 MiB/s
[2024-12-09T03:08:57.812Z]      3394.33 IOPS,    13.26 MiB/s
[2024-12-09T03:08:57.812Z]      3400.00 IOPS,    13.28 MiB/s
00:18:29.236                                                                                                  Latency(us)
00:18:29.236  
[2024-12-09T03:08:57.812Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:18:29.236  Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:29.236  	 Verification LBA range: start 0x0 length 0x2000
00:18:29.236  	 TLSTESTn1           :      10.03    3401.53      13.29       0.00     0.00   37552.26    6310.87   37282.70
00:18:29.236  
[2024-12-09T03:08:57.812Z]  ===================================================================================================================
00:18:29.236  
[2024-12-09T03:08:57.812Z]  Total                       :               3401.53      13.29       0.00     0.00   37552.26    6310.87   37282.70
00:18:29.236  {
00:18:29.236    "results": [
00:18:29.236      {
00:18:29.236        "job": "TLSTESTn1",
00:18:29.236        "core_mask": "0x4",
00:18:29.236        "workload": "verify",
00:18:29.236        "status": "finished",
00:18:29.236        "verify_range": {
00:18:29.236          "start": 0,
00:18:29.236          "length": 8192
00:18:29.236        },
00:18:29.236        "queue_depth": 128,
00:18:29.236        "io_size": 4096,
00:18:29.236        "runtime": 10.032532,
00:18:29.236        "iops": 3401.534129170981,
00:18:29.236        "mibps": 13.287242692074145,
00:18:29.236        "io_failed": 0,
00:18:29.236        "io_timeout": 0,
00:18:29.236        "avg_latency_us": 37552.259868331086,
00:18:29.236        "min_latency_us": 6310.874074074074,
00:18:29.236        "max_latency_us": 37282.70222222222
00:18:29.236      }
00:18:29.236    ],
00:18:29.236    "core_count": 1
00:18:29.236  }
00:18:29.236   04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:18:29.236   04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 252126
00:18:29.236   04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 252126 ']'
00:18:29.236   04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 252126
00:18:29.236    04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:18:29.236   04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:29.236    04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 252126
00:18:29.236   04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:18:29.236   04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:18:29.236   04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 252126'
00:18:29.236  killing process with pid 252126
00:18:29.236   04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 252126
00:18:29.236  Received shutdown signal, test time was about 10.000000 seconds
00:18:29.236  
00:18:29.236                                                                                                  Latency(us)
00:18:29.236  
[2024-12-09T03:08:57.812Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:18:29.236  
[2024-12-09T03:08:57.812Z]  ===================================================================================================================
00:18:29.236  
[2024-12-09T03:08:57.812Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:18:29.236   04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 252126
00:18:29.494   04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.fbnd8laNn2
00:18:29.494   04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fbnd8laNn2
00:18:29.494   04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0
00:18:29.494   04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fbnd8laNn2
00:18:29.494   04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf
00:18:29.494   04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:29.494    04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf
00:18:29.494   04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:29.494   04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fbnd8laNn2
00:18:29.494   04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:18:29.494   04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:18:29.494   04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:18:29.494   04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.fbnd8laNn2
00:18:29.494   04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:18:29.494   04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:18:29.494   04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=253428
00:18:29.494   04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:18:29.494   04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 253428 /var/tmp/bdevperf.sock
00:18:29.494   04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 253428 ']'
00:18:29.494   04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:29.494   04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:29.494   04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:29.494  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:29.494   04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:29.495   04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:29.495  [2024-12-09 04:08:57.899884] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:18:29.495  [2024-12-09 04:08:57.899974] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid253428 ]
00:18:29.495  [2024-12-09 04:08:57.974359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:29.495  [2024-12-09 04:08:58.035106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:18:29.752   04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:29.752   04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:18:29.752   04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fbnd8laNn2
00:18:30.010  [2024-12-09 04:08:58.415134] keyring.c:  36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.fbnd8laNn2': 0100666
00:18:30.010  [2024-12-09 04:08:58.415176] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring
00:18:30.010  request:
00:18:30.010  {
00:18:30.010    "name": "key0",
00:18:30.010    "path": "/tmp/tmp.fbnd8laNn2",
00:18:30.010    "method": "keyring_file_add_key",
00:18:30.010    "req_id": 1
00:18:30.010  }
00:18:30.010  Got JSON-RPC error response
00:18:30.010  response:
00:18:30.010  {
00:18:30.010    "code": -1,
00:18:30.010    "message": "Operation not permitted"
00:18:30.010  }
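The -1 / "Operation not permitted" response above is the keyring's permission check firing: the PSK file was created with mode 0666, and keyring_file_check_path rejects key files readable by group or other. A minimal sketch of that condition, inferred from the error text in this log rather than from SPDK source; the temp file stands in for the log's PSK path:

```shell
# Sketch of the mode check that rejects the key above (assumption:
# anything other than owner-only 0600 is refused). Throwaway temp file,
# not the log's actual PSK path.
key=$(mktemp)                      # mktemp creates files 0600 by default
chmod 0666 "$key"                  # mimic the failing state from the log
mode=$(stat -c '%a' "$key")
if [ "$mode" != "600" ]; then
    echo "mode $mode: keyring would reject this key file"
fi
chmod 0600 "$key"                  # the fix applied later in this test
stat -c '%a' "$key"                # prints: 600
rm -f "$key"
```

The bdev_nvme_attach_controller failure that follows ("Required key not available") is a cascade of this same rejection: key0 was never added to the keyring.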
00:18:30.010   04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
00:18:30.267  [2024-12-09 04:08:58.679947] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:18:30.267  [2024-12-09 04:08:58.680013] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0
00:18:30.267  request:
00:18:30.267  {
00:18:30.267    "name": "TLSTEST",
00:18:30.267    "trtype": "tcp",
00:18:30.267    "traddr": "10.0.0.2",
00:18:30.267    "adrfam": "ipv4",
00:18:30.267    "trsvcid": "4420",
00:18:30.267    "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:18:30.267    "hostnqn": "nqn.2016-06.io.spdk:host1",
00:18:30.267    "prchk_reftag": false,
00:18:30.267    "prchk_guard": false,
00:18:30.267    "hdgst": false,
00:18:30.267    "ddgst": false,
00:18:30.267    "psk": "key0",
00:18:30.267    "allow_unrecognized_csi": false,
00:18:30.267    "method": "bdev_nvme_attach_controller",
00:18:30.267    "req_id": 1
00:18:30.267  }
00:18:30.267  Got JSON-RPC error response
00:18:30.267  response:
00:18:30.267  {
00:18:30.267    "code": -126,
00:18:30.267    "message": "Required key not available"
00:18:30.267  }
00:18:30.267   04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 253428
00:18:30.267   04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 253428 ']'
00:18:30.267   04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 253428
00:18:30.267    04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:18:30.267   04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:30.267    04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 253428
00:18:30.267   04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:18:30.267   04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:18:30.267   04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 253428'
00:18:30.267  killing process with pid 253428
00:18:30.267   04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 253428
00:18:30.267  Received shutdown signal, test time was about 10.000000 seconds
00:18:30.267                                                                                                  Latency(us)
[2024-12-09T03:08:58.843Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-12-09T03:08:58.844Z]  ===================================================================================================================
[2024-12-09T03:08:58.844Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:18:30.268   04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 253428
00:18:30.525   04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1
00:18:30.525   04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1
00:18:30.525   04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:30.525   04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:30.525   04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:30.525   04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 251842
00:18:30.525   04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 251842 ']'
00:18:30.525   04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 251842
00:18:30.525    04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:18:30.525   04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:30.525    04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 251842
00:18:30.525   04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:18:30.525   04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:18:30.525   04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 251842'
00:18:30.525  killing process with pid 251842
00:18:30.525   04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 251842
00:18:30.525   04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 251842
00:18:30.782   04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2
00:18:30.782   04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:18:30.782   04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:18:30.782   04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:30.782   04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=253599
00:18:30.782   04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:18:30.782   04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 253599
00:18:30.782   04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 253599 ']'
00:18:30.782   04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:30.782   04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:30.782   04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:30.782  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:30.782   04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:30.783   04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:30.783  [2024-12-09 04:08:59.229859] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:18:30.783  [2024-12-09 04:08:59.229942] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:30.783  [2024-12-09 04:08:59.299375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:31.041  [2024-12-09 04:08:59.359827] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:31.041  [2024-12-09 04:08:59.359879] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:31.041  [2024-12-09 04:08:59.359894] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:18:31.041  [2024-12-09 04:08:59.359915] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:18:31.041  [2024-12-09 04:08:59.359926] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:18:31.041  [2024-12-09 04:08:59.360596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:18:31.041   04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:31.041   04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:18:31.041   04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:18:31.041   04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable
00:18:31.041   04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:31.041   04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:31.041   04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.fbnd8laNn2
00:18:31.041   04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0
00:18:31.041   04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.fbnd8laNn2
00:18:31.041   04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt
00:18:31.041   04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:31.041    04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt
00:18:31.041   04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:31.041   04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.fbnd8laNn2
00:18:31.041   04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.fbnd8laNn2
00:18:31.041   04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:18:31.299  [2024-12-09 04:08:59.809841] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:31.299   04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:18:31.864   04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:18:31.864  [2024-12-09 04:09:00.419585] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:18:31.864  [2024-12-09 04:09:00.419861] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:31.864   04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:18:32.429  malloc0
00:18:32.429   04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:18:32.686   04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.fbnd8laNn2
00:18:32.944  [2024-12-09 04:09:01.284903] keyring.c:  36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.fbnd8laNn2': 0100666
00:18:32.944  [2024-12-09 04:09:01.284945] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring
00:18:32.944  request:
00:18:32.944  {
00:18:32.944    "name": "key0",
00:18:32.944    "path": "/tmp/tmp.fbnd8laNn2",
00:18:32.944    "method": "keyring_file_add_key",
00:18:32.944    "req_id": 1
00:18:32.944  }
00:18:32.944  Got JSON-RPC error response
00:18:32.944  response:
00:18:32.944  {
00:18:32.944    "code": -1,
00:18:32.944    "message": "Operation not permitted"
00:18:32.944  }
00:18:32.944   04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
00:18:33.202  [2024-12-09 04:09:01.561671] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist
00:18:33.202  [2024-12-09 04:09:01.561730] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport
00:18:33.202  request:
00:18:33.202  {
00:18:33.202    "nqn": "nqn.2016-06.io.spdk:cnode1",
00:18:33.202    "host": "nqn.2016-06.io.spdk:host1",
00:18:33.202    "psk": "key0",
00:18:33.202    "method": "nvmf_subsystem_add_host",
00:18:33.202    "req_id": 1
00:18:33.202  }
00:18:33.202  Got JSON-RPC error response
00:18:33.202  response:
00:18:33.202  {
00:18:33.202    "code": -32603,
00:18:33.202    "message": "Internal error"
00:18:33.202  }
00:18:33.202   04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1
00:18:33.202   04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:33.202   04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:33.202   04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:33.202   04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 253599
00:18:33.202   04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 253599 ']'
00:18:33.202   04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 253599
00:18:33.202    04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:18:33.202   04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:33.202    04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 253599
00:18:33.202   04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:18:33.202   04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:18:33.202   04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 253599'
00:18:33.202  killing process with pid 253599
00:18:33.202   04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 253599
00:18:33.202   04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 253599
00:18:33.460   04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.fbnd8laNn2
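The chmod above is the pivot of this test phase: with the key file now 0600, the retried keyring_file_add_key and nvmf_subsystem_add_host calls below complete without the earlier error output. A hedged sketch of a pre-flight check a setup script could run before handing a PSK to the keyring; the helper name is illustrative and not part of tls.sh:

```shell
# Illustrative helper (not from tls.sh): force a PSK file to owner-only
# mode before registering it, mirroring the chmod 0600 step in the log.
ensure_psk_mode() {
    f=$1
    mode=$(stat -c '%a' "$f")
    if [ "$mode" != "600" ]; then
        chmod 0600 "$f"            # tighten group/other bits
    fi
    stat -c '%a' "$f"              # report the final mode
}

tmpkey=$(mktemp)
chmod 0644 "$tmpkey"               # simulate an overly permissive key
ensure_psk_mode "$tmpkey"          # prints: 600
rm -f "$tmpkey"
```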
00:18:33.460   04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2
00:18:33.460   04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:18:33.460   04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:18:33.460   04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:33.460   04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=254011
00:18:33.460   04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:18:33.460   04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 254011
00:18:33.460   04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 254011 ']'
00:18:33.460   04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:33.460   04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:33.460   04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:33.460  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:33.460   04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:33.460   04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:33.460  [2024-12-09 04:09:01.906361] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:18:33.460  [2024-12-09 04:09:01.906458] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:33.460  [2024-12-09 04:09:01.978861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:33.460  [2024-12-09 04:09:02.032407] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:33.460  [2024-12-09 04:09:02.032480] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:33.460  [2024-12-09 04:09:02.032505] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:18:33.460  [2024-12-09 04:09:02.032517] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:18:33.460  [2024-12-09 04:09:02.032527] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:18:33.460  [2024-12-09 04:09:02.033176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:18:33.718   04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:33.718   04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:18:33.718   04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:18:33.718   04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable
00:18:33.718   04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:33.718   04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:33.718   04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.fbnd8laNn2
00:18:33.718   04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.fbnd8laNn2
00:18:33.718   04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:18:33.975  [2024-12-09 04:09:02.423922] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:33.975   04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:18:34.233   04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:18:34.490  [2024-12-09 04:09:02.961346] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:18:34.490  [2024-12-09 04:09:02.961591] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:34.490   04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:18:34.747  malloc0
00:18:34.748   04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:18:35.004   04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.fbnd8laNn2
00:18:35.262   04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
00:18:35.520   04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=254298
00:18:35.520   04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:18:35.520   04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:18:35.520   04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 254298 /var/tmp/bdevperf.sock
00:18:35.520   04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 254298 ']'
00:18:35.520   04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:35.520   04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:35.520   04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:35.520  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:35.520   04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:35.520   04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:35.778  [2024-12-09 04:09:04.131236] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:18:35.778  [2024-12-09 04:09:04.131364] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid254298 ]
00:18:35.778  [2024-12-09 04:09:04.200093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:35.778  [2024-12-09 04:09:04.259095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:18:36.035   04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:36.036   04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:18:36.036   04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fbnd8laNn2
00:18:36.294   04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
00:18:36.551  [2024-12-09 04:09:04.911706] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:18:36.551  TLSTESTn1
00:18:36.551    04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config
00:18:36.808   04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{
00:18:36.808    "subsystems": [
00:18:36.808      {
00:18:36.808        "subsystem": "keyring",
00:18:36.808        "config": [
00:18:36.808          {
00:18:36.808            "method": "keyring_file_add_key",
00:18:36.808            "params": {
00:18:36.808              "name": "key0",
00:18:36.808              "path": "/tmp/tmp.fbnd8laNn2"
00:18:36.808            }
00:18:36.808          }
00:18:36.808        ]
00:18:36.808      },
00:18:36.808      {
00:18:36.808        "subsystem": "iobuf",
00:18:36.808        "config": [
00:18:36.808          {
00:18:36.808            "method": "iobuf_set_options",
00:18:36.808            "params": {
00:18:36.808              "small_pool_count": 8192,
00:18:36.808              "large_pool_count": 1024,
00:18:36.808              "small_bufsize": 8192,
00:18:36.808              "large_bufsize": 135168,
00:18:36.808              "enable_numa": false
00:18:36.808            }
00:18:36.808          }
00:18:36.808        ]
00:18:36.808      },
00:18:36.808      {
00:18:36.808        "subsystem": "sock",
00:18:36.808        "config": [
00:18:36.808          {
00:18:36.808            "method": "sock_set_default_impl",
00:18:36.808            "params": {
00:18:36.808              "impl_name": "posix"
00:18:36.808            }
00:18:36.808          },
00:18:36.808          {
00:18:36.808            "method": "sock_impl_set_options",
00:18:36.808            "params": {
00:18:36.808              "impl_name": "ssl",
00:18:36.808              "recv_buf_size": 4096,
00:18:36.808              "send_buf_size": 4096,
00:18:36.808              "enable_recv_pipe": true,
00:18:36.808              "enable_quickack": false,
00:18:36.808              "enable_placement_id": 0,
00:18:36.808              "enable_zerocopy_send_server": true,
00:18:36.808              "enable_zerocopy_send_client": false,
00:18:36.808              "zerocopy_threshold": 0,
00:18:36.808              "tls_version": 0,
00:18:36.808              "enable_ktls": false
00:18:36.808            }
00:18:36.808          },
00:18:36.808          {
00:18:36.808            "method": "sock_impl_set_options",
00:18:36.808            "params": {
00:18:36.808              "impl_name": "posix",
00:18:36.808              "recv_buf_size": 2097152,
00:18:36.808              "send_buf_size": 2097152,
00:18:36.808              "enable_recv_pipe": true,
00:18:36.808              "enable_quickack": false,
00:18:36.808              "enable_placement_id": 0,
00:18:36.808              "enable_zerocopy_send_server": true,
00:18:36.808              "enable_zerocopy_send_client": false,
00:18:36.808              "zerocopy_threshold": 0,
00:18:36.808              "tls_version": 0,
00:18:36.808              "enable_ktls": false
00:18:36.808            }
00:18:36.808          }
00:18:36.808        ]
00:18:36.808      },
00:18:36.808      {
00:18:36.808        "subsystem": "vmd",
00:18:36.808        "config": []
00:18:36.808      },
00:18:36.808      {
00:18:36.808        "subsystem": "accel",
00:18:36.808        "config": [
00:18:36.808          {
00:18:36.808            "method": "accel_set_options",
00:18:36.808            "params": {
00:18:36.808              "small_cache_size": 128,
00:18:36.808              "large_cache_size": 16,
00:18:36.808              "task_count": 2048,
00:18:36.808              "sequence_count": 2048,
00:18:36.808              "buf_count": 2048
00:18:36.808            }
00:18:36.808          }
00:18:36.808        ]
00:18:36.808      },
00:18:36.808      {
00:18:36.808        "subsystem": "bdev",
00:18:36.808        "config": [
00:18:36.808          {
00:18:36.808            "method": "bdev_set_options",
00:18:36.808            "params": {
00:18:36.808              "bdev_io_pool_size": 65535,
00:18:36.808              "bdev_io_cache_size": 256,
00:18:36.808              "bdev_auto_examine": true,
00:18:36.808              "iobuf_small_cache_size": 128,
00:18:36.808              "iobuf_large_cache_size": 16
00:18:36.808            }
00:18:36.808          },
00:18:36.808          {
00:18:36.808            "method": "bdev_raid_set_options",
00:18:36.808            "params": {
00:18:36.808              "process_window_size_kb": 1024,
00:18:36.808              "process_max_bandwidth_mb_sec": 0
00:18:36.808            }
00:18:36.808          },
00:18:36.808          {
00:18:36.808            "method": "bdev_iscsi_set_options",
00:18:36.808            "params": {
00:18:36.808              "timeout_sec": 30
00:18:36.808            }
00:18:36.808          },
00:18:36.808          {
00:18:36.808            "method": "bdev_nvme_set_options",
00:18:36.808            "params": {
00:18:36.808              "action_on_timeout": "none",
00:18:36.808              "timeout_us": 0,
00:18:36.808              "timeout_admin_us": 0,
00:18:36.808              "keep_alive_timeout_ms": 10000,
00:18:36.808              "arbitration_burst": 0,
00:18:36.808              "low_priority_weight": 0,
00:18:36.808              "medium_priority_weight": 0,
00:18:36.808              "high_priority_weight": 0,
00:18:36.808              "nvme_adminq_poll_period_us": 10000,
00:18:36.808              "nvme_ioq_poll_period_us": 0,
00:18:36.808              "io_queue_requests": 0,
00:18:36.808              "delay_cmd_submit": true,
00:18:36.808              "transport_retry_count": 4,
00:18:36.808              "bdev_retry_count": 3,
00:18:36.808              "transport_ack_timeout": 0,
00:18:36.808              "ctrlr_loss_timeout_sec": 0,
00:18:36.808              "reconnect_delay_sec": 0,
00:18:36.808              "fast_io_fail_timeout_sec": 0,
00:18:36.808              "disable_auto_failback": false,
00:18:36.808              "generate_uuids": false,
00:18:36.808              "transport_tos": 0,
00:18:36.808              "nvme_error_stat": false,
00:18:36.808              "rdma_srq_size": 0,
00:18:36.808              "io_path_stat": false,
00:18:36.808              "allow_accel_sequence": false,
00:18:36.808              "rdma_max_cq_size": 0,
00:18:36.808              "rdma_cm_event_timeout_ms": 0,
00:18:36.808              "dhchap_digests": [
00:18:36.808                "sha256",
00:18:36.808                "sha384",
00:18:36.808                "sha512"
00:18:36.808              ],
00:18:36.808              "dhchap_dhgroups": [
00:18:36.808                "null",
00:18:36.808                "ffdhe2048",
00:18:36.808                "ffdhe3072",
00:18:36.808                "ffdhe4096",
00:18:36.808                "ffdhe6144",
00:18:36.808                "ffdhe8192"
00:18:36.808              ]
00:18:36.808            }
00:18:36.808          },
00:18:36.808          {
00:18:36.808            "method": "bdev_nvme_set_hotplug",
00:18:36.808            "params": {
00:18:36.808              "period_us": 100000,
00:18:36.808              "enable": false
00:18:36.808            }
00:18:36.808          },
00:18:36.808          {
00:18:36.808            "method": "bdev_malloc_create",
00:18:36.808            "params": {
00:18:36.808              "name": "malloc0",
00:18:36.808              "num_blocks": 8192,
00:18:36.808              "block_size": 4096,
00:18:36.808              "physical_block_size": 4096,
00:18:36.808              "uuid": "8a7e72c0-3ebd-4675-a56e-ac193c25a21b",
00:18:36.808              "optimal_io_boundary": 0,
00:18:36.808              "md_size": 0,
00:18:36.808              "dif_type": 0,
00:18:36.808              "dif_is_head_of_md": false,
00:18:36.808              "dif_pi_format": 0
00:18:36.808            }
00:18:36.808          },
00:18:36.808          {
00:18:36.808            "method": "bdev_wait_for_examine"
00:18:36.808          }
00:18:36.808        ]
00:18:36.808      },
00:18:36.808      {
00:18:36.808        "subsystem": "nbd",
00:18:36.808        "config": []
00:18:36.808      },
00:18:36.808      {
00:18:36.808        "subsystem": "scheduler",
00:18:36.808        "config": [
00:18:36.808          {
00:18:36.808            "method": "framework_set_scheduler",
00:18:36.808            "params": {
00:18:36.808              "name": "static"
00:18:36.808            }
00:18:36.808          }
00:18:36.808        ]
00:18:36.808      },
00:18:36.808      {
00:18:36.808        "subsystem": "nvmf",
00:18:36.808        "config": [
00:18:36.808          {
00:18:36.808            "method": "nvmf_set_config",
00:18:36.808            "params": {
00:18:36.808              "discovery_filter": "match_any",
00:18:36.808              "admin_cmd_passthru": {
00:18:36.808                "identify_ctrlr": false
00:18:36.808              },
00:18:36.808              "dhchap_digests": [
00:18:36.808                "sha256",
00:18:36.808                "sha384",
00:18:36.808                "sha512"
00:18:36.808              ],
00:18:36.808              "dhchap_dhgroups": [
00:18:36.808                "null",
00:18:36.808                "ffdhe2048",
00:18:36.808                "ffdhe3072",
00:18:36.808                "ffdhe4096",
00:18:36.808                "ffdhe6144",
00:18:36.808                "ffdhe8192"
00:18:36.808              ]
00:18:36.808            }
00:18:36.808          },
00:18:36.808          {
00:18:36.808            "method": "nvmf_set_max_subsystems",
00:18:36.808            "params": {
00:18:36.808              "max_subsystems": 1024
00:18:36.808            }
00:18:36.808          },
00:18:36.808          {
00:18:36.808            "method": "nvmf_set_crdt",
00:18:36.808            "params": {
00:18:36.808              "crdt1": 0,
00:18:36.808              "crdt2": 0,
00:18:36.808              "crdt3": 0
00:18:36.808            }
00:18:36.808          },
00:18:36.808          {
00:18:36.808            "method": "nvmf_create_transport",
00:18:36.808            "params": {
00:18:36.808              "trtype": "TCP",
00:18:36.808              "max_queue_depth": 128,
00:18:36.808              "max_io_qpairs_per_ctrlr": 127,
00:18:36.808              "in_capsule_data_size": 4096,
00:18:36.808              "max_io_size": 131072,
00:18:36.808              "io_unit_size": 131072,
00:18:36.808              "max_aq_depth": 128,
00:18:36.808              "num_shared_buffers": 511,
00:18:36.808              "buf_cache_size": 4294967295,
00:18:36.808              "dif_insert_or_strip": false,
00:18:36.808              "zcopy": false,
00:18:36.808              "c2h_success": false,
00:18:36.808              "sock_priority": 0,
00:18:36.808              "abort_timeout_sec": 1,
00:18:36.808              "ack_timeout": 0,
00:18:36.808              "data_wr_pool_size": 0
00:18:36.808            }
00:18:36.808          },
00:18:36.808          {
00:18:36.808            "method": "nvmf_create_subsystem",
00:18:36.808            "params": {
00:18:36.808              "nqn": "nqn.2016-06.io.spdk:cnode1",
00:18:36.808              "allow_any_host": false,
00:18:36.808              "serial_number": "SPDK00000000000001",
00:18:36.809              "model_number": "SPDK bdev Controller",
00:18:36.809              "max_namespaces": 10,
00:18:36.809              "min_cntlid": 1,
00:18:36.809              "max_cntlid": 65519,
00:18:36.809              "ana_reporting": false
00:18:36.809            }
00:18:36.809          },
00:18:36.809          {
00:18:36.809            "method": "nvmf_subsystem_add_host",
00:18:36.809            "params": {
00:18:36.809              "nqn": "nqn.2016-06.io.spdk:cnode1",
00:18:36.809              "host": "nqn.2016-06.io.spdk:host1",
00:18:36.809              "psk": "key0"
00:18:36.809            }
00:18:36.809          },
00:18:36.809          {
00:18:36.809            "method": "nvmf_subsystem_add_ns",
00:18:36.809            "params": {
00:18:36.809              "nqn": "nqn.2016-06.io.spdk:cnode1",
00:18:36.809              "namespace": {
00:18:36.809                "nsid": 1,
00:18:36.809                "bdev_name": "malloc0",
00:18:36.809                "nguid": "8A7E72C03EBD4675A56EAC193C25A21B",
00:18:36.809                "uuid": "8a7e72c0-3ebd-4675-a56e-ac193c25a21b",
00:18:36.809                "no_auto_visible": false
00:18:36.809              }
00:18:36.809            }
00:18:36.809          },
00:18:36.809          {
00:18:36.809            "method": "nvmf_subsystem_add_listener",
00:18:36.809            "params": {
00:18:36.809              "nqn": "nqn.2016-06.io.spdk:cnode1",
00:18:36.809              "listen_address": {
00:18:36.809                "trtype": "TCP",
00:18:36.809                "adrfam": "IPv4",
00:18:36.809                "traddr": "10.0.0.2",
00:18:36.809                "trsvcid": "4420"
00:18:36.809              },
00:18:36.809              "secure_channel": true
00:18:36.809            }
00:18:36.809          }
00:18:36.809        ]
00:18:36.809      }
00:18:36.809    ]
00:18:36.809  }'
00:18:36.809    04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config
00:18:37.373   04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{
00:18:37.373    "subsystems": [
00:18:37.373      {
00:18:37.373        "subsystem": "keyring",
00:18:37.373        "config": [
00:18:37.373          {
00:18:37.373            "method": "keyring_file_add_key",
00:18:37.373            "params": {
00:18:37.373              "name": "key0",
00:18:37.373              "path": "/tmp/tmp.fbnd8laNn2"
00:18:37.373            }
00:18:37.373          }
00:18:37.373        ]
00:18:37.373      },
00:18:37.373      {
00:18:37.373        "subsystem": "iobuf",
00:18:37.373        "config": [
00:18:37.373          {
00:18:37.373            "method": "iobuf_set_options",
00:18:37.373            "params": {
00:18:37.373              "small_pool_count": 8192,
00:18:37.373              "large_pool_count": 1024,
00:18:37.373              "small_bufsize": 8192,
00:18:37.373              "large_bufsize": 135168,
00:18:37.373              "enable_numa": false
00:18:37.373            }
00:18:37.373          }
00:18:37.373        ]
00:18:37.373      },
00:18:37.373      {
00:18:37.373        "subsystem": "sock",
00:18:37.373        "config": [
00:18:37.373          {
00:18:37.373            "method": "sock_set_default_impl",
00:18:37.373            "params": {
00:18:37.373              "impl_name": "posix"
00:18:37.373            }
00:18:37.373          },
00:18:37.373          {
00:18:37.373            "method": "sock_impl_set_options",
00:18:37.373            "params": {
00:18:37.373              "impl_name": "ssl",
00:18:37.373              "recv_buf_size": 4096,
00:18:37.373              "send_buf_size": 4096,
00:18:37.373              "enable_recv_pipe": true,
00:18:37.373              "enable_quickack": false,
00:18:37.373              "enable_placement_id": 0,
00:18:37.373              "enable_zerocopy_send_server": true,
00:18:37.373              "enable_zerocopy_send_client": false,
00:18:37.373              "zerocopy_threshold": 0,
00:18:37.373              "tls_version": 0,
00:18:37.373              "enable_ktls": false
00:18:37.373            }
00:18:37.373          },
00:18:37.373          {
00:18:37.373            "method": "sock_impl_set_options",
00:18:37.373            "params": {
00:18:37.373              "impl_name": "posix",
00:18:37.373              "recv_buf_size": 2097152,
00:18:37.373              "send_buf_size": 2097152,
00:18:37.373              "enable_recv_pipe": true,
00:18:37.373              "enable_quickack": false,
00:18:37.373              "enable_placement_id": 0,
00:18:37.373              "enable_zerocopy_send_server": true,
00:18:37.373              "enable_zerocopy_send_client": false,
00:18:37.373              "zerocopy_threshold": 0,
00:18:37.373              "tls_version": 0,
00:18:37.373              "enable_ktls": false
00:18:37.373            }
00:18:37.373          }
00:18:37.373        ]
00:18:37.373      },
00:18:37.373      {
00:18:37.373        "subsystem": "vmd",
00:18:37.373        "config": []
00:18:37.373      },
00:18:37.373      {
00:18:37.373        "subsystem": "accel",
00:18:37.373        "config": [
00:18:37.373          {
00:18:37.373            "method": "accel_set_options",
00:18:37.373            "params": {
00:18:37.373              "small_cache_size": 128,
00:18:37.373              "large_cache_size": 16,
00:18:37.373              "task_count": 2048,
00:18:37.373              "sequence_count": 2048,
00:18:37.373              "buf_count": 2048
00:18:37.373            }
00:18:37.373          }
00:18:37.373        ]
00:18:37.373      },
00:18:37.373      {
00:18:37.373        "subsystem": "bdev",
00:18:37.373        "config": [
00:18:37.373          {
00:18:37.373            "method": "bdev_set_options",
00:18:37.373            "params": {
00:18:37.373              "bdev_io_pool_size": 65535,
00:18:37.373              "bdev_io_cache_size": 256,
00:18:37.373              "bdev_auto_examine": true,
00:18:37.373              "iobuf_small_cache_size": 128,
00:18:37.373              "iobuf_large_cache_size": 16
00:18:37.373            }
00:18:37.373          },
00:18:37.373          {
00:18:37.373            "method": "bdev_raid_set_options",
00:18:37.373            "params": {
00:18:37.373              "process_window_size_kb": 1024,
00:18:37.373              "process_max_bandwidth_mb_sec": 0
00:18:37.373            }
00:18:37.373          },
00:18:37.373          {
00:18:37.373            "method": "bdev_iscsi_set_options",
00:18:37.373            "params": {
00:18:37.373              "timeout_sec": 30
00:18:37.373            }
00:18:37.373          },
00:18:37.373          {
00:18:37.373            "method": "bdev_nvme_set_options",
00:18:37.373            "params": {
00:18:37.373              "action_on_timeout": "none",
00:18:37.373              "timeout_us": 0,
00:18:37.373              "timeout_admin_us": 0,
00:18:37.373              "keep_alive_timeout_ms": 10000,
00:18:37.373              "arbitration_burst": 0,
00:18:37.373              "low_priority_weight": 0,
00:18:37.373              "medium_priority_weight": 0,
00:18:37.373              "high_priority_weight": 0,
00:18:37.373              "nvme_adminq_poll_period_us": 10000,
00:18:37.373              "nvme_ioq_poll_period_us": 0,
00:18:37.373              "io_queue_requests": 512,
00:18:37.373              "delay_cmd_submit": true,
00:18:37.373              "transport_retry_count": 4,
00:18:37.373              "bdev_retry_count": 3,
00:18:37.373              "transport_ack_timeout": 0,
00:18:37.373              "ctrlr_loss_timeout_sec": 0,
00:18:37.373              "reconnect_delay_sec": 0,
00:18:37.373              "fast_io_fail_timeout_sec": 0,
00:18:37.373              "disable_auto_failback": false,
00:18:37.373              "generate_uuids": false,
00:18:37.373              "transport_tos": 0,
00:18:37.373              "nvme_error_stat": false,
00:18:37.373              "rdma_srq_size": 0,
00:18:37.373              "io_path_stat": false,
00:18:37.373              "allow_accel_sequence": false,
00:18:37.373              "rdma_max_cq_size": 0,
00:18:37.373              "rdma_cm_event_timeout_ms": 0,
00:18:37.373              "dhchap_digests": [
00:18:37.373                "sha256",
00:18:37.373                "sha384",
00:18:37.373                "sha512"
00:18:37.373              ],
00:18:37.373              "dhchap_dhgroups": [
00:18:37.373                "null",
00:18:37.373                "ffdhe2048",
00:18:37.373                "ffdhe3072",
00:18:37.373                "ffdhe4096",
00:18:37.373                "ffdhe6144",
00:18:37.373                "ffdhe8192"
00:18:37.373              ]
00:18:37.373            }
00:18:37.373          },
00:18:37.373          {
00:18:37.373            "method": "bdev_nvme_attach_controller",
00:18:37.373            "params": {
00:18:37.373              "name": "TLSTEST",
00:18:37.373              "trtype": "TCP",
00:18:37.373              "adrfam": "IPv4",
00:18:37.373              "traddr": "10.0.0.2",
00:18:37.373              "trsvcid": "4420",
00:18:37.373              "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:18:37.373              "prchk_reftag": false,
00:18:37.373              "prchk_guard": false,
00:18:37.373              "ctrlr_loss_timeout_sec": 0,
00:18:37.373              "reconnect_delay_sec": 0,
00:18:37.373              "fast_io_fail_timeout_sec": 0,
00:18:37.373              "psk": "key0",
00:18:37.373              "hostnqn": "nqn.2016-06.io.spdk:host1",
00:18:37.373              "hdgst": false,
00:18:37.373              "ddgst": false,
00:18:37.373              "multipath": "multipath"
00:18:37.373            }
00:18:37.373          },
00:18:37.373          {
00:18:37.373            "method": "bdev_nvme_set_hotplug",
00:18:37.374            "params": {
00:18:37.374              "period_us": 100000,
00:18:37.374              "enable": false
00:18:37.374            }
00:18:37.374          },
00:18:37.374          {
00:18:37.374            "method": "bdev_wait_for_examine"
00:18:37.374          }
00:18:37.374        ]
00:18:37.374      },
00:18:37.374      {
00:18:37.374        "subsystem": "nbd",
00:18:37.374        "config": []
00:18:37.374      }
00:18:37.374    ]
00:18:37.374  }'
00:18:37.374   04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 254298
00:18:37.374   04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 254298 ']'
00:18:37.374   04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 254298
00:18:37.374    04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:18:37.374   04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:37.374    04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 254298
00:18:37.374   04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:18:37.374   04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:18:37.374   04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 254298'
00:18:37.374  killing process with pid 254298
00:18:37.374   04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 254298
00:18:37.374  Received shutdown signal, test time was about 10.000000 seconds
00:18:37.374  
[2024-12-09T03:09:05.950Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average(us)        min(us)        max(us)
[2024-12-09T03:09:05.950Z]  ===================================================================================================================
[2024-12-09T03:09:05.950Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:18:37.374   04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 254298
00:18:37.374   04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 254011
00:18:37.374   04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 254011 ']'
00:18:37.374   04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 254011
00:18:37.374    04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:18:37.631   04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:37.631    04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 254011
00:18:37.631   04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:18:37.631   04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:18:37.631   04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 254011'
00:18:37.631  killing process with pid 254011
00:18:37.631   04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 254011
00:18:37.631   04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 254011
00:18:37.889   04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62
00:18:37.889   04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:18:37.889   04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:18:37.889    04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{
00:18:37.889    "subsystems": [
00:18:37.889      {
00:18:37.889        "subsystem": "keyring",
00:18:37.889        "config": [
00:18:37.890          {
00:18:37.890            "method": "keyring_file_add_key",
00:18:37.890            "params": {
00:18:37.890              "name": "key0",
00:18:37.890              "path": "/tmp/tmp.fbnd8laNn2"
00:18:37.890            }
00:18:37.890          }
00:18:37.890        ]
00:18:37.890      },
00:18:37.890      {
00:18:37.890        "subsystem": "iobuf",
00:18:37.890        "config": [
00:18:37.890          {
00:18:37.890            "method": "iobuf_set_options",
00:18:37.890            "params": {
00:18:37.890              "small_pool_count": 8192,
00:18:37.890              "large_pool_count": 1024,
00:18:37.890              "small_bufsize": 8192,
00:18:37.890              "large_bufsize": 135168,
00:18:37.890              "enable_numa": false
00:18:37.890            }
00:18:37.890          }
00:18:37.890        ]
00:18:37.890      },
00:18:37.890      {
00:18:37.890        "subsystem": "sock",
00:18:37.890        "config": [
00:18:37.890          {
00:18:37.890            "method": "sock_set_default_impl",
00:18:37.890            "params": {
00:18:37.890              "impl_name": "posix"
00:18:37.890            }
00:18:37.890          },
00:18:37.890          {
00:18:37.890            "method": "sock_impl_set_options",
00:18:37.890            "params": {
00:18:37.890              "impl_name": "ssl",
00:18:37.890              "recv_buf_size": 4096,
00:18:37.890              "send_buf_size": 4096,
00:18:37.890              "enable_recv_pipe": true,
00:18:37.890              "enable_quickack": false,
00:18:37.890              "enable_placement_id": 0,
00:18:37.890              "enable_zerocopy_send_server": true,
00:18:37.890              "enable_zerocopy_send_client": false,
00:18:37.890              "zerocopy_threshold": 0,
00:18:37.890              "tls_version": 0,
00:18:37.890              "enable_ktls": false
00:18:37.890            }
00:18:37.890          },
00:18:37.890          {
00:18:37.890            "method": "sock_impl_set_options",
00:18:37.890            "params": {
00:18:37.890              "impl_name": "posix",
00:18:37.890              "recv_buf_size": 2097152,
00:18:37.890              "send_buf_size": 2097152,
00:18:37.890              "enable_recv_pipe": true,
00:18:37.890              "enable_quickack": false,
00:18:37.890              "enable_placement_id": 0,
00:18:37.890              "enable_zerocopy_send_server": true,
00:18:37.890              "enable_zerocopy_send_client": false,
00:18:37.890              "zerocopy_threshold": 0,
00:18:37.890              "tls_version": 0,
00:18:37.890              "enable_ktls": false
00:18:37.890            }
00:18:37.890          }
00:18:37.890        ]
00:18:37.890      },
00:18:37.890      {
00:18:37.890        "subsystem": "vmd",
00:18:37.890        "config": []
00:18:37.890      },
00:18:37.890      {
00:18:37.890        "subsystem": "accel",
00:18:37.890        "config": [
00:18:37.890          {
00:18:37.890            "method": "accel_set_options",
00:18:37.890            "params": {
00:18:37.890              "small_cache_size": 128,
00:18:37.890              "large_cache_size": 16,
00:18:37.890              "task_count": 2048,
00:18:37.890              "sequence_count": 2048,
00:18:37.890              "buf_count": 2048
00:18:37.890            }
00:18:37.890          }
00:18:37.890        ]
00:18:37.890      },
00:18:37.890      {
00:18:37.890        "subsystem": "bdev",
00:18:37.890        "config": [
00:18:37.890          {
00:18:37.890            "method": "bdev_set_options",
00:18:37.890            "params": {
00:18:37.890              "bdev_io_pool_size": 65535,
00:18:37.890              "bdev_io_cache_size": 256,
00:18:37.890              "bdev_auto_examine": true,
00:18:37.890              "iobuf_small_cache_size": 128,
00:18:37.890              "iobuf_large_cache_size": 16
00:18:37.890            }
00:18:37.890          },
00:18:37.890          {
00:18:37.890            "method": "bdev_raid_set_options",
00:18:37.890            "params": {
00:18:37.890              "process_window_size_kb": 1024,
00:18:37.890              "process_max_bandwidth_mb_sec": 0
00:18:37.890            }
00:18:37.890          },
00:18:37.890          {
00:18:37.890            "method": "bdev_iscsi_set_options",
00:18:37.890            "params": {
00:18:37.890              "timeout_sec": 30
00:18:37.890            }
00:18:37.890          },
00:18:37.890          {
00:18:37.890            "method": "bdev_nvme_set_options",
00:18:37.890            "params": {
00:18:37.890              "action_on_timeout": "none",
00:18:37.890              "timeout_us": 0,
00:18:37.890              "timeout_admin_us": 0,
00:18:37.890              "keep_alive_timeout_ms": 10000,
00:18:37.890              "arbitration_burst": 0,
00:18:37.890              "low_priority_weight": 0,
00:18:37.890              "medium_priority_weight": 0,
00:18:37.890              "high_priority_weight": 0,
00:18:37.890              "nvme_adminq_poll_period_us": 10000,
00:18:37.890              "nvme_ioq_poll_period_us": 0,
00:18:37.890              "io_queue_requests": 0,
00:18:37.890              "delay_cmd_submit": true,
00:18:37.890              "transport_retry_count": 4,
00:18:37.890              "bdev_retry_count": 3,
00:18:37.890              "transport_ack_timeout": 0,
00:18:37.890              "ctrlr_loss_timeout_sec": 0,
00:18:37.890              "reconnect_delay_sec": 0,
00:18:37.890              "fast_io_fail_timeout_sec": 0,
00:18:37.890              "disable_auto_failback": false,
00:18:37.890              "generate_uuids": false,
00:18:37.890              "transport_tos": 0,
00:18:37.890              "nvme_error_stat": false,
00:18:37.890              "rdma_srq_size": 0,
00:18:37.890              "io_path_stat": false,
00:18:37.890              "allow_accel_sequence": false,
00:18:37.890              "rdma_max_cq_size": 0,
00:18:37.890              "rdma_cm_event_timeout_ms": 0,
00:18:37.890              "dhchap_digests": [
00:18:37.890                "sha256",
00:18:37.890                "sha384",
00:18:37.890                "sha512"
00:18:37.890              ],
00:18:37.890              "dhchap_dhgroups": [
00:18:37.890                "null",
00:18:37.890                "ffdhe2048",
00:18:37.890                "ffdhe3072",
00:18:37.890                "ffdhe4096",
00:18:37.890                "ffdhe6144",
00:18:37.890                "ffdhe8192"
00:18:37.890              ]
00:18:37.890            }
00:18:37.890          },
00:18:37.890          {
00:18:37.890            "method": "bdev_nvme_set_hotplug",
00:18:37.890            "params": {
00:18:37.890              "period_us": 100000,
00:18:37.890              "enable": false
00:18:37.890            }
00:18:37.890          },
00:18:37.890          {
00:18:37.890            "method": "bdev_malloc_create",
00:18:37.890            "params": {
00:18:37.890              "name": "malloc0",
00:18:37.890              "num_blocks": 8192,
00:18:37.890              "block_size": 4096,
00:18:37.890              "physical_block_size": 4096,
00:18:37.890              "uuid": "8a7e72c0-3ebd-4675-a56e-ac193c25a21b",
00:18:37.890              "optimal_io_boundary": 0,
00:18:37.890              "md_size": 0,
00:18:37.890              "dif_type": 0,
00:18:37.890              "dif_is_head_of_md": false,
00:18:37.890              "dif_pi_format": 0
00:18:37.890            }
00:18:37.890          },
00:18:37.890          {
00:18:37.890            "method": "bdev_wait_for_examine"
00:18:37.890          }
00:18:37.890        ]
00:18:37.890      },
00:18:37.890      {
00:18:37.890        "subsystem": "nbd",
00:18:37.890        "config": []
00:18:37.890      },
00:18:37.890      {
00:18:37.890        "subsystem": "scheduler",
00:18:37.890        "config": [
00:18:37.890          {
00:18:37.890            "method": "framework_set_scheduler",
00:18:37.890            "params": {
00:18:37.890              "name": "static"
00:18:37.890            }
00:18:37.890          }
00:18:37.890        ]
00:18:37.890      },
00:18:37.890      {
00:18:37.890        "subsystem": "nvmf",
00:18:37.890        "config": [
00:18:37.890          {
00:18:37.890            "method": "nvmf_set_config",
00:18:37.890            "params": {
00:18:37.890              "discovery_filter": "match_any",
00:18:37.890              "admin_cmd_passthru": {
00:18:37.890                "identify_ctrlr": false
00:18:37.890              },
00:18:37.890              "dhchap_digests": [
00:18:37.890                "sha256",
00:18:37.890                "sha384",
00:18:37.890                "sha512"
00:18:37.890              ],
00:18:37.890              "dhchap_dhgroups": [
00:18:37.890                "null",
00:18:37.890                "ffdhe2048",
00:18:37.890                "ffdhe3072",
00:18:37.890                "ffdhe4096",
00:18:37.890                "ffdhe6144",
00:18:37.890                "ffdhe8192"
00:18:37.890              ]
00:18:37.890            }
00:18:37.890          },
00:18:37.890          {
00:18:37.890            "method": "nvmf_set_max_subsystems",
00:18:37.890            "params": {
00:18:37.890              "max_subsystems": 1024
00:18:37.890            }
00:18:37.890          },
00:18:37.890          {
00:18:37.890            "method": "nvmf_set_crdt",
00:18:37.890            "params": {
00:18:37.890              "crdt1": 0,
00:18:37.890              "crdt2": 0,
00:18:37.890              "crdt3": 0
00:18:37.890            }
00:18:37.890          },
00:18:37.890          {
00:18:37.890            "method": "nvmf_create_transport",
00:18:37.890            "params": {
00:18:37.890              "trtype": "TCP",
00:18:37.890              "max_queue_depth": 128,
00:18:37.891              "max_io_qpairs_per_ctrlr": 127,
00:18:37.891              "in_capsule_data_size": 4096,
00:18:37.891              "max_io_size": 131072,
00:18:37.891              "io_unit_size": 131072,
00:18:37.891              "max_aq_depth": 128,
00:18:37.891              "num_shared_buffers": 511,
00:18:37.891              "buf_cache_size": 4294967295,
00:18:37.891              "dif_insert_or_strip": false,
00:18:37.891              "zcopy": false,
00:18:37.891              "c2h_success": false,
00:18:37.891              "sock_priority": 0,
00:18:37.891              "abort_timeout_sec": 1,
00:18:37.891              "ack_timeout": 0,
00:18:37.891              "data_wr_pool_size": 0
00:18:37.891            }
00:18:37.891          },
00:18:37.891          {
00:18:37.891            "method": "nvmf_create_subsystem",
00:18:37.891            "params": {
00:18:37.891              "nqn": "nqn.2016-06.io.spdk:cnode1",
00:18:37.891              "allow_any_host": false,
00:18:37.891              "serial_number": "SPDK00000000000001",
00:18:37.891              "model_number": "SPDK bdev Controller",
00:18:37.891              "max_namespaces": 10,
00:18:37.891              "min_cntlid": 1,
00:18:37.891              "max_cntlid": 65519,
00:18:37.891              "ana_reporting": false
00:18:37.891            }
00:18:37.891          },
00:18:37.891          {
00:18:37.891            "method": "nvmf_subsystem_add_host",
00:18:37.891            "params": {
00:18:37.891              "nqn": "nqn.2016-06.io.spdk:cnode1",
00:18:37.891              "host": "nqn.2016-06.io.spdk:host1",
00:18:37.891              "psk": "key0"
00:18:37.891            }
00:18:37.891          },
00:18:37.891          {
00:18:37.891            "method": "nvmf_subsystem_add_ns",
00:18:37.891            "params": {
00:18:37.891              "nqn": "nqn.2016-06.io.spdk:cnode1",
00:18:37.891              "namespace": {
00:18:37.891                "nsid": 1,
00:18:37.891                "bdev_name": "malloc0",
00:18:37.891                "nguid": "8A7E72C03EBD4675A56EAC193C25A21B",
00:18:37.891                "uuid": "8a7e72c0-3ebd-4675-a56e-ac193c25a21b",
00:18:37.891                "no_auto_visible": false
00:18:37.891              }
00:18:37.891            }
00:18:37.891          },
00:18:37.891          {
00:18:37.891            "method": "nvmf_subsystem_add_listener",
00:18:37.891            "params": {
00:18:37.891              "nqn": "nqn.2016-06.io.spdk:cnode1",
00:18:37.891              "listen_address": {
00:18:37.891                "trtype": "TCP",
00:18:37.891                "adrfam": "IPv4",
00:18:37.891                "traddr": "10.0.0.2",
00:18:37.891                "trsvcid": "4420"
00:18:37.891              },
00:18:37.891              "secure_channel": true
00:18:37.891            }
00:18:37.891          }
00:18:37.891        ]
00:18:37.891      }
00:18:37.891    ]
00:18:37.891  }'
00:18:37.891   04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:37.891   04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=254576
00:18:37.891   04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62
00:18:37.891   04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 254576
00:18:37.891   04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 254576 ']'
00:18:37.891   04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:37.891   04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:37.891   04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:37.891  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:37.891   04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:37.891   04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:37.891  [2024-12-09 04:09:06.281727] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:18:37.891  [2024-12-09 04:09:06.281796] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:37.891  [2024-12-09 04:09:06.357091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:37.891  [2024-12-09 04:09:06.414463] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:37.891  [2024-12-09 04:09:06.414533] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:37.891  [2024-12-09 04:09:06.414547] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:18:37.891  [2024-12-09 04:09:06.414558] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:18:37.891  [2024-12-09 04:09:06.414568] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:18:37.891  [2024-12-09 04:09:06.415192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:18:38.148  [2024-12-09 04:09:06.654209] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:38.148  [2024-12-09 04:09:06.686232] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:18:38.148  [2024-12-09 04:09:06.686512] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:39.081   04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:39.081   04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:18:39.081   04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:18:39.081   04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable
00:18:39.081   04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:39.081   04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:39.081   04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=254888
00:18:39.081   04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 254888 /var/tmp/bdevperf.sock
00:18:39.081   04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 254888 ']'
00:18:39.081   04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:39.081   04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63
00:18:39.081   04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:39.081    04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{
00:18:39.081    "subsystems": [
00:18:39.081      {
00:18:39.081        "subsystem": "keyring",
00:18:39.081        "config": [
00:18:39.081          {
00:18:39.081            "method": "keyring_file_add_key",
00:18:39.081            "params": {
00:18:39.081              "name": "key0",
00:18:39.081              "path": "/tmp/tmp.fbnd8laNn2"
00:18:39.081            }
00:18:39.081          }
00:18:39.081        ]
00:18:39.081      },
00:18:39.081      {
00:18:39.081        "subsystem": "iobuf",
00:18:39.081        "config": [
00:18:39.081          {
00:18:39.081            "method": "iobuf_set_options",
00:18:39.081            "params": {
00:18:39.081              "small_pool_count": 8192,
00:18:39.081              "large_pool_count": 1024,
00:18:39.081              "small_bufsize": 8192,
00:18:39.081              "large_bufsize": 135168,
00:18:39.081              "enable_numa": false
00:18:39.082            }
00:18:39.082          }
00:18:39.082        ]
00:18:39.082      },
00:18:39.082      {
00:18:39.082        "subsystem": "sock",
00:18:39.082        "config": [
00:18:39.082          {
00:18:39.082            "method": "sock_set_default_impl",
00:18:39.082            "params": {
00:18:39.082              "impl_name": "posix"
00:18:39.082            }
00:18:39.082          },
00:18:39.082          {
00:18:39.082            "method": "sock_impl_set_options",
00:18:39.082            "params": {
00:18:39.082              "impl_name": "ssl",
00:18:39.082              "recv_buf_size": 4096,
00:18:39.082              "send_buf_size": 4096,
00:18:39.082              "enable_recv_pipe": true,
00:18:39.082              "enable_quickack": false,
00:18:39.082              "enable_placement_id": 0,
00:18:39.082              "enable_zerocopy_send_server": true,
00:18:39.082              "enable_zerocopy_send_client": false,
00:18:39.082              "zerocopy_threshold": 0,
00:18:39.082              "tls_version": 0,
00:18:39.082              "enable_ktls": false
00:18:39.082            }
00:18:39.082          },
00:18:39.082          {
00:18:39.082            "method": "sock_impl_set_options",
00:18:39.082            "params": {
00:18:39.082              "impl_name": "posix",
00:18:39.082              "recv_buf_size": 2097152,
00:18:39.082              "send_buf_size": 2097152,
00:18:39.082              "enable_recv_pipe": true,
00:18:39.082              "enable_quickack": false,
00:18:39.082              "enable_placement_id": 0,
00:18:39.082              "enable_zerocopy_send_server": true,
00:18:39.082              "enable_zerocopy_send_client": false,
00:18:39.082              "zerocopy_threshold": 0,
00:18:39.082              "tls_version": 0,
00:18:39.082              "enable_ktls": false
00:18:39.082            }
00:18:39.082          }
00:18:39.082        ]
00:18:39.082      },
00:18:39.082      {
00:18:39.082        "subsystem": "vmd",
00:18:39.082        "config": []
00:18:39.082      },
00:18:39.082      {
00:18:39.082        "subsystem": "accel",
00:18:39.082        "config": [
00:18:39.082          {
00:18:39.082            "method": "accel_set_options",
00:18:39.082            "params": {
00:18:39.082              "small_cache_size": 128,
00:18:39.082              "large_cache_size": 16,
00:18:39.082              "task_count": 2048,
00:18:39.082              "sequence_count": 2048,
00:18:39.082              "buf_count": 2048
00:18:39.082            }
00:18:39.082          }
00:18:39.082        ]
00:18:39.082      },
00:18:39.082      {
00:18:39.082        "subsystem": "bdev",
00:18:39.082        "config": [
00:18:39.082          {
00:18:39.082            "method": "bdev_set_options",
00:18:39.082            "params": {
00:18:39.082              "bdev_io_pool_size": 65535,
00:18:39.082              "bdev_io_cache_size": 256,
00:18:39.082              "bdev_auto_examine": true,
00:18:39.082              "iobuf_small_cache_size": 128,
00:18:39.082              "iobuf_large_cache_size": 16
00:18:39.082            }
00:18:39.082          },
00:18:39.082          {
00:18:39.082            "method": "bdev_raid_set_options",
00:18:39.082            "params": {
00:18:39.082              "process_window_size_kb": 1024,
00:18:39.082              "process_max_bandwidth_mb_sec": 0
00:18:39.082            }
00:18:39.082          },
00:18:39.082          {
00:18:39.082            "method": "bdev_iscsi_set_options",
00:18:39.082            "params": {
00:18:39.082              "timeout_sec": 30
00:18:39.082            }
00:18:39.082          },
00:18:39.082          {
00:18:39.082            "method": "bdev_nvme_set_options",
00:18:39.082            "params": {
00:18:39.082              "action_on_timeout": "none",
00:18:39.082              "timeout_us": 0,
00:18:39.082              "timeout_admin_us": 0,
00:18:39.082              "keep_alive_timeout_ms": 10000,
00:18:39.082              "arbitration_burst": 0,
00:18:39.082              "low_priority_weight": 0,
00:18:39.082              "medium_priority_weight": 0,
00:18:39.082              "high_priority_weight": 0,
00:18:39.082              "nvme_adminq_poll_period_us": 10000,
00:18:39.082              "nvme_ioq_poll_period_us": 0,
00:18:39.082              "io_queue_requests": 512,
00:18:39.082              "delay_cmd_submit": true,
00:18:39.082              "transport_retry_count": 4,
00:18:39.082              "bdev_retry_count": 3,
00:18:39.082              "transport_ack_timeout": 0,
00:18:39.082              "ctrlr_loss_timeout_sec": 0,
00:18:39.082              "reconnect_delay_sec": 0,
00:18:39.082              "fast_io_fail_timeout_sec": 0,
00:18:39.082              "disable_auto_failback": false,
00:18:39.082              "generate_uuids": false,
00:18:39.082              "transport_tos": 0,
00:18:39.082              "nvme_error_stat": false,
00:18:39.082              "rdma_srq_size": 0,
00:18:39.082              "io_path_stat": false,
00:18:39.082              "allow_accel_sequence": false,
00:18:39.082              "rdma_max_cq_size": 0,
00:18:39.082              "rdma_cm_event_timeout_ms": 0,
00:18:39.082              "dhchap_digests": [
00:18:39.082                "sha256",
00:18:39.082                "sha384",
00:18:39.082                "sha512"
00:18:39.082              ],
00:18:39.082              "dhchap_dhgroups": [
00:18:39.082                "null",
00:18:39.082                "ffdhe2048",
00:18:39.082                "ffdhe3072",
00:18:39.082                "ffdhe4096",
00:18:39.082                "ffdhe6144",
00:18:39.082                "ffdhe8192"
00:18:39.082              ]
00:18:39.082            }
00:18:39.082          },
00:18:39.082          {
00:18:39.082            "method": "bdev_nvme_attach_controller",
00:18:39.082            "params": {
00:18:39.082              "name": "TLSTEST",
00:18:39.082              "trtype": "TCP",
00:18:39.082              "adrfam": "IPv4",
00:18:39.082              "traddr": "10.0.0.2",
00:18:39.082              "trsvcid": "4420",
00:18:39.082              "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:18:39.082              "prchk_reftag": false,
00:18:39.082              "prchk_guard": false,
00:18:39.082              "ctrlr_loss_timeout_sec": 0,
00:18:39.082              "reconnect_delay_sec": 0,
00:18:39.082              "fast_io_fail_timeout_sec": 0,
00:18:39.082              "psk": "key0",
00:18:39.082              "hostnqn": "nqn.2016-06.io.spdk:host1",
00:18:39.082              "hdgst": false,
00:18:39.082              "ddgst": false,
00:18:39.082              "multipath": "multipath"
00:18:39.082            }
00:18:39.082          },
00:18:39.082          {
00:18:39.082            "method": "bdev_nvme_set_hotplug",
00:18:39.082            "params": {
00:18:39.082              "period_us": 100000,
00:18:39.082              "enable": false
00:18:39.082            }
00:18:39.082          },
00:18:39.082          {
00:18:39.082            "method": "bdev_wait_for_examine"
00:18:39.082          }
00:18:39.082        ]
00:18:39.082      },
00:18:39.082      {
00:18:39.082        "subsystem": "nbd",
00:18:39.082        "config": []
00:18:39.082      }
00:18:39.082    ]
00:18:39.082  }'
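The JSON blob echoed into bdevperf above is what wires TLS into the initiator: the keyring section registers the PSK file as "key0", and the bdev section's `bdev_nvme_attach_controller` call references it via `"psk"`. A minimal sketch of assembling just those two sections programmatically (the other subsystems above carry defaults; the key path is the temp file from this particular run and is illustrative only):

```python
import json

def tls_bdevperf_config(key_name: str, key_path: str,
                        traddr: str, subnqn: str) -> str:
    """Build the minimal bdevperf config for a TLS attach: register a
    PSK file in the keyring, then reference it from the NVMe attach."""
    cfg = {
        "subsystems": [
            {
                "subsystem": "keyring",
                "config": [
                    {"method": "keyring_file_add_key",
                     "params": {"name": key_name, "path": key_path}},
                ],
            },
            {
                "subsystem": "bdev",
                "config": [
                    {"method": "bdev_nvme_attach_controller",
                     "params": {"name": "TLSTEST", "trtype": "TCP",
                                "adrfam": "IPv4", "traddr": traddr,
                                "trsvcid": "4420", "subnqn": subnqn,
                                "psk": key_name}},
                ],
            },
        ],
    }
    return json.dumps(cfg, indent=2)

print(tls_bdevperf_config("key0", "/tmp/tmp.fbnd8laNn2", "10.0.0.2",
                          "nqn.2016-06.io.spdk:cnode1"))
```

In the run above this config is fed to bdevperf through process substitution (`-c /dev/fd/63`), so the key file never has to persist past the test.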
00:18:39.082   04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:39.082  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:39.082   04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:39.082   04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:39.082  [2024-12-09 04:09:07.365827] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:18:39.082  [2024-12-09 04:09:07.365920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid254888 ]
00:18:39.082  [2024-12-09 04:09:07.432919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:39.082  [2024-12-09 04:09:07.490389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:18:39.340  [2024-12-09 04:09:07.668034] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:18:39.340   04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:39.340   04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:18:39.340   04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:18:39.340  Running I/O for 10 seconds...
00:18:41.646       3219.00 IOPS,    12.57 MiB/s
[2024-12-09T03:09:11.154Z]      3312.50 IOPS,    12.94 MiB/s
[2024-12-09T03:09:12.085Z]      3271.67 IOPS,    12.78 MiB/s
[2024-12-09T03:09:13.017Z]      3310.00 IOPS,    12.93 MiB/s
[2024-12-09T03:09:13.949Z]      3347.00 IOPS,    13.07 MiB/s
[2024-12-09T03:09:15.320Z]      3354.83 IOPS,    13.10 MiB/s
[2024-12-09T03:09:16.251Z]      3365.43 IOPS,    13.15 MiB/s
[2024-12-09T03:09:17.182Z]      3372.88 IOPS,    13.18 MiB/s
[2024-12-09T03:09:18.115Z]      3375.56 IOPS,    13.19 MiB/s
[2024-12-09T03:09:18.115Z]      3373.70 IOPS,    13.18 MiB/s
00:18:49.539                                                                                                  Latency(us)
00:18:49.539  
[2024-12-09T03:09:18.115Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:18:49.539  Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:49.539  	 Verification LBA range: start 0x0 length 0x2000
00:18:49.539  	 TLSTESTn1           :      10.03    3375.70      13.19       0.00     0.00   37840.89    8932.31   51263.72
00:18:49.539  
[2024-12-09T03:09:18.115Z]  ===================================================================================================================
00:18:49.539  
[2024-12-09T03:09:18.115Z]  Total                       :               3375.70      13.19       0.00     0.00   37840.89    8932.31   51263.72
00:18:49.539  {
00:18:49.539    "results": [
00:18:49.539      {
00:18:49.539        "job": "TLSTESTn1",
00:18:49.539        "core_mask": "0x4",
00:18:49.539        "workload": "verify",
00:18:49.539        "status": "finished",
00:18:49.539        "verify_range": {
00:18:49.539          "start": 0,
00:18:49.539          "length": 8192
00:18:49.539        },
00:18:49.539        "queue_depth": 128,
00:18:49.539        "io_size": 4096,
00:18:49.539        "runtime": 10.03169,
00:18:49.539        "iops": 3375.7023990972607,
00:18:49.539        "mibps": 13.186337496473675,
00:18:49.539        "io_failed": 0,
00:18:49.539        "io_timeout": 0,
00:18:49.539        "avg_latency_us": 37840.89085719785,
00:18:49.539        "min_latency_us": 8932.314074074075,
00:18:49.539        "max_latency_us": 51263.71555555556
00:18:49.539      }
00:18:49.539    ],
00:18:49.539    "core_count": 1
00:18:49.539  }
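The bdevperf summary in the "results" object above is internally consistent: MiB/s is IOPS times the 4096-byte IO size, and IOPS is the completed-IO count divided by the runtime. A quick arithmetic check against the reported values:

```python
# Values copied from the bdevperf "results" object above.
runtime_s = 10.03169
iops = 3375.7023990972607
mibps = 13.186337496473675
io_size = 4096  # bytes, from "io_size"

# MiB/s = IOPS * io_size / 2^20 (4096/2^20 is exactly 1/256).
assert abs(iops * io_size / 2**20 - mibps) < 1e-9

# Completed IOs = IOPS * runtime; should land on a whole number.
total_ios = iops * runtime_s
print(round(total_ios))
```

The same relation holds for the per-second samples printed during the run (e.g. 3219.00 IOPS at 12.57 MiB/s).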
00:18:49.539   04:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:18:49.539   04:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 254888
00:18:49.539   04:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 254888 ']'
00:18:49.539   04:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 254888
00:18:49.539    04:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:18:49.539   04:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:49.539    04:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 254888
00:18:49.539   04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:18:49.539   04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:18:49.539   04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 254888'
00:18:49.539  killing process with pid 254888
00:18:49.539   04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 254888
00:18:49.539  Received shutdown signal, test time was about 10.000000 seconds
00:18:49.539  
00:18:49.539                                                                                                  Latency(us)
00:18:49.539  
[2024-12-09T03:09:18.115Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:18:49.539  
[2024-12-09T03:09:18.115Z]  ===================================================================================================================
00:18:49.539  
[2024-12-09T03:09:18.115Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:18:49.539   04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 254888
00:18:49.796   04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 254576
00:18:49.796   04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 254576 ']'
00:18:49.796   04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 254576
00:18:49.796    04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:18:49.796   04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:49.796    04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 254576
00:18:49.796   04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:18:49.796   04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:18:49.796   04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 254576'
00:18:49.796  killing process with pid 254576
00:18:49.796   04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 254576
00:18:49.796   04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 254576
00:18:50.054   04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart
00:18:50.054   04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:18:50.054   04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:18:50.055   04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:50.055   04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=256559
00:18:50.055   04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:18:50.055   04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 256559
00:18:50.055   04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 256559 ']'
00:18:50.055   04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:50.055   04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:50.055   04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:50.055  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:50.055   04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:50.055   04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:50.055  [2024-12-09 04:09:18.579459] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:18:50.055  [2024-12-09 04:09:18.579557] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:50.313  [2024-12-09 04:09:18.652755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:50.313  [2024-12-09 04:09:18.706297] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:50.313  [2024-12-09 04:09:18.706364] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:50.313  [2024-12-09 04:09:18.706387] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:18:50.313  [2024-12-09 04:09:18.706398] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:18:50.313  [2024-12-09 04:09:18.706407] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:18:50.313  [2024-12-09 04:09:18.706948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:50.313   04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:50.313   04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:18:50.313   04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:18:50.313   04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable
00:18:50.313   04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:50.313   04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:50.313   04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.fbnd8laNn2
00:18:50.313   04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.fbnd8laNn2
00:18:50.313   04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:18:50.570  [2024-12-09 04:09:19.080516] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:50.570   04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:18:50.827   04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:18:51.084  [2024-12-09 04:09:19.629993] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:18:51.084  [2024-12-09 04:09:19.630266] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:51.084   04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:18:51.341  malloc0
00:18:51.598   04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:18:51.856   04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.fbnd8laNn2
00:18:52.113   04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
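The key file `/tmp/tmp.fbnd8laNn2` registered above (and handed to `nvmf_subsystem_add_host` via `--psk key0`) holds the pre-shared key in the NVMe/TCP TLS PSK interchange format. As a hedged illustration only — the authoritative layout is defined by the NVMe TCP transport specification, and this sketch assumes the common `NVMeTLSkey-1:01:<base64>:` form produced by nvme-cli's `gen-tls-key`, where the base64 payload is the raw key followed by its little-endian CRC-32:

```python
import base64
import binascii
import secrets

def make_tls_psk_interchange(key: bytes, hmac_id: int = 1) -> str:
    """Assumed layout: 'NVMeTLSkey-1:<hh>:<b64(key || crc32)>:' with the
    CRC-32 computed over the key bytes and appended little-endian."""
    crc = binascii.crc32(key).to_bytes(4, "little")
    b64 = base64.b64encode(key + crc).decode("ascii")
    return f"NVMeTLSkey-1:{hmac_id:02x}:{b64}:"

def parse_tls_psk_interchange(text: str) -> bytes:
    """Inverse of the above: strip the framing, check the CRC, return the key."""
    prefix, _hmac_id, b64, _trailer = text.split(":")
    assert prefix == "NVMeTLSkey-1"
    raw = base64.b64decode(b64)
    key, crc = raw[:-4], raw[-4:]
    assert binascii.crc32(key).to_bytes(4, "little") == crc, "CRC mismatch"
    return key

key = secrets.token_bytes(32)  # 32-byte PSK, matching hmac id 01 here
encoded = make_tls_psk_interchange(key)
assert parse_tls_psk_interchange(encoded) == key
```

Both ends of the test register the same file — the target via the `keyring_file_add_key` RPC here, the bdevperf initiator via its echoed config — which is what lets the experimental TLS handshake complete.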
00:18:52.371   04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=256850
00:18:52.371   04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1
00:18:52.371   04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:18:52.371   04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 256850 /var/tmp/bdevperf.sock
00:18:52.371   04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 256850 ']'
00:18:52.371   04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:52.371   04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:52.371   04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:52.371  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:52.371   04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:52.371   04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:52.371  [2024-12-09 04:09:20.806471] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:18:52.371  [2024-12-09 04:09:20.806557] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid256850 ]
00:18:52.371  [2024-12-09 04:09:20.876296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:52.371  [2024-12-09 04:09:20.935457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:18:52.628   04:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:52.629   04:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:18:52.629   04:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fbnd8laNn2
00:18:52.885   04:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
00:18:53.142  [2024-12-09 04:09:21.575936] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:18:53.142  nvme0n1
00:18:53.143   04:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:18:53.400  Running I/O for 1 seconds...
00:18:54.331       3230.00 IOPS,    12.62 MiB/s
00:18:54.331                                                                                                  Latency(us)
00:18:54.331  
[2024-12-09T03:09:22.907Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:18:54.331  Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:54.331  	 Verification LBA range: start 0x0 length 0x2000
00:18:54.331  	 nvme0n1             :       1.03    3269.99      12.77       0.00     0.00   38714.47    6310.87   44855.75
00:18:54.331  
[2024-12-09T03:09:22.907Z]  ===================================================================================================================
00:18:54.331  
[2024-12-09T03:09:22.907Z]  Total                       :               3269.99      12.77       0.00     0.00   38714.47    6310.87   44855.75
00:18:54.331  {
00:18:54.332    "results": [
00:18:54.332      {
00:18:54.332        "job": "nvme0n1",
00:18:54.332        "core_mask": "0x2",
00:18:54.332        "workload": "verify",
00:18:54.332        "status": "finished",
00:18:54.332        "verify_range": {
00:18:54.332          "start": 0,
00:18:54.332          "length": 8192
00:18:54.332        },
00:18:54.332        "queue_depth": 128,
00:18:54.332        "io_size": 4096,
00:18:54.332        "runtime": 1.026916,
00:18:54.332        "iops": 3269.9850815451314,
00:18:54.332        "mibps": 12.77337922478567,
00:18:54.332        "io_failed": 0,
00:18:54.332        "io_timeout": 0,
00:18:54.332        "avg_latency_us": 38714.47436878212,
00:18:54.332        "min_latency_us": 6310.874074074074,
00:18:54.332        "max_latency_us": 44855.75111111111
00:18:54.332      }
00:18:54.332    ],
00:18:54.332    "core_count": 1
00:18:54.332  }
00:18:54.332   04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 256850
00:18:54.332   04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 256850 ']'
00:18:54.332   04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 256850
00:18:54.332    04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:18:54.332   04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:54.332    04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 256850
00:18:54.332   04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:18:54.332   04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:18:54.332   04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 256850'
00:18:54.332  killing process with pid 256850
00:18:54.332   04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 256850
00:18:54.332  Received shutdown signal, test time was about 1.000000 seconds
00:18:54.332                                                                                                  Latency(us)
[2024-12-09T03:09:22.908Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-12-09T03:09:22.908Z]  ===================================================================================================================
[2024-12-09T03:09:22.908Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:18:54.332   04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 256850
00:18:54.589   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 256559
00:18:54.589   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 256559 ']'
00:18:54.589   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 256559
00:18:54.589    04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:18:54.589   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:54.589    04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 256559
00:18:54.589   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:54.589   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:54.589   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 256559'
00:18:54.589  killing process with pid 256559
00:18:54.589   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 256559
00:18:54.589   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 256559
00:18:54.848   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart
00:18:54.848   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:18:54.848   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:18:54.848   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:54.848   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=257126
00:18:54.848   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:18:54.848   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 257126
00:18:54.848   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 257126 ']'
00:18:54.848   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:54.848   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:54.848   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:54.848  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:54.848   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:54.848   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:55.107  [2024-12-09 04:09:23.428958] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:18:55.107  [2024-12-09 04:09:23.429055] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:55.107  [2024-12-09 04:09:23.498332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:55.107  [2024-12-09 04:09:23.551068] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:55.107  [2024-12-09 04:09:23.551117] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:55.107  [2024-12-09 04:09:23.551141] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:18:55.107  [2024-12-09 04:09:23.551151] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:18:55.107  [2024-12-09 04:09:23.551161] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:18:55.107  [2024-12-09 04:09:23.551744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:55.107   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:55.107   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:18:55.107   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:18:55.107   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable
00:18:55.107   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:55.364   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:55.364   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd
00:18:55.364   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:55.364   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:55.364  [2024-12-09 04:09:23.686465] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:55.364  malloc0
00:18:55.364  [2024-12-09 04:09:23.717513] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:18:55.364  [2024-12-09 04:09:23.717806] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:55.364   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:55.364   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=257241
00:18:55.364   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1
00:18:55.364   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 257241 /var/tmp/bdevperf.sock
00:18:55.364   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 257241 ']'
00:18:55.364   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:55.364   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:55.364   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:55.364  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:55.364   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:55.364   04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:55.364  [2024-12-09 04:09:23.791461] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:18:55.364  [2024-12-09 04:09:23.791546] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid257241 ]
00:18:55.364  [2024-12-09 04:09:23.860069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:55.364  [2024-12-09 04:09:23.916766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:18:55.621   04:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:55.621   04:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:18:55.621   04:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fbnd8laNn2
00:18:55.878   04:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
00:18:56.135  [2024-12-09 04:09:24.529000] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:18:56.135  nvme0n1
00:18:56.135   04:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:18:56.401  Running I/O for 1 seconds...
00:18:57.339       3535.00 IOPS,    13.81 MiB/s
00:18:57.339                                                                                                  Latency(us)
[2024-12-09T03:09:25.915Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:18:57.339  Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:57.339  	 Verification LBA range: start 0x0 length 0x2000
00:18:57.339  	 nvme0n1             :       1.02    3595.09      14.04       0.00     0.00   35263.62    6019.60   43496.49
[2024-12-09T03:09:25.915Z]  ===================================================================================================================
[2024-12-09T03:09:25.915Z]  Total                       :               3595.09      14.04       0.00     0.00   35263.62    6019.60   43496.49
00:18:57.339  {
00:18:57.339    "results": [
00:18:57.339      {
00:18:57.339        "job": "nvme0n1",
00:18:57.339        "core_mask": "0x2",
00:18:57.339        "workload": "verify",
00:18:57.339        "status": "finished",
00:18:57.339        "verify_range": {
00:18:57.339          "start": 0,
00:18:57.339          "length": 8192
00:18:57.339        },
00:18:57.339        "queue_depth": 128,
00:18:57.339        "io_size": 4096,
00:18:57.339        "runtime": 1.018891,
00:18:57.339        "iops": 3595.0852446434405,
00:18:57.339        "mibps": 14.04330173688844,
00:18:57.339        "io_failed": 0,
00:18:57.339        "io_timeout": 0,
00:18:57.339        "avg_latency_us": 35263.62076662521,
00:18:57.339        "min_latency_us": 6019.602962962963,
00:18:57.339        "max_latency_us": 43496.485925925925
00:18:57.339      }
00:18:57.339    ],
00:18:57.339    "core_count": 1
00:18:57.340  }
00:18:57.340    04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config
00:18:57.340    04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:57.340    04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:57.340    04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:57.340   04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{
00:18:57.340  "subsystems": [
00:18:57.340  {
00:18:57.340  "subsystem": "keyring",
00:18:57.340  "config": [
00:18:57.340  {
00:18:57.340  "method": "keyring_file_add_key",
00:18:57.340  "params": {
00:18:57.340  "name": "key0",
00:18:57.340  "path": "/tmp/tmp.fbnd8laNn2"
00:18:57.340  }
00:18:57.340  }
00:18:57.340  ]
00:18:57.340  },
00:18:57.340  {
00:18:57.340  "subsystem": "iobuf",
00:18:57.340  "config": [
00:18:57.340  {
00:18:57.340  "method": "iobuf_set_options",
00:18:57.340  "params": {
00:18:57.340  "small_pool_count": 8192,
00:18:57.340  "large_pool_count": 1024,
00:18:57.340  "small_bufsize": 8192,
00:18:57.340  "large_bufsize": 135168,
00:18:57.340  "enable_numa": false
00:18:57.340  }
00:18:57.340  }
00:18:57.340  ]
00:18:57.340  },
00:18:57.340  {
00:18:57.340  "subsystem": "sock",
00:18:57.340  "config": [
00:18:57.340  {
00:18:57.340  "method": "sock_set_default_impl",
00:18:57.340  "params": {
00:18:57.340  "impl_name": "posix"
00:18:57.340  }
00:18:57.340  },
00:18:57.340  {
00:18:57.340  "method": "sock_impl_set_options",
00:18:57.340  "params": {
00:18:57.340  "impl_name": "ssl",
00:18:57.340  "recv_buf_size": 4096,
00:18:57.340  "send_buf_size": 4096,
00:18:57.340  "enable_recv_pipe": true,
00:18:57.340  "enable_quickack": false,
00:18:57.340  "enable_placement_id": 0,
00:18:57.340  "enable_zerocopy_send_server": true,
00:18:57.340  "enable_zerocopy_send_client": false,
00:18:57.340  "zerocopy_threshold": 0,
00:18:57.340  "tls_version": 0,
00:18:57.340  "enable_ktls": false
00:18:57.340  }
00:18:57.340  },
00:18:57.340  {
00:18:57.340  "method": "sock_impl_set_options",
00:18:57.340  "params": {
00:18:57.340  "impl_name": "posix",
00:18:57.340  "recv_buf_size": 2097152,
00:18:57.340  "send_buf_size": 2097152,
00:18:57.340  "enable_recv_pipe": true,
00:18:57.340  "enable_quickack": false,
00:18:57.340  "enable_placement_id": 0,
00:18:57.340  "enable_zerocopy_send_server": true,
00:18:57.340  "enable_zerocopy_send_client": false,
00:18:57.340  "zerocopy_threshold": 0,
00:18:57.340  "tls_version": 0,
00:18:57.340  "enable_ktls": false
00:18:57.340  }
00:18:57.340  }
00:18:57.340  ]
00:18:57.340  },
00:18:57.340  {
00:18:57.340  "subsystem": "vmd",
00:18:57.340  "config": []
00:18:57.340  },
00:18:57.340  {
00:18:57.340  "subsystem": "accel",
00:18:57.340  "config": [
00:18:57.340  {
00:18:57.340  "method": "accel_set_options",
00:18:57.340  "params": {
00:18:57.340  "small_cache_size": 128,
00:18:57.340  "large_cache_size": 16,
00:18:57.340  "task_count": 2048,
00:18:57.340  "sequence_count": 2048,
00:18:57.340  "buf_count": 2048
00:18:57.340  }
00:18:57.340  }
00:18:57.340  ]
00:18:57.340  },
00:18:57.340  {
00:18:57.340  "subsystem": "bdev",
00:18:57.340  "config": [
00:18:57.340  {
00:18:57.340  "method": "bdev_set_options",
00:18:57.340  "params": {
00:18:57.340  "bdev_io_pool_size": 65535,
00:18:57.340  "bdev_io_cache_size": 256,
00:18:57.340  "bdev_auto_examine": true,
00:18:57.340  "iobuf_small_cache_size": 128,
00:18:57.340  "iobuf_large_cache_size": 16
00:18:57.340  }
00:18:57.340  },
00:18:57.340  {
00:18:57.340  "method": "bdev_raid_set_options",
00:18:57.340  "params": {
00:18:57.340  "process_window_size_kb": 1024,
00:18:57.340  "process_max_bandwidth_mb_sec": 0
00:18:57.340  }
00:18:57.340  },
00:18:57.340  {
00:18:57.340  "method": "bdev_iscsi_set_options",
00:18:57.340  "params": {
00:18:57.340  "timeout_sec": 30
00:18:57.340  }
00:18:57.340  },
00:18:57.340  {
00:18:57.340  "method": "bdev_nvme_set_options",
00:18:57.340  "params": {
00:18:57.340  "action_on_timeout": "none",
00:18:57.340  "timeout_us": 0,
00:18:57.340  "timeout_admin_us": 0,
00:18:57.340  "keep_alive_timeout_ms": 10000,
00:18:57.340  "arbitration_burst": 0,
00:18:57.340  "low_priority_weight": 0,
00:18:57.340  "medium_priority_weight": 0,
00:18:57.340  "high_priority_weight": 0,
00:18:57.340  "nvme_adminq_poll_period_us": 10000,
00:18:57.340  "nvme_ioq_poll_period_us": 0,
00:18:57.340  "io_queue_requests": 0,
00:18:57.340  "delay_cmd_submit": true,
00:18:57.340  "transport_retry_count": 4,
00:18:57.340  "bdev_retry_count": 3,
00:18:57.340  "transport_ack_timeout": 0,
00:18:57.340  "ctrlr_loss_timeout_sec": 0,
00:18:57.340  "reconnect_delay_sec": 0,
00:18:57.340  "fast_io_fail_timeout_sec": 0,
00:18:57.340  "disable_auto_failback": false,
00:18:57.340  "generate_uuids": false,
00:18:57.340  "transport_tos": 0,
00:18:57.340  "nvme_error_stat": false,
00:18:57.340  "rdma_srq_size": 0,
00:18:57.340  "io_path_stat": false,
00:18:57.340  "allow_accel_sequence": false,
00:18:57.340  "rdma_max_cq_size": 0,
00:18:57.340  "rdma_cm_event_timeout_ms": 0,
00:18:57.340  "dhchap_digests": [
00:18:57.340  "sha256",
00:18:57.340  "sha384",
00:18:57.340  "sha512"
00:18:57.340  ],
00:18:57.340  "dhchap_dhgroups": [
00:18:57.340  "null",
00:18:57.340  "ffdhe2048",
00:18:57.340  "ffdhe3072",
00:18:57.340  "ffdhe4096",
00:18:57.340  "ffdhe6144",
00:18:57.340  "ffdhe8192"
00:18:57.340  ]
00:18:57.340  }
00:18:57.340  },
00:18:57.340  {
00:18:57.340  "method": "bdev_nvme_set_hotplug",
00:18:57.340  "params": {
00:18:57.340  "period_us": 100000,
00:18:57.340  "enable": false
00:18:57.340  }
00:18:57.340  },
00:18:57.340  {
00:18:57.340  "method": "bdev_malloc_create",
00:18:57.340  "params": {
00:18:57.340  "name": "malloc0",
00:18:57.340  "num_blocks": 8192,
00:18:57.340  "block_size": 4096,
00:18:57.340  "physical_block_size": 4096,
00:18:57.340  "uuid": "228cf112-b03a-4aa4-ba00-71fd6183e66e",
00:18:57.340  "optimal_io_boundary": 0,
00:18:57.340  "md_size": 0,
00:18:57.340  "dif_type": 0,
00:18:57.340  "dif_is_head_of_md": false,
00:18:57.340  "dif_pi_format": 0
00:18:57.340  }
00:18:57.340  },
00:18:57.340  {
00:18:57.340  "method": "bdev_wait_for_examine"
00:18:57.340  }
00:18:57.340  ]
00:18:57.340  },
00:18:57.340  {
00:18:57.340  "subsystem": "nbd",
00:18:57.340  "config": []
00:18:57.340  },
00:18:57.340  {
00:18:57.340  "subsystem": "scheduler",
00:18:57.340  "config": [
00:18:57.340  {
00:18:57.340  "method": "framework_set_scheduler",
00:18:57.340  "params": {
00:18:57.340  "name": "static"
00:18:57.340  }
00:18:57.340  }
00:18:57.340  ]
00:18:57.340  },
00:18:57.340  {
00:18:57.340  "subsystem": "nvmf",
00:18:57.340  "config": [
00:18:57.340  {
00:18:57.340  "method": "nvmf_set_config",
00:18:57.340  "params": {
00:18:57.340  "discovery_filter": "match_any",
00:18:57.340  "admin_cmd_passthru": {
00:18:57.340  "identify_ctrlr": false
00:18:57.340  },
00:18:57.340  "dhchap_digests": [
00:18:57.340  "sha256",
00:18:57.340  "sha384",
00:18:57.340  "sha512"
00:18:57.340  ],
00:18:57.340  "dhchap_dhgroups": [
00:18:57.340  "null",
00:18:57.340  "ffdhe2048",
00:18:57.340  "ffdhe3072",
00:18:57.340  "ffdhe4096",
00:18:57.340  "ffdhe6144",
00:18:57.340  "ffdhe8192"
00:18:57.340  ]
00:18:57.340  }
00:18:57.340  },
00:18:57.340  {
00:18:57.340  "method": "nvmf_set_max_subsystems",
00:18:57.340  "params": {
00:18:57.340  "max_subsystems": 1024
00:18:57.340  }
00:18:57.340  },
00:18:57.340  {
00:18:57.340  "method": "nvmf_set_crdt",
00:18:57.340  "params": {
00:18:57.340  "crdt1": 0,
00:18:57.340  "crdt2": 0,
00:18:57.340  "crdt3": 0
00:18:57.340  }
00:18:57.340  },
00:18:57.340  {
00:18:57.340  "method": "nvmf_create_transport",
00:18:57.340  "params": {
00:18:57.340  "trtype": "TCP",
00:18:57.340  "max_queue_depth": 128,
00:18:57.340  "max_io_qpairs_per_ctrlr": 127,
00:18:57.340  "in_capsule_data_size": 4096,
00:18:57.340  "max_io_size": 131072,
00:18:57.340  "io_unit_size": 131072,
00:18:57.340  "max_aq_depth": 128,
00:18:57.340  "num_shared_buffers": 511,
00:18:57.340  "buf_cache_size": 4294967295,
00:18:57.340  "dif_insert_or_strip": false,
00:18:57.340  "zcopy": false,
00:18:57.341  "c2h_success": false,
00:18:57.341  "sock_priority": 0,
00:18:57.341  "abort_timeout_sec": 1,
00:18:57.341  "ack_timeout": 0,
00:18:57.341  "data_wr_pool_size": 0
00:18:57.341  }
00:18:57.341  },
00:18:57.341  {
00:18:57.341  "method": "nvmf_create_subsystem",
00:18:57.341  "params": {
00:18:57.341  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:18:57.341  "allow_any_host": false,
00:18:57.341  "serial_number": "00000000000000000000",
00:18:57.341  "model_number": "SPDK bdev Controller",
00:18:57.341  "max_namespaces": 32,
00:18:57.341  "min_cntlid": 1,
00:18:57.341  "max_cntlid": 65519,
00:18:57.341  "ana_reporting": false
00:18:57.341  }
00:18:57.341  },
00:18:57.341  {
00:18:57.341  "method": "nvmf_subsystem_add_host",
00:18:57.341  "params": {
00:18:57.341  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:18:57.341  "host": "nqn.2016-06.io.spdk:host1",
00:18:57.341  "psk": "key0"
00:18:57.341  }
00:18:57.341  },
00:18:57.341  {
00:18:57.341  "method": "nvmf_subsystem_add_ns",
00:18:57.341  "params": {
00:18:57.341  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:18:57.341  "namespace": {
00:18:57.341  "nsid": 1,
00:18:57.341  "bdev_name": "malloc0",
00:18:57.341  "nguid": "228CF112B03A4AA4BA0071FD6183E66E",
00:18:57.341  "uuid": "228cf112-b03a-4aa4-ba00-71fd6183e66e",
00:18:57.341  "no_auto_visible": false
00:18:57.341  }
00:18:57.341  }
00:18:57.341  },
00:18:57.341  {
00:18:57.341  "method": "nvmf_subsystem_add_listener",
00:18:57.341  "params": {
00:18:57.341  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:18:57.341  "listen_address": {
00:18:57.341  "trtype": "TCP",
00:18:57.341  "adrfam": "IPv4",
00:18:57.341  "traddr": "10.0.0.2",
00:18:57.341  "trsvcid": "4420"
00:18:57.341  },
00:18:57.341  "secure_channel": false,
00:18:57.341  "sock_impl": "ssl"
00:18:57.341  }
00:18:57.341  }
00:18:57.341  ]
00:18:57.341  }
00:18:57.341  ]
00:18:57.341  }'
00:18:57.341    04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config
00:18:57.906   04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{
00:18:57.906    "subsystems": [
00:18:57.906      {
00:18:57.906        "subsystem": "keyring",
00:18:57.906        "config": [
00:18:57.906          {
00:18:57.906            "method": "keyring_file_add_key",
00:18:57.906            "params": {
00:18:57.906              "name": "key0",
00:18:57.906              "path": "/tmp/tmp.fbnd8laNn2"
00:18:57.906            }
00:18:57.906          }
00:18:57.906        ]
00:18:57.906      },
00:18:57.906      {
00:18:57.906        "subsystem": "iobuf",
00:18:57.906        "config": [
00:18:57.906          {
00:18:57.906            "method": "iobuf_set_options",
00:18:57.906            "params": {
00:18:57.906              "small_pool_count": 8192,
00:18:57.906              "large_pool_count": 1024,
00:18:57.906              "small_bufsize": 8192,
00:18:57.906              "large_bufsize": 135168,
00:18:57.906              "enable_numa": false
00:18:57.906            }
00:18:57.906          }
00:18:57.906        ]
00:18:57.906      },
00:18:57.906      {
00:18:57.906        "subsystem": "sock",
00:18:57.906        "config": [
00:18:57.906          {
00:18:57.906            "method": "sock_set_default_impl",
00:18:57.906            "params": {
00:18:57.906              "impl_name": "posix"
00:18:57.906            }
00:18:57.906          },
00:18:57.906          {
00:18:57.906            "method": "sock_impl_set_options",
00:18:57.906            "params": {
00:18:57.906              "impl_name": "ssl",
00:18:57.906              "recv_buf_size": 4096,
00:18:57.906              "send_buf_size": 4096,
00:18:57.906              "enable_recv_pipe": true,
00:18:57.906              "enable_quickack": false,
00:18:57.906              "enable_placement_id": 0,
00:18:57.906              "enable_zerocopy_send_server": true,
00:18:57.906              "enable_zerocopy_send_client": false,
00:18:57.906              "zerocopy_threshold": 0,
00:18:57.906              "tls_version": 0,
00:18:57.906              "enable_ktls": false
00:18:57.906            }
00:18:57.906          },
00:18:57.906          {
00:18:57.906            "method": "sock_impl_set_options",
00:18:57.906            "params": {
00:18:57.906              "impl_name": "posix",
00:18:57.906              "recv_buf_size": 2097152,
00:18:57.906              "send_buf_size": 2097152,
00:18:57.906              "enable_recv_pipe": true,
00:18:57.906              "enable_quickack": false,
00:18:57.906              "enable_placement_id": 0,
00:18:57.906              "enable_zerocopy_send_server": true,
00:18:57.906              "enable_zerocopy_send_client": false,
00:18:57.906              "zerocopy_threshold": 0,
00:18:57.906              "tls_version": 0,
00:18:57.906              "enable_ktls": false
00:18:57.906            }
00:18:57.906          }
00:18:57.906        ]
00:18:57.906      },
00:18:57.906      {
00:18:57.906        "subsystem": "vmd",
00:18:57.906        "config": []
00:18:57.906      },
00:18:57.906      {
00:18:57.906        "subsystem": "accel",
00:18:57.906        "config": [
00:18:57.906          {
00:18:57.906            "method": "accel_set_options",
00:18:57.906            "params": {
00:18:57.906              "small_cache_size": 128,
00:18:57.906              "large_cache_size": 16,
00:18:57.906              "task_count": 2048,
00:18:57.906              "sequence_count": 2048,
00:18:57.906              "buf_count": 2048
00:18:57.906            }
00:18:57.906          }
00:18:57.906        ]
00:18:57.906      },
00:18:57.906      {
00:18:57.906        "subsystem": "bdev",
00:18:57.906        "config": [
00:18:57.906          {
00:18:57.906            "method": "bdev_set_options",
00:18:57.906            "params": {
00:18:57.906              "bdev_io_pool_size": 65535,
00:18:57.906              "bdev_io_cache_size": 256,
00:18:57.906              "bdev_auto_examine": true,
00:18:57.906              "iobuf_small_cache_size": 128,
00:18:57.906              "iobuf_large_cache_size": 16
00:18:57.906            }
00:18:57.906          },
00:18:57.906          {
00:18:57.906            "method": "bdev_raid_set_options",
00:18:57.906            "params": {
00:18:57.906              "process_window_size_kb": 1024,
00:18:57.906              "process_max_bandwidth_mb_sec": 0
00:18:57.906            }
00:18:57.906          },
00:18:57.906          {
00:18:57.906            "method": "bdev_iscsi_set_options",
00:18:57.906            "params": {
00:18:57.906              "timeout_sec": 30
00:18:57.906            }
00:18:57.906          },
00:18:57.906          {
00:18:57.906            "method": "bdev_nvme_set_options",
00:18:57.906            "params": {
00:18:57.906              "action_on_timeout": "none",
00:18:57.906              "timeout_us": 0,
00:18:57.906              "timeout_admin_us": 0,
00:18:57.906              "keep_alive_timeout_ms": 10000,
00:18:57.906              "arbitration_burst": 0,
00:18:57.906              "low_priority_weight": 0,
00:18:57.906              "medium_priority_weight": 0,
00:18:57.906              "high_priority_weight": 0,
00:18:57.906              "nvme_adminq_poll_period_us": 10000,
00:18:57.906              "nvme_ioq_poll_period_us": 0,
00:18:57.906              "io_queue_requests": 512,
00:18:57.906              "delay_cmd_submit": true,
00:18:57.906              "transport_retry_count": 4,
00:18:57.906              "bdev_retry_count": 3,
00:18:57.906              "transport_ack_timeout": 0,
00:18:57.906              "ctrlr_loss_timeout_sec": 0,
00:18:57.906              "reconnect_delay_sec": 0,
00:18:57.906              "fast_io_fail_timeout_sec": 0,
00:18:57.906              "disable_auto_failback": false,
00:18:57.906              "generate_uuids": false,
00:18:57.906              "transport_tos": 0,
00:18:57.906              "nvme_error_stat": false,
00:18:57.906              "rdma_srq_size": 0,
00:18:57.906              "io_path_stat": false,
00:18:57.906              "allow_accel_sequence": false,
00:18:57.906              "rdma_max_cq_size": 0,
00:18:57.906              "rdma_cm_event_timeout_ms": 0,
00:18:57.906              "dhchap_digests": [
00:18:57.906                "sha256",
00:18:57.906                "sha384",
00:18:57.906                "sha512"
00:18:57.906              ],
00:18:57.906              "dhchap_dhgroups": [
00:18:57.906                "null",
00:18:57.906                "ffdhe2048",
00:18:57.906                "ffdhe3072",
00:18:57.906                "ffdhe4096",
00:18:57.906                "ffdhe6144",
00:18:57.906                "ffdhe8192"
00:18:57.906              ]
00:18:57.906            }
00:18:57.906          },
00:18:57.906          {
00:18:57.906            "method": "bdev_nvme_attach_controller",
00:18:57.906            "params": {
00:18:57.906              "name": "nvme0",
00:18:57.906              "trtype": "TCP",
00:18:57.906              "adrfam": "IPv4",
00:18:57.906              "traddr": "10.0.0.2",
00:18:57.906              "trsvcid": "4420",
00:18:57.906              "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:18:57.906              "prchk_reftag": false,
00:18:57.906              "prchk_guard": false,
00:18:57.906              "ctrlr_loss_timeout_sec": 0,
00:18:57.906              "reconnect_delay_sec": 0,
00:18:57.906              "fast_io_fail_timeout_sec": 0,
00:18:57.906              "psk": "key0",
00:18:57.906              "hostnqn": "nqn.2016-06.io.spdk:host1",
00:18:57.906              "hdgst": false,
00:18:57.906              "ddgst": false,
00:18:57.906              "multipath": "multipath"
00:18:57.906            }
00:18:57.906          },
00:18:57.906          {
00:18:57.906            "method": "bdev_nvme_set_hotplug",
00:18:57.906            "params": {
00:18:57.906              "period_us": 100000,
00:18:57.906              "enable": false
00:18:57.906            }
00:18:57.906          },
00:18:57.906          {
00:18:57.906            "method": "bdev_enable_histogram",
00:18:57.906            "params": {
00:18:57.906              "name": "nvme0n1",
00:18:57.906              "enable": true
00:18:57.906            }
00:18:57.906          },
00:18:57.906          {
00:18:57.906            "method": "bdev_wait_for_examine"
00:18:57.906          }
00:18:57.906        ]
00:18:57.906      },
00:18:57.906      {
00:18:57.906        "subsystem": "nbd",
00:18:57.906        "config": []
00:18:57.906      }
00:18:57.906    ]
00:18:57.906  }'
00:18:57.906   04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 257241
00:18:57.906   04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 257241 ']'
00:18:57.906   04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 257241
00:18:57.906    04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:18:57.906   04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:57.906    04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 257241
00:18:57.906   04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:18:57.906   04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:18:57.907   04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 257241'
00:18:57.907  killing process with pid 257241
00:18:57.907   04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 257241
00:18:57.907  Received shutdown signal, test time was about 1.000000 seconds
00:18:57.907                                                                                                  Latency(us)
[2024-12-09T03:09:26.483Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-12-09T03:09:26.483Z]  ===================================================================================================================
[2024-12-09T03:09:26.483Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:18:57.907   04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 257241
00:18:58.164   04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 257126
00:18:58.164   04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 257126 ']'
00:18:58.164   04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 257126
00:18:58.164    04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:18:58.164   04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:58.164    04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 257126
00:18:58.164   04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:58.164   04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:58.164   04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 257126'
00:18:58.164  killing process with pid 257126
00:18:58.164   04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 257126
00:18:58.164   04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 257126
00:18:58.421   04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62
00:18:58.421   04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:18:58.421    04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{
00:18:58.421    "subsystems": [
00:18:58.421      {
00:18:58.421        "subsystem": "keyring",
00:18:58.421        "config": [
00:18:58.421          {
00:18:58.421            "method": "keyring_file_add_key",
00:18:58.421            "params": {
00:18:58.421              "name": "key0",
00:18:58.421              "path": "/tmp/tmp.fbnd8laNn2"
00:18:58.421            }
00:18:58.421          }
00:18:58.421        ]
00:18:58.421      },
00:18:58.421      {
00:18:58.421        "subsystem": "iobuf",
00:18:58.421        "config": [
00:18:58.421          {
00:18:58.421            "method": "iobuf_set_options",
00:18:58.421            "params": {
00:18:58.421              "small_pool_count": 8192,
00:18:58.421              "large_pool_count": 1024,
00:18:58.421              "small_bufsize": 8192,
00:18:58.421              "large_bufsize": 135168,
00:18:58.421              "enable_numa": false
00:18:58.421            }
00:18:58.421          }
00:18:58.421        ]
00:18:58.421      },
00:18:58.421      {
00:18:58.421        "subsystem": "sock",
00:18:58.421        "config": [
00:18:58.421          {
00:18:58.421            "method": "sock_set_default_impl",
00:18:58.421            "params": {
00:18:58.421              "impl_name": "posix"
00:18:58.421            }
00:18:58.421          },
00:18:58.421          {
00:18:58.421            "method": "sock_impl_set_options",
00:18:58.421            "params": {
00:18:58.421              "impl_name": "ssl",
00:18:58.421              "recv_buf_size": 4096,
00:18:58.421              "send_buf_size": 4096,
00:18:58.421              "enable_recv_pipe": true,
00:18:58.421              "enable_quickack": false,
00:18:58.421              "enable_placement_id": 0,
00:18:58.421              "enable_zerocopy_send_server": true,
00:18:58.421              "enable_zerocopy_send_client": false,
00:18:58.421              "zerocopy_threshold": 0,
00:18:58.421              "tls_version": 0,
00:18:58.421              "enable_ktls": false
00:18:58.421            }
00:18:58.421          },
00:18:58.421          {
00:18:58.421            "method": "sock_impl_set_options",
00:18:58.421            "params": {
00:18:58.421              "impl_name": "posix",
00:18:58.421              "recv_buf_size": 2097152,
00:18:58.421              "send_buf_size": 2097152,
00:18:58.421              "enable_recv_pipe": true,
00:18:58.421              "enable_quickack": false,
00:18:58.421              "enable_placement_id": 0,
00:18:58.422              "enable_zerocopy_send_server": true,
00:18:58.422              "enable_zerocopy_send_client": false,
00:18:58.422              "zerocopy_threshold": 0,
00:18:58.422              "tls_version": 0,
00:18:58.422              "enable_ktls": false
00:18:58.422            }
00:18:58.422          }
00:18:58.422        ]
00:18:58.422      },
00:18:58.422      {
00:18:58.422        "subsystem": "vmd",
00:18:58.422        "config": []
00:18:58.422      },
00:18:58.422      {
00:18:58.422        "subsystem": "accel",
00:18:58.422        "config": [
00:18:58.422          {
00:18:58.422            "method": "accel_set_options",
00:18:58.422            "params": {
00:18:58.422              "small_cache_size": 128,
00:18:58.422              "large_cache_size": 16,
00:18:58.422              "task_count": 2048,
00:18:58.422              "sequence_count": 2048,
00:18:58.422              "buf_count": 2048
00:18:58.422            }
00:18:58.422          }
00:18:58.422        ]
00:18:58.422      },
00:18:58.422      {
00:18:58.422        "subsystem": "bdev",
00:18:58.422        "config": [
00:18:58.422          {
00:18:58.422            "method": "bdev_set_options",
00:18:58.422            "params": {
00:18:58.422              "bdev_io_pool_size": 65535,
00:18:58.422              "bdev_io_cache_size": 256,
00:18:58.422              "bdev_auto_examine": true,
00:18:58.422              "iobuf_small_cache_size": 128,
00:18:58.422              "iobuf_large_cache_size": 16
00:18:58.422            }
00:18:58.422          },
00:18:58.422          {
00:18:58.422            "method": "bdev_raid_set_options",
00:18:58.422            "params": {
00:18:58.422              "process_window_size_kb": 1024,
00:18:58.422              "process_max_bandwidth_mb_sec": 0
00:18:58.422            }
00:18:58.422          },
00:18:58.422          {
00:18:58.422            "method": "bdev_iscsi_set_options",
00:18:58.422            "params": {
00:18:58.422              "timeout_sec": 30
00:18:58.422            }
00:18:58.422          },
00:18:58.422          {
00:18:58.422            "method": "bdev_nvme_set_options",
00:18:58.422            "params": {
00:18:58.422              "action_on_timeout": "none",
00:18:58.422              "timeout_us": 0,
00:18:58.422              "timeout_admin_us": 0,
00:18:58.422              "keep_alive_timeout_ms": 10000,
00:18:58.422              "arbitration_burst": 0,
00:18:58.422              "low_priority_weight": 0,
00:18:58.422              "medium_priority_weight": 0,
00:18:58.422              "high_priority_weight": 0,
00:18:58.422              "nvme_adminq_poll_period_us": 10000,
00:18:58.422              "nvme_ioq_poll_period_us": 0,
00:18:58.422              "io_queue_requests": 0,
00:18:58.422              "delay_cmd_submit": true,
00:18:58.422              "transport_retry_count": 4,
00:18:58.422              "bdev_retry_count": 3,
00:18:58.422              "transport_ack_timeout": 0,
00:18:58.422              "ctrlr_loss_timeout_sec": 0,
00:18:58.422              "reconnect_delay_sec": 0,
00:18:58.422              "fast_io_fail_timeout_sec": 0,
00:18:58.422              "disable_auto_failback": false,
00:18:58.422              "generate_uuids": false,
00:18:58.422              "transport_tos": 0,
00:18:58.422              "nvme_error_stat": false,
00:18:58.422              "rdma_srq_size": 0,
00:18:58.422              "io_path_stat": false,
00:18:58.422              "allow_accel_sequence": false,
00:18:58.422              "rdma_max_cq_size": 0,
00:18:58.422              "rdma_cm_event_timeout_ms": 0,
00:18:58.422              "dhchap_digests": [
00:18:58.422                "sha256",
00:18:58.422                "sha384",
00:18:58.422                "sha512"
00:18:58.422              ],
00:18:58.422              "dhchap_dhgroups": [
00:18:58.422                "null",
00:18:58.422                "ffdhe2048",
00:18:58.422                "ffdhe3072",
00:18:58.422                "ffdhe4096",
00:18:58.422                "ffdhe6144",
00:18:58.422                "ffdhe8192"
00:18:58.422              ]
00:18:58.422            }
00:18:58.422          },
00:18:58.422          {
00:18:58.422            "method": "bdev_nvme_set_hotplug",
00:18:58.422            "params": {
00:18:58.422              "period_us": 100000,
00:18:58.422              "enable": false
00:18:58.422            }
00:18:58.422          },
00:18:58.422          {
00:18:58.422            "method": "bdev_malloc_create",
00:18:58.422            "params": {
00:18:58.422              "name": "malloc0",
00:18:58.422              "num_blocks": 8192,
00:18:58.422              "block_size": 4096,
00:18:58.422              "physical_block_size": 4096,
00:18:58.422              "uuid": "228cf112-b03a-4aa4-ba00-71fd6183e66e",
00:18:58.422              "optimal_io_boundary": 0,
00:18:58.422              "md_size": 0,
00:18:58.422              "dif_type": 0,
00:18:58.422              "dif_is_head_of_md": false,
00:18:58.422              "dif_pi_format": 0
00:18:58.422            }
00:18:58.422          },
00:18:58.422          {
00:18:58.422            "method": "bdev_wait_for_examine"
00:18:58.422          }
00:18:58.422        ]
00:18:58.422      },
00:18:58.422      {
00:18:58.422        "subsystem": "nbd",
00:18:58.422        "config": []
00:18:58.422      },
00:18:58.422      {
00:18:58.422        "subsystem": "scheduler",
00:18:58.422        "config": [
00:18:58.422          {
00:18:58.422            "method": "framework_set_scheduler",
00:18:58.422            "params": {
00:18:58.422              "name": "static"
00:18:58.422            }
00:18:58.422          }
00:18:58.422        ]
00:18:58.422      },
00:18:58.422      {
00:18:58.422        "subsystem": "nvmf",
00:18:58.422        "config": [
00:18:58.422          {
00:18:58.422            "method": "nvmf_set_config",
00:18:58.422            "params": {
00:18:58.422              "discovery_filter": "match_any",
00:18:58.422              "admin_cmd_passthru": {
00:18:58.422                "identify_ctrlr": false
00:18:58.422              },
00:18:58.422              "dhchap_digests": [
00:18:58.422                "sha256",
00:18:58.422                "sha384",
00:18:58.422                "sha512"
00:18:58.422              ],
00:18:58.422              "dhchap_dhgroups": [
00:18:58.422                "null",
00:18:58.422                "ffdhe2048",
00:18:58.422                "ffdhe3072",
00:18:58.422                "ffdhe4096",
00:18:58.422                "ffdhe6144",
00:18:58.422                "ffdhe8192"
00:18:58.422              ]
00:18:58.422            }
00:18:58.422          },
00:18:58.422          {
00:18:58.422            "method": "nvmf_set_max_subsystems",
00:18:58.422            "params": {
00:18:58.422              "max_subsystems": 1024
00:18:58.422            }
00:18:58.422          },
00:18:58.422          {
00:18:58.422            "method": "nvmf_set_crdt",
00:18:58.422            "params": {
00:18:58.422              "crdt1": 0,
00:18:58.422              "crdt2": 0,
00:18:58.422              "crdt3": 0
00:18:58.422            }
00:18:58.422          },
00:18:58.422          {
00:18:58.422            "method": "nvmf_create_transport",
00:18:58.422            "params": {
00:18:58.422              "trtype": "TCP",
00:18:58.422              "max_queue_depth": 128,
00:18:58.422              "max_io_qpairs_per_ctrlr": 127,
00:18:58.422              "in_capsule_data_size": 4096,
00:18:58.422              "max_io_size": 131072,
00:18:58.422              "io_unit_size": 131072,
00:18:58.422              "max_aq_depth": 128,
00:18:58.422              "num_shared_buffers": 511,
00:18:58.422              "buf_cache_size": 4294967295,
00:18:58.422              "dif_insert_or_strip": false,
00:18:58.422              "zcopy": false,
00:18:58.422              "c2h_success": false,
00:18:58.422              "sock_priority": 0,
00:18:58.422              "abort_timeout_sec": 1,
00:18:58.422              "ack_timeout": 0,
00:18:58.422              "data_wr_pool_size": 0
00:18:58.422            }
00:18:58.422          },
00:18:58.422          {
00:18:58.422            "method": "nvmf_create_subsystem",
00:18:58.422            "params": {
00:18:58.422              "nqn": "nqn.2016-06.io.spdk:cnode1",
00:18:58.422              "allow_any_host": false,
00:18:58.422              "serial_number": "00000000000000000000",
00:18:58.422              "model_number": "SPDK bdev Controller",
00:18:58.422              "max_namespaces": 32,
00:18:58.422              "min_cntlid": 1,
00:18:58.422              "max_cntlid": 65519,
00:18:58.422              "ana_reporting": false
00:18:58.422            }
00:18:58.422          },
00:18:58.422          {
00:18:58.422            "method": "nvmf_subsystem_add_host",
00:18:58.422            "params": {
00:18:58.422              "nqn": "nqn.2016-06.io.spdk:cnode1",
00:18:58.422              "host": "nqn.2016-06.io.spdk:host1",
00:18:58.422              "psk": "key0"
00:18:58.422            }
00:18:58.422          },
00:18:58.422          {
00:18:58.422            "method": "nvmf_subsystem_add_ns",
00:18:58.422            "params": {
00:18:58.422              "nqn": "nqn.2016-06.io.spdk:cnode1",
00:18:58.422              "namespace": {
00:18:58.422                "nsid": 1,
00:18:58.422                "bdev_name": "malloc0",
00:18:58.422                "nguid": "228CF112B03A4AA4BA0071FD6183E66E",
00:18:58.422                "uuid": "228cf112-b03a-4aa4-ba00-71fd6183e66e",
00:18:58.422                "no_auto_visible": false
00:18:58.422              }
00:18:58.422            }
00:18:58.422          },
00:18:58.422          {
00:18:58.422            "method": "nvmf_subsystem_add_listener",
00:18:58.422            "params": {
00:18:58.422              "nqn": "nqn.2016-06.io.spdk:cnode1",
00:18:58.422              "listen_address": {
00:18:58.422                "trtype": "TCP",
00:18:58.422                "adrfam": "IPv4",
00:18:58.422                "traddr": "10.0.0.2",
00:18:58.422                "trsvcid": "4420"
00:18:58.422              },
00:18:58.422              "secure_channel": false,
00:18:58.422              "sock_impl": "ssl"
00:18:58.422            }
00:18:58.422          }
00:18:58.422        ]
00:18:58.422      }
00:18:58.422    ]
00:18:58.422  }'
00:18:58.422   04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:18:58.422   04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:58.422   04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=257562
00:18:58.422   04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62
00:18:58.422   04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 257562
00:18:58.422   04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 257562 ']'
00:18:58.422   04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:58.422   04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:58.422   04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:58.422  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:58.422   04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:58.422   04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:58.422  [2024-12-09 04:09:26.840708] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:18:58.423  [2024-12-09 04:09:26.840790] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:58.423  [2024-12-09 04:09:26.909885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:58.423  [2024-12-09 04:09:26.962516] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:58.423  [2024-12-09 04:09:26.962583] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:58.423  [2024-12-09 04:09:26.962596] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:18:58.423  [2024-12-09 04:09:26.962607] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:18:58.423  [2024-12-09 04:09:26.962625] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:18:58.423  [2024-12-09 04:09:26.963224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:58.680  [2024-12-09 04:09:27.204812] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:58.680  [2024-12-09 04:09:27.236846] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:18:58.680  [2024-12-09 04:09:27.237073] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:58.938   04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:58.938   04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:18:58.938   04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:18:58.938   04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable
00:18:58.938   04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:58.938   04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:58.938   04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=257706
00:18:58.938   04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 257706 /var/tmp/bdevperf.sock
00:18:58.938   04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 257706 ']'
00:18:58.938   04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:58.938   04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63
00:18:58.938   04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:58.938   04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:58.938  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:58.938   04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:58.938    04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{
00:18:58.938    "subsystems": [
00:18:58.938      {
00:18:58.938        "subsystem": "keyring",
00:18:58.938        "config": [
00:18:58.938          {
00:18:58.938            "method": "keyring_file_add_key",
00:18:58.938            "params": {
00:18:58.938              "name": "key0",
00:18:58.938              "path": "/tmp/tmp.fbnd8laNn2"
00:18:58.938            }
00:18:58.938          }
00:18:58.938        ]
00:18:58.938      },
00:18:58.938      {
00:18:58.938        "subsystem": "iobuf",
00:18:58.938        "config": [
00:18:58.938          {
00:18:58.938            "method": "iobuf_set_options",
00:18:58.938            "params": {
00:18:58.938              "small_pool_count": 8192,
00:18:58.938              "large_pool_count": 1024,
00:18:58.938              "small_bufsize": 8192,
00:18:58.938              "large_bufsize": 135168,
00:18:58.938              "enable_numa": false
00:18:58.938            }
00:18:58.938          }
00:18:58.938        ]
00:18:58.938      },
00:18:58.938      {
00:18:58.938        "subsystem": "sock",
00:18:58.938        "config": [
00:18:58.938          {
00:18:58.938            "method": "sock_set_default_impl",
00:18:58.938            "params": {
00:18:58.938              "impl_name": "posix"
00:18:58.938            }
00:18:58.938          },
00:18:58.938          {
00:18:58.938            "method": "sock_impl_set_options",
00:18:58.938            "params": {
00:18:58.938              "impl_name": "ssl",
00:18:58.938              "recv_buf_size": 4096,
00:18:58.938              "send_buf_size": 4096,
00:18:58.938              "enable_recv_pipe": true,
00:18:58.938              "enable_quickack": false,
00:18:58.938              "enable_placement_id": 0,
00:18:58.938              "enable_zerocopy_send_server": true,
00:18:58.938              "enable_zerocopy_send_client": false,
00:18:58.938              "zerocopy_threshold": 0,
00:18:58.938              "tls_version": 0,
00:18:58.938              "enable_ktls": false
00:18:58.938            }
00:18:58.938          },
00:18:58.938          {
00:18:58.939            "method": "sock_impl_set_options",
00:18:58.939            "params": {
00:18:58.939              "impl_name": "posix",
00:18:58.939              "recv_buf_size": 2097152,
00:18:58.939              "send_buf_size": 2097152,
00:18:58.939              "enable_recv_pipe": true,
00:18:58.939              "enable_quickack": false,
00:18:58.939              "enable_placement_id": 0,
00:18:58.939              "enable_zerocopy_send_server": true,
00:18:58.939              "enable_zerocopy_send_client": false,
00:18:58.939              "zerocopy_threshold": 0,
00:18:58.939              "tls_version": 0,
00:18:58.939              "enable_ktls": false
00:18:58.939            }
00:18:58.939          }
00:18:58.939        ]
00:18:58.939      },
00:18:58.939      {
00:18:58.939        "subsystem": "vmd",
00:18:58.939        "config": []
00:18:58.939      },
00:18:58.939      {
00:18:58.939        "subsystem": "accel",
00:18:58.939        "config": [
00:18:58.939          {
00:18:58.939            "method": "accel_set_options",
00:18:58.939            "params": {
00:18:58.939              "small_cache_size": 128,
00:18:58.939              "large_cache_size": 16,
00:18:58.939              "task_count": 2048,
00:18:58.939              "sequence_count": 2048,
00:18:58.939              "buf_count": 2048
00:18:58.939            }
00:18:58.939          }
00:18:58.939        ]
00:18:58.939      },
00:18:58.939      {
00:18:58.939        "subsystem": "bdev",
00:18:58.939        "config": [
00:18:58.939          {
00:18:58.939            "method": "bdev_set_options",
00:18:58.939            "params": {
00:18:58.939              "bdev_io_pool_size": 65535,
00:18:58.939              "bdev_io_cache_size": 256,
00:18:58.939              "bdev_auto_examine": true,
00:18:58.939              "iobuf_small_cache_size": 128,
00:18:58.939              "iobuf_large_cache_size": 16
00:18:58.939            }
00:18:58.939          },
00:18:58.939          {
00:18:58.939            "method": "bdev_raid_set_options",
00:18:58.939            "params": {
00:18:58.939              "process_window_size_kb": 1024,
00:18:58.939              "process_max_bandwidth_mb_sec": 0
00:18:58.939            }
00:18:58.939          },
00:18:58.939          {
00:18:58.939            "method": "bdev_iscsi_set_options",
00:18:58.939            "params": {
00:18:58.939              "timeout_sec": 30
00:18:58.939            }
00:18:58.939          },
00:18:58.939          {
00:18:58.939            "method": "bdev_nvme_set_options",
00:18:58.939            "params": {
00:18:58.939              "action_on_timeout": "none",
00:18:58.939              "timeout_us": 0,
00:18:58.939              "timeout_admin_us": 0,
00:18:58.939              "keep_alive_timeout_ms": 10000,
00:18:58.939              "arbitration_burst": 0,
00:18:58.939              "low_priority_weight": 0,
00:18:58.939              "medium_priority_weight": 0,
00:18:58.939              "high_priority_weight": 0,
00:18:58.939              "nvme_adminq_poll_period_us": 10000,
00:18:58.939              "nvme_ioq_poll_period_us": 0,
00:18:58.939              "io_queue_requests": 512,
00:18:58.939              "delay_cmd_submit": true,
00:18:58.939              "transport_retry_count": 4,
00:18:58.939              "bdev_retry_count": 3,
00:18:58.939              "transport_ack_timeout": 0,
00:18:58.939              "ctrlr_loss_timeout_sec": 0,
00:18:58.939              "reconnect_delay_sec": 0,
00:18:58.939              "fast_io_fail_timeout_sec": 0,
00:18:58.939              "disable_auto_failback": false,
00:18:58.939              "generate_uuids": false,
00:18:58.939              "transport_tos": 0,
00:18:58.939              "nvme_error_stat": false,
00:18:58.939              "rdma_srq_size": 0,
00:18:58.939              "io_path_stat": false,
00:18:58.939              "allow_accel_sequence": false,
00:18:58.939              "rdma_max_cq_size": 0,
00:18:58.939              "rdma_cm_event_timeout_ms": 0,
00:18:58.939              "dhchap_digests": [
00:18:58.939                "sha256",
00:18:58.939                "sha384",
00:18:58.939                "sha512"
00:18:58.939              ],
00:18:58.939              "dhchap_dhgroups": [
00:18:58.939                "null",
00:18:58.939                "ffdhe2048",
00:18:58.939                "ffdhe3072",
00:18:58.939                "ffdhe4096",
00:18:58.939                "ffdhe6144",
00:18:58.939                "ffdhe8192"
00:18:58.939              ]
00:18:58.939            }
00:18:58.939          },
00:18:58.939          {
00:18:58.939            "method": "bdev_nvme_attach_controller",
00:18:58.939            "params": {
00:18:58.939              "name": "nvme0",
00:18:58.939              "trtype": "TCP",
00:18:58.939              "adrfam": "IPv4",
00:18:58.939              "traddr": "10.0.0.2",
00:18:58.939              "trsvcid": "4420",
00:18:58.939              "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:18:58.939              "prchk_reftag": false,
00:18:58.939              "prchk_guard": false,
00:18:58.939              "ctrlr_loss_timeout_sec": 0,
00:18:58.939              "reconnect_delay_sec": 0,
00:18:58.939              "fast_io_fail_timeout_sec": 0,
00:18:58.939              "psk": "key0",
00:18:58.939              "hostnqn": "nqn.2016-06.io.spdk:host1",
00:18:58.939              "hdgst": false,
00:18:58.939              "ddgst": false,
00:18:58.939              "multipath": "multipath"
00:18:58.939            }
00:18:58.939          },
00:18:58.939          {
00:18:58.939            "method": "bdev_nvme_set_hotplug",
00:18:58.939            "params": {
00:18:58.939              "period_us": 100000,
00:18:58.939              "enable": false
00:18:58.939            }
00:18:58.939          },
00:18:58.939          {
00:18:58.939            "method": "bdev_enable_histogram",
00:18:58.939            "params": {
00:18:58.939              "name": "nvme0n1",
00:18:58.939              "enable": true
00:18:58.939            }
00:18:58.939          },
00:18:58.939          {
00:18:58.939            "method": "bdev_wait_for_examine"
00:18:58.939          }
00:18:58.939        ]
00:18:58.939      },
00:18:58.939      {
00:18:58.939        "subsystem": "nbd",
00:18:58.939        "config": []
00:18:58.939      }
00:18:58.939    ]
00:18:58.939  }'
00:18:58.939   04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:58.939  [2024-12-09 04:09:27.328238] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:18:58.939  [2024-12-09 04:09:27.328343] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid257706 ]
00:18:58.939  [2024-12-09 04:09:27.394535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:58.939  [2024-12-09 04:09:27.452893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:18:59.198  [2024-12-09 04:09:27.630100] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:18:59.198   04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:59.198   04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:18:59.198    04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:18:59.198    04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name'
00:18:59.456   04:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:59.456   04:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:18:59.714  Running I/O for 1 seconds...
00:19:00.645       3247.00 IOPS,    12.68 MiB/s
00:19:00.645                                                                                                  Latency(us)
00:19:00.645  
[2024-12-09T03:09:29.221Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:00.645  Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:00.645  	 Verification LBA range: start 0x0 length 0x2000
00:19:00.645  	 nvme0n1             :       1.02    3305.60      12.91       0.00     0.00   38352.73    6553.60   80390.83
00:19:00.645  
[2024-12-09T03:09:29.221Z]  ===================================================================================================================
00:19:00.645  
[2024-12-09T03:09:29.221Z]  Total                       :               3305.60      12.91       0.00     0.00   38352.73    6553.60   80390.83
00:19:00.645  {
00:19:00.645    "results": [
00:19:00.645      {
00:19:00.645        "job": "nvme0n1",
00:19:00.645        "core_mask": "0x2",
00:19:00.645        "workload": "verify",
00:19:00.645        "status": "finished",
00:19:00.645        "verify_range": {
00:19:00.645          "start": 0,
00:19:00.645          "length": 8192
00:19:00.645        },
00:19:00.645        "queue_depth": 128,
00:19:00.645        "io_size": 4096,
00:19:00.645        "runtime": 1.020996,
00:19:00.645        "iops": 3305.595712421988,
00:19:00.645        "mibps": 12.912483251648391,
00:19:00.645        "io_failed": 0,
00:19:00.645        "io_timeout": 0,
00:19:00.645        "avg_latency_us": 38352.731598134436,
00:19:00.645        "min_latency_us": 6553.6,
00:19:00.645        "max_latency_us": 80390.82666666666
00:19:00.645      }
00:19:00.645    ],
00:19:00.645    "core_count": 1
00:19:00.645  }
00:19:00.645   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT
00:19:00.645   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup
00:19:00.646   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0
00:19:00.646   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id
00:19:00.646   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0
00:19:00.646   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']'
00:19:00.646    04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:19:00.646   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0
00:19:00.646   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]]
00:19:00.646   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files
00:19:00.646   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:19:00.646  nvmf_trace.0
00:19:00.903   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0
00:19:00.903   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 257706
00:19:00.903   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 257706 ']'
00:19:00.903   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 257706
00:19:00.903    04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:19:00.903   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:00.903    04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 257706
00:19:00.903   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:19:00.903   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:19:00.903   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 257706'
00:19:00.903  killing process with pid 257706
00:19:00.903   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 257706
00:19:00.903  Received shutdown signal, test time was about 1.000000 seconds
00:19:00.903  
00:19:00.903                                                                                                  Latency(us)
00:19:00.903  
[2024-12-09T03:09:29.479Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:00.903  
[2024-12-09T03:09:29.479Z]  ===================================================================================================================
00:19:00.903  
[2024-12-09T03:09:29.479Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:19:00.903   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 257706
00:19:01.160   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini
00:19:01.160   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup
00:19:01.160   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync
00:19:01.160   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:19:01.160   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e
00:19:01.160   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20}
00:19:01.160   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:19:01.160  rmmod nvme_tcp
00:19:01.160  rmmod nvme_fabrics
00:19:01.160  rmmod nvme_keyring
00:19:01.160   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:19:01.160   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e
00:19:01.160   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0
00:19:01.160   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 257562 ']'
00:19:01.160   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 257562
00:19:01.160   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 257562 ']'
00:19:01.160   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 257562
00:19:01.160    04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:19:01.160   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:01.160    04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 257562
00:19:01.160   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:01.160   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:01.160   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 257562'
00:19:01.160  killing process with pid 257562
00:19:01.160   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 257562
00:19:01.160   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 257562
00:19:01.419   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:19:01.419   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:19:01.419   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:19:01.419   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr
00:19:01.419   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save
00:19:01.419   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:19:01.419   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore
00:19:01.419   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:19:01.420   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns
00:19:01.420   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:01.420   04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:19:01.420    04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:03.321   04:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:19:03.321   04:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.ApVRvtgogO /tmp/tmp.lhfP9KGslT /tmp/tmp.fbnd8laNn2
00:19:03.321  
00:19:03.321  real	1m22.393s
00:19:03.321  user	2m19.648s
00:19:03.321  sys	0m23.808s
00:19:03.321   04:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:03.321   04:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:19:03.321  ************************************
00:19:03.321  END TEST nvmf_tls
00:19:03.321  ************************************
00:19:03.579   04:09:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp
00:19:03.580   04:09:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:19:03.580   04:09:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:03.580   04:09:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:19:03.580  ************************************
00:19:03.580  START TEST nvmf_fips
00:19:03.580  ************************************
00:19:03.580   04:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp
00:19:03.580  * Looking for test storage...
00:19:03.580  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips
00:19:03.580    04:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:19:03.580     04:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version
00:19:03.580     04:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-:
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-:
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<'
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 ))
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:19:03.580     04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1
00:19:03.580     04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1
00:19:03.580     04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:19:03.580     04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1
00:19:03.580     04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2
00:19:03.580     04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2
00:19:03.580     04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:19:03.580     04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:19:03.580  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:03.580  		--rc genhtml_branch_coverage=1
00:19:03.580  		--rc genhtml_function_coverage=1
00:19:03.580  		--rc genhtml_legend=1
00:19:03.580  		--rc geninfo_all_blocks=1
00:19:03.580  		--rc geninfo_unexecuted_blocks=1
00:19:03.580  		
00:19:03.580  		'
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:19:03.580  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:03.580  		--rc genhtml_branch_coverage=1
00:19:03.580  		--rc genhtml_function_coverage=1
00:19:03.580  		--rc genhtml_legend=1
00:19:03.580  		--rc geninfo_all_blocks=1
00:19:03.580  		--rc geninfo_unexecuted_blocks=1
00:19:03.580  		
00:19:03.580  		'
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:19:03.580  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:03.580  		--rc genhtml_branch_coverage=1
00:19:03.580  		--rc genhtml_function_coverage=1
00:19:03.580  		--rc genhtml_legend=1
00:19:03.580  		--rc geninfo_all_blocks=1
00:19:03.580  		--rc geninfo_unexecuted_blocks=1
00:19:03.580  		
00:19:03.580  		'
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:19:03.580  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:03.580  		--rc genhtml_branch_coverage=1
00:19:03.580  		--rc genhtml_function_coverage=1
00:19:03.580  		--rc genhtml_legend=1
00:19:03.580  		--rc geninfo_all_blocks=1
00:19:03.580  		--rc geninfo_unexecuted_blocks=1
00:19:03.580  		
00:19:03.580  		'
00:19:03.580   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:19:03.580     04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:19:03.580     04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:19:03.580    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:19:03.580     04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob
00:19:03.580     04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:19:03.580     04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:19:03.580     04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:19:03.580      04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:03.580      04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:03.580      04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:03.580      04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH
00:19:03.581      04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:03.581    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0
00:19:03.581    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:19:03.581    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:19:03.581    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:19:03.581    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:19:03.581    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:19:03.581    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:19:03.581  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:19:03.581    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:19:03.581    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:19:03.581    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0
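The `[: : integer expression expected` message above comes from `'[' '' -eq 1 ']'`: an empty variable reaches a numeric `test`, which cannot parse the empty string as an integer. A minimal standalone sketch of the failure and the defensive default (not the SPDK source; `flag` is an illustrative name):

```shell
# Reproduce and avoid the "[: : integer expression expected" error
# seen at common.sh line 33 when an empty variable hits a numeric test.
flag=""                          # unset/empty, as in the log

# Fragile form: errors out (stderr suppressed here) and evaluates false.
if [ "$flag" -eq 1 ] 2>/dev/null; then
    echo "hugepages forced"
fi

# Defensive form: default the empty variable to 0 before comparing.
if [ "${flag:-0}" -eq 1 ]; then
    echo "hugepages forced"
else
    echo "flag unset, treating as 0"
fi
```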
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0
00:19:03.581    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version
00:19:03.581    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}'
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-:
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-:
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>='
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 ))
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:19:03.581    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3
00:19:03.581    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3
00:19:03.581    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]]
00:19:03.581    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3
00:19:03.581    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3
00:19:03.581    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3
00:19:03.581    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]]
00:19:03.581    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ ))
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:19:03.581    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1
00:19:03.581    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1
00:19:03.581    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:19:03.581    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1
00:19:03.581    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0
00:19:03.581    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0
00:19:03.581    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]]
00:19:03.581    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0
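The trace above shows `scripts/common.sh` splitting `3.1.1` and `3.0.0` on `.-:` into arrays and comparing them field by field until one side wins. A minimal standalone sketch of that comparison; `ver_ge` is an illustrative name, not the SPDK helper:

```shell
# Field-wise dotted-version comparison, as traced above.
# Returns 0 (true) when $1 >= $2.
ver_ge() {
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        # Missing fields compare as 0 (e.g. "1.15" vs "2" pads to "2.0").
        if (( ${v1[i]:-0} > ${v2[i]:-0} )); then return 0; fi
        if (( ${v1[i]:-0} < ${v2[i]:-0} )); then return 1; fi
    done
    return 0   # all fields equal, so >= holds
}

ver_ge 3.1.1 3.0.0 && echo "3.1.1 >= 3.0.0"
ver_ge 1.15 2 || echo "1.15 < 2"
```

This mirrors both comparisons in the log: the lcov check (`lt 1.15 2`) and the OpenSSL check (`ge 3.1.1 3.0.0`).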
00:19:03.581    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]]
00:19:03.581    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode'
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]]
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! -t 0 ]]
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat -
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf
00:19:03.581   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers
00:19:03.581    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers
00:19:03.581    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name
00:19:03.839   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 ))
00:19:03.839   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[     name: openssl base provider != *base* ]]
00:19:03.839   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[     name: red hat enterprise linux 9 - openssl fips provider != *fips* ]]
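Above, `fips.sh` collects `openssl list -providers | grep name` into an array with `mapfile` and asserts exactly two entries: one matching `*base*` and one matching `*fips*`. A sketch of the same check, run against canned provider output so it does not require a FIPS-configured OpenSSL install:

```shell
# Provider check as in fips.sh@117-121, against sample output matching
# the log. On a real system the input would come from:
#   openssl list -providers | grep name
sample='    name: openssl base provider
    name: red hat enterprise linux 9 - openssl fips provider'

mapfile -t providers <<< "$sample"

(( ${#providers[@]} == 2 ))        || { echo "expected exactly 2 providers"; exit 1; }
[[ ${providers[0]} == *base* ]]    || { echo "missing base provider"; exit 1; }
[[ ${providers[1]} == *fips* ]]    || { echo "missing fips provider"; exit 1; }
echo "provider check passed"
```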
00:19:03.839   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62
00:19:03.839    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # :
00:19:03.839   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0
00:19:03.839   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62
00:19:03.839   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl
00:19:03.839   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:03.839    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl
00:19:03.839   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:03.839    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl
00:19:03.839   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:03.839   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl
00:19:03.839   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]]
00:19:03.839   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62
00:19:03.839  Error setting digest
00:19:03.839  400230F4557F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties ()
00:19:03.839  400230F4557F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272:
00:19:03.839   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1
00:19:03.839   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:19:03.839   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:19:03.839   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 ))
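The `NOT openssl md5` step above is an inverted assertion: with the FIPS provider active, MD5 is unavailable, so `openssl md5` is *expected* to fail ("Error setting digest"), and its nonzero exit status is the passing outcome. A hedged sketch of that inversion; `false` stands in for the failing `openssl md5` so the sketch also runs on non-FIPS machines, and `not` here is an illustrative reimplementation, not SPDK's wrapper:

```shell
# Inverted check: the wrapped command must FAIL for the test to pass,
# as with `openssl md5` under an active FIPS provider.
not() {
    if "$@"; then
        return 1   # command unexpectedly succeeded
    fi
    return 0       # command failed, which is what we wanted
}

if not false; then      # `false` simulates the failing `openssl md5`
    echo "md5 correctly rejected"
fi
```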
00:19:03.839   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit
00:19:03.839   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:19:03.839   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:19:03.839   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs
00:19:03.839   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no
00:19:03.839   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns
00:19:03.839   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:03.839   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:19:03.839    04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:03.839   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:19:03.839   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:19:03.839   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable
00:19:03.839   04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=()
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=()
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=()
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=()
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=()
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=()
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=()
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:19:06.368  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:19:06.368  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:19:06.368   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]]
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:19:06.369  Found net devices under 0000:0a:00.0: cvl_0_0
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]]
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:19:06.369  Found net devices under 0000:0a:00.1: cvl_0_1
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:19:06.369  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:06.369  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms
00:19:06.369  
00:19:06.369  --- 10.0.0.2 ping statistics ---
00:19:06.369  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:06.369  rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:19:06.369  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:06.369  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms
00:19:06.369  
00:19:06.369  --- 10.0.0.1 ping statistics ---
00:19:06.369  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:06.369  rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms
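The trace above assigns 10.0.0.1/24 to the initiator-side interface (cvl_0_1) and 10.0.0.2/24 to cvl_0_0 inside the cvl_0_0_ns_spdk namespace, then verifies reachability in both directions with ping. A quick sanity check of that addressing scheme, using Python's ipaddress module (a sketch, not part of the test suite):

```python
import ipaddress

# Address/prefix pairs taken from the "ip addr add" commands in the log above.
initiator = ipaddress.ip_interface("10.0.0.1/24")
target = ipaddress.ip_interface("10.0.0.2/24")

# Both ends must land in the same subnet for the ping checks to succeed
# without any routing between the namespace and the host.
assert initiator.network == target.network
print(initiator.network)  # 10.0.0.0/24
```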
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=259943
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 259943
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 259943 ']'
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:06.369  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:19:06.369  [2024-12-09 04:09:34.605577] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:19:06.369  [2024-12-09 04:09:34.605682] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:19:06.369  [2024-12-09 04:09:34.676158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:06.369  [2024-12-09 04:09:34.731768] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:19:06.369  [2024-12-09 04:09:34.731828] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:19:06.369  [2024-12-09 04:09:34.731852] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:19:06.369  [2024-12-09 04:09:34.731870] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:19:06.369  [2024-12-09 04:09:34.731880] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:19:06.369  [2024-12-09 04:09:34.732430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:
00:19:06.369    04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.hqk
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.hqk
00:19:06.369   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.hqk
00:19:06.370   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.hqk
00:19:06.370   04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
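The key written to /tmp/spdk-psk.hqk above uses the NVMe/TLS PSK interchange format: an identifier field, a hash indicator, and a base64 payload, colon-separated. A structural check of that string (the 32-byte-PSK-plus-CRC32 payload layout is an assumption here; verify against the NVMe TLS specification):

```python
import base64

# TLS PSK interchange string as written to the key file in the log.
key = "NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:"

fields = key.split(":")
assert fields[0] == "NVMeTLSkey-1"  # interchange-format identifier
assert fields[1] == "01"            # hash indicator field
payload = base64.b64decode(fields[2])
# 48 base64 characters decode to 36 bytes; per the interchange format this
# is assumed to be a 32-byte PSK followed by a 4-byte CRC32.
print(len(payload))  # 36
```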
00:19:06.627  [2024-12-09 04:09:35.168728] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:19:06.627  [2024-12-09 04:09:35.184760] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:19:06.627  [2024-12-09 04:09:35.185014] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:06.889  malloc0
00:19:06.889   04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:19:06.889   04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=260093
00:19:06.889   04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:19:06.889   04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 260093 /var/tmp/bdevperf.sock
00:19:06.889   04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 260093 ']'
00:19:06.889   04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:19:06.889   04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:06.889   04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:19:06.889  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:19:06.889   04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:06.889   04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:19:06.889  [2024-12-09 04:09:35.322660] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:19:06.889  [2024-12-09 04:09:35.322744] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid260093 ]
00:19:06.889  [2024-12-09 04:09:35.405939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:07.151  [2024-12-09 04:09:35.478967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:19:07.151   04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:07.151   04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0
00:19:07.151   04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.hqk
00:19:07.408   04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
00:19:07.664  [2024-12-09 04:09:36.159920] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:19:07.664  TLSTESTn1
00:19:07.921   04:09:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:19:07.921  Running I/O for 10 seconds...
00:19:10.223       3296.00 IOPS,    12.88 MiB/s
[2024-12-09T03:09:39.727Z]      3309.00 IOPS,    12.93 MiB/s
[2024-12-09T03:09:40.656Z]      3312.33 IOPS,    12.94 MiB/s
[2024-12-09T03:09:41.586Z]      3312.25 IOPS,    12.94 MiB/s
[2024-12-09T03:09:42.519Z]      3314.60 IOPS,    12.95 MiB/s
[2024-12-09T03:09:43.451Z]      3329.50 IOPS,    13.01 MiB/s
[2024-12-09T03:09:44.383Z]      3319.29 IOPS,    12.97 MiB/s
[2024-12-09T03:09:45.760Z]      3312.75 IOPS,    12.94 MiB/s
[2024-12-09T03:09:46.692Z]      3324.78 IOPS,    12.99 MiB/s
[2024-12-09T03:09:46.692Z]      3321.60 IOPS,    12.97 MiB/s
00:19:18.116                                                                                                  Latency(us)
00:19:18.116  
[2024-12-09T03:09:46.692Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:18.116  Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:18.116  	 Verification LBA range: start 0x0 length 0x2000
00:19:18.116  	 TLSTESTn1           :      10.03    3324.23      12.99       0.00     0.00   38429.65   10485.76   33981.63
00:19:18.116  
[2024-12-09T03:09:46.692Z]  ===================================================================================================================
00:19:18.116  
[2024-12-09T03:09:46.692Z]  Total                       :               3324.23      12.99       0.00     0.00   38429.65   10485.76   33981.63
00:19:18.116  {
00:19:18.116    "results": [
00:19:18.116      {
00:19:18.116        "job": "TLSTESTn1",
00:19:18.116        "core_mask": "0x4",
00:19:18.116        "workload": "verify",
00:19:18.116        "status": "finished",
00:19:18.116        "verify_range": {
00:19:18.116          "start": 0,
00:19:18.116          "length": 8192
00:19:18.116        },
00:19:18.116        "queue_depth": 128,
00:19:18.116        "io_size": 4096,
00:19:18.116        "runtime": 10.029991,
00:19:18.116        "iops": 3324.230300904557,
00:19:18.116        "mibps": 12.985274612908427,
00:19:18.116        "io_failed": 0,
00:19:18.116        "io_timeout": 0,
00:19:18.116        "avg_latency_us": 38429.65247311255,
00:19:18.116        "min_latency_us": 10485.76,
00:19:18.116        "max_latency_us": 33981.62962962963
00:19:18.116      }
00:19:18.116    ],
00:19:18.116    "core_count": 1
00:19:18.116  }
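The JSON object bdevperf emits above reports both IOPS and MiB/s for the same run; with a fixed 4096-byte I/O size the two are related by a factor of 4096/2^20 = 1/256. A small consistency check over the reported numbers (a sketch using an excerpt of the result object from the log):

```python
import json

# Excerpt of the bdevperf result object printed at the end of the run above.
result = json.loads("""{
  "results": [
    {
      "job": "TLSTESTn1",
      "io_size": 4096,
      "runtime": 10.029991,
      "iops": 3324.230300904557,
      "mibps": 12.985274612908427
    }
  ],
  "core_count": 1
}""")

job = result["results"][0]
# MiB/s should equal IOPS * io_size / 2^20 for fixed-size 4 KiB I/Os.
derived_mibps = job["iops"] * job["io_size"] / (1 << 20)
assert abs(derived_mibps - job["mibps"]) < 1e-9
```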
00:19:18.116   04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:19:18.116   04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:19:18.116   04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id
00:19:18.116   04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0
00:19:18.116   04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']'
00:19:18.116    04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:19:18.116   04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0
00:19:18.116   04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]]
00:19:18.116   04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files
00:19:18.117   04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:19:18.117  nvmf_trace.0
00:19:18.117   04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0
00:19:18.117   04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 260093
00:19:18.117   04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 260093 ']'
00:19:18.117   04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 260093
00:19:18.117    04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname
00:19:18.117   04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:18.117    04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 260093
00:19:18.117   04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:19:18.117   04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:19:18.117   04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 260093'
00:19:18.117  killing process with pid 260093
00:19:18.117   04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 260093
00:19:18.117  Received shutdown signal, test time was about 10.000000 seconds
00:19:18.117  
00:19:18.117                                                                                                  Latency(us)
00:19:18.117  
[2024-12-09T03:09:46.693Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:18.117  
[2024-12-09T03:09:46.693Z]  ===================================================================================================================
00:19:18.117  
[2024-12-09T03:09:46.693Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:19:18.117   04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 260093
00:19:18.374   04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini
00:19:18.374   04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup
00:19:18.374   04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync
00:19:18.375   04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:19:18.375   04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e
00:19:18.375   04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20}
00:19:18.375   04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:19:18.375  rmmod nvme_tcp
00:19:18.375  rmmod nvme_fabrics
00:19:18.375  rmmod nvme_keyring
00:19:18.375   04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:19:18.375   04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e
00:19:18.375   04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0
00:19:18.375   04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 259943 ']'
00:19:18.375   04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 259943
00:19:18.375   04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 259943 ']'
00:19:18.375   04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 259943
00:19:18.375    04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname
00:19:18.375   04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:18.375    04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 259943
00:19:18.375   04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:19:18.375   04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:19:18.375   04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 259943'
00:19:18.375  killing process with pid 259943
00:19:18.375   04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 259943
00:19:18.375   04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 259943
00:19:18.632   04:09:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:19:18.632   04:09:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:19:18.632   04:09:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:19:18.632   04:09:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr
00:19:18.632   04:09:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save
00:19:18.632   04:09:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:19:18.632   04:09:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore
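The iptr cleanup above works because every rule the test added earlier (via the ipts wrapper) carried an "SPDK_NVMF" comment: iptables-save is piped through grep -v SPDK_NVMF and back into iptables-restore, dropping only the tagged rules. The filtering step can be illustrated over a hypothetical rules dump (the rule texts below are made up for illustration):

```python
# Hypothetical iptables-save output: SPDK-added rules are tagged with an
# "SPDK_NVMF" comment so cleanup can drop them while keeping everything else.
dump = [
    "-A INPUT -i lo -j ACCEPT",
    '-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT '
    '-m comment --comment "SPDK_NVMF:..."',
    "-A INPUT -p icmp -j ACCEPT",
]

# Equivalent of `iptables-save | grep -v SPDK_NVMF`.
kept = [rule for rule in dump if "SPDK_NVMF" not in rule]
print(len(kept))  # 2
```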
00:19:18.632   04:09:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:19:18.632   04:09:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns
00:19:18.632   04:09:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:18.632   04:09:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:19:18.632    04:09:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:21.164   04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:19:21.164   04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.hqk
00:19:21.164  
00:19:21.164  real	0m17.228s
00:19:21.164  user	0m23.332s
00:19:21.164  sys	0m5.102s
00:19:21.164   04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:21.164   04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:19:21.164  ************************************
00:19:21.164  END TEST nvmf_fips
00:19:21.164  ************************************
00:19:21.164   04:09:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp
00:19:21.164   04:09:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:19:21.164   04:09:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:21.164   04:09:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:19:21.164  ************************************
00:19:21.164  START TEST nvmf_control_msg_list
00:19:21.164  ************************************
00:19:21.164   04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp
00:19:21.164  * Looking for test storage...
00:19:21.164  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:19:21.164    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:19:21.164     04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version
00:19:21.164     04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:19:21.164    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:19:21.164    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:19:21.164    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l
00:19:21.164    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l
00:19:21.164    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-:
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-:
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<'
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 ))
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:19:21.165     04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1
00:19:21.165     04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1
00:19:21.165     04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:19:21.165     04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1
00:19:21.165     04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2
00:19:21.165     04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2
00:19:21.165     04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:19:21.165     04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0
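The `cmp_versions` trace above (checking whether lcov 1.15 is older than 2) condenses to a standalone sketch. The name and the split-on-`.-:` logic mirror what the xtrace shows, but this is an illustrative re-derivation, not the actual `scripts/common.sh` helper:

```shell
# Split versions on . - : and compare component by component, as the trace does.
lt() {
    local IFS='.-:'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # missing components default to 0; the first difference decides
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1  # equal versions are not "less than"
}

lt 1.15 2 && echo "1.15 < 2"   # succeeds, matching the 'return 0' at scripts/common.sh@368
```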
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:19:21.165  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:21.165  		--rc genhtml_branch_coverage=1
00:19:21.165  		--rc genhtml_function_coverage=1
00:19:21.165  		--rc genhtml_legend=1
00:19:21.165  		--rc geninfo_all_blocks=1
00:19:21.165  		--rc geninfo_unexecuted_blocks=1
00:19:21.165  		
00:19:21.165  		'
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:19:21.165  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:21.165  		--rc genhtml_branch_coverage=1
00:19:21.165  		--rc genhtml_function_coverage=1
00:19:21.165  		--rc genhtml_legend=1
00:19:21.165  		--rc geninfo_all_blocks=1
00:19:21.165  		--rc geninfo_unexecuted_blocks=1
00:19:21.165  		
00:19:21.165  		'
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:19:21.165  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:21.165  		--rc genhtml_branch_coverage=1
00:19:21.165  		--rc genhtml_function_coverage=1
00:19:21.165  		--rc genhtml_legend=1
00:19:21.165  		--rc geninfo_all_blocks=1
00:19:21.165  		--rc geninfo_unexecuted_blocks=1
00:19:21.165  		
00:19:21.165  		'
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:19:21.165  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:21.165  		--rc genhtml_branch_coverage=1
00:19:21.165  		--rc genhtml_function_coverage=1
00:19:21.165  		--rc genhtml_legend=1
00:19:21.165  		--rc geninfo_all_blocks=1
00:19:21.165  		--rc geninfo_unexecuted_blocks=1
00:19:21.165  		
00:19:21.165  		'
00:19:21.165   04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:19:21.165     04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:19:21.165     04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:19:21.165     04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob
00:19:21.165     04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:19:21.165     04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:19:21.165     04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:19:21.165      04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:21.165      04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:21.165      04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:21.165      04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH
00:19:21.165      04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:19:21.165  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0
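The `[: : integer expression expected` complaint a few lines up (nvmf/common.sh line 33) is a real shell warning in this run: an empty variable reaches a numeric `-eq` test. A defensive form, sketched here with a hypothetical variable name rather than the script's actual one, avoids it:

```shell
# '[' '' -eq 1 ']' errors because '' is not an integer; defaulting the
# expansion keeps the same logic without the warning.
no_huge=""                           # hypothetical stand-in for the empty flag
if [ "${no_huge:-0}" -eq 1 ]; then   # empty -> 0, so the test is well-formed
    echo "hugepages disabled"
fi
```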
00:19:21.165   04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit
00:19:21.165   04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:19:21.165   04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:19:21.165   04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs
00:19:21.165   04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no
00:19:21.165   04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns
00:19:21.165   04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:21.165   04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:19:21.165    04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:21.165   04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:19:21.165   04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:19:21.165   04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable
00:19:21.165   04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=()
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=()
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=()
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=()
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=()
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=()
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=()
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:19:23.280  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:19:23.280  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]]
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:19:23.280  Found net devices under 0000:0a:00.0: cvl_0_0
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]]
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:19:23.280  Found net devices under 0000:0a:00.1: cvl_0_1
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:19:23.280  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:23.280  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms
00:19:23.280  
00:19:23.280  --- 10.0.0.2 ping statistics ---
00:19:23.280  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:23.280  rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:19:23.280  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:23.280  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms
00:19:23.280  
00:19:23.280  --- 10.0.0.1 ping statistics ---
00:19:23.280  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:23.280  rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp
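The interface plumbing logged through `nvmf_tcp_init` boils down to a short recipe: the target-side NIC is moved into its own network namespace so target (10.0.0.2) and initiator (10.0.0.1) exchange traffic over the real link even on one host. This is a hedged summary of the commands visible above; the interface names `cvl_0_0`/`cvl_0_1` are specific to this runner, and the steps need root plus real NICs:

```shell
NS=cvl_0_0_ns_spdk TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1

ip -4 addr flush "$TARGET_IF" && ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"              # target NIC lives in its own netns
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"       # initiator side stays in the root netns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP default port
ping -c 1 10.0.0.2                                # cross-namespace sanity check, as in the log
```

Teardown in the real script happens via the `nvmftestfini` trap; for a manual run, `ip netns del "$NS"` undoes the split.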
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=263365
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 263365
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 263365 ']'
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:23.280  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:19:23.280  [2024-12-09 04:09:51.552990] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:19:23.280  [2024-12-09 04:09:51.553088] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:19:23.280  [2024-12-09 04:09:51.623821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:23.280  [2024-12-09 04:09:51.676341] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:19:23.280  [2024-12-09 04:09:51.676416] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:19:23.280  [2024-12-09 04:09:51.676438] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:19:23.280  [2024-12-09 04:09:51.676448] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:19:23.280  [2024-12-09 04:09:51.676458] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:19:23.280  [2024-12-09 04:09:51.677039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:19:23.280  [2024-12-09 04:09:51.816471] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
00:19:23.280   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:23.281   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:19:23.281   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:23.281   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512
00:19:23.281   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:23.281   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:19:23.591  Malloc0
00:19:23.591   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:23.591   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
00:19:23.591   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:23.591   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:19:23.591   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:23.591   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:19:23.591   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:23.591   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:19:23.591  [2024-12-09 04:09:51.856982] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
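The five RPCs traced above (create transport, create subsystem, create a malloc bdev, attach it as a namespace, add a TCP listener) form the standard NVMe-oF TCP target bring-up. Collected into one readable sketch below; `rpc` here is a stand-in that only prints the command line (a dry run), whereas the real test drives SPDK's `scripts/rpc.py` against a running `nvmf_tgt`:

```shell
# Dry-run sketch of the target setup sequence from control_msg_list.sh.
# All RPC names and arguments are taken verbatim from the trace above;
# the 'rpc' function is a print-only stand-in for scripts/rpc.py.
rpc() { echo rpc.py "$@"; }

subnqn=nqn.2024-07.io.spdk:cnode0

rpc nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
rpc nvmf_create_subsystem "$subnqn" -a
rpc bdev_malloc_create -b Malloc0 32 512
rpc nvmf_subsystem_add_ns "$subnqn" Malloc0
rpc nvmf_subsystem_add_listener "$subnqn" -t tcp -a 10.0.0.2 -s 4420
```

The unusually small `--in-capsule-data-size 768 --control-msg-num 1` settings are what make this test exercise the control-message free list under contention.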
00:19:23.591   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:23.591   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=263387
00:19:23.591   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:19:23.591   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=263388
00:19:23.591   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:19:23.591   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=263389
00:19:23.591   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 263387
00:19:23.591   04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:19:23.591  [2024-12-09 04:09:51.925464] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem.  This behavior is deprecated and will be removed in a future release.
00:19:23.591  [2024-12-09 04:09:51.935508] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem.  This behavior is deprecated and will be removed in a future release.
00:19:23.591  [2024-12-09 04:09:51.935718] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem.  This behavior is deprecated and will be removed in a future release.
00:19:24.718  Initializing NVMe Controllers
00:19:24.718  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:19:24.718  Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1
00:19:24.718  Initialization complete. Launching workers.
00:19:24.718  ========================================================
00:19:24.718                                                                                                               Latency(us)
00:19:24.718  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:19:24.718  TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core  1:      25.00       0.10   40885.33   40570.48   40964.76
00:19:24.718  ========================================================
00:19:24.718  Total                                                                    :      25.00       0.10   40885.33   40570.48   40964.76
00:19:24.718  
00:19:24.718   04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 263388
00:19:24.718  Initializing NVMe Controllers
00:19:24.718  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:19:24.718  Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3
00:19:24.718  Initialization complete. Launching workers.
00:19:24.718  ========================================================
00:19:24.718                                                                                                               Latency(us)
00:19:24.718  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:19:24.718  TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core  3:    6306.99      24.64     158.12     150.93     523.82
00:19:24.718  ========================================================
00:19:24.718  Total                                                                    :    6306.99      24.64     158.12     150.93     523.82
00:19:24.718  
00:19:24.718  Initializing NVMe Controllers
00:19:24.718  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:19:24.718  Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2
00:19:24.718  Initialization complete. Launching workers.
00:19:24.718  ========================================================
00:19:24.718                                                                                                               Latency(us)
00:19:24.718  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:19:24.718  TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core  2:      25.00       0.10   40901.39   40836.96   40966.92
00:19:24.718  ========================================================
00:19:24.718  Total                                                                    :      25.00       0.10   40901.39   40836.96   40966.92
00:19:24.718  
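The three `spdk_nvme_perf` instances above are launched in the background on separate core masks, their PIDs recorded (`perf_pid1..3`), and then reaped with `wait` so each run's exit status is checked. A minimal generic sketch of that pattern, with `sleep` standing in for the perf binary:

```shell
# Background several workers, capture each PID, then wait on each one to
# collect its exit status -- the pattern control_msg_list.sh uses for the
# three spdk_nvme_perf runs. 'worker' is an illustrative stand-in.
worker() { sleep "$1"; echo "worker $2 done"; }

worker 0.2 1 & pid1=$!
worker 0.1 2 & pid2=$!
worker 0.1 3 & pid3=$!

# wait returns each child's exit status, so a failed perf run fails the test
wait "$pid1"
wait "$pid2"
wait "$pid3"
echo "all workers finished"
```

Note that the workers run concurrently (here on lcores 1, 2 and 3 via `-c 0x2/0x4/0x8`), which is why the per-core latency tables above differ so sharply: with `--control-msg-num 1` the queues contend for a single control message.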
00:19:24.718   04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 263389
00:19:24.718   04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:19:24.718   04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini
00:19:24.718   04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup
00:19:24.718   04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync
00:19:24.718   04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:19:24.718   04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e
00:19:24.718   04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20}
00:19:24.718   04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:19:24.718  rmmod nvme_tcp
00:19:24.976  rmmod nvme_fabrics
00:19:24.976  rmmod nvme_keyring
00:19:24.976   04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:19:24.976   04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e
00:19:24.976   04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0
00:19:24.976   04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 263365 ']'
00:19:24.976   04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 263365
00:19:24.976   04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 263365 ']'
00:19:24.976   04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 263365
00:19:24.976    04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname
00:19:24.976   04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:24.976    04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 263365
00:19:24.976   04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:24.976   04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:24.976   04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 263365'
00:19:24.976  killing process with pid 263365
00:19:24.976   04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 263365
00:19:24.976   04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 263365
00:19:25.235   04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:19:25.235   04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:19:25.235   04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:19:25.235   04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr
00:19:25.235   04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save
00:19:25.235   04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:19:25.235   04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore
00:19:25.235   04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:19:25.235   04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns
00:19:25.235   04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:25.235   04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:19:25.235    04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:27.135   04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:19:27.135  
00:19:27.135  real	0m6.471s
00:19:27.135  user	0m6.234s
00:19:27.135  sys	0m2.521s
00:19:27.135   04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:27.135   04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:19:27.135  ************************************
00:19:27.135  END TEST nvmf_control_msg_list
00:19:27.135  ************************************
00:19:27.135   04:09:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp
00:19:27.135   04:09:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:19:27.135   04:09:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:27.135   04:09:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:19:27.393  ************************************
00:19:27.393  START TEST nvmf_wait_for_buf
00:19:27.393  ************************************
00:19:27.393   04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp
00:19:27.393  * Looking for test storage...
00:19:27.393  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:19:27.393    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:19:27.393     04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version
00:19:27.393     04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:19:27.393    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:19:27.393    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:19:27.393    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:19:27.393    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:19:27.393    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-:
00:19:27.393    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1
00:19:27.393    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-:
00:19:27.393    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2
00:19:27.393    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<'
00:19:27.393    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2
00:19:27.393    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1
00:19:27.393    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:19:27.393    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in
00:19:27.393    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1
00:19:27.393    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 ))
00:19:27.394    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:19:27.394     04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1
00:19:27.394     04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1
00:19:27.394     04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:19:27.394     04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1
00:19:27.394    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1
00:19:27.394     04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2
00:19:27.394     04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2
00:19:27.394     04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:19:27.394     04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2
00:19:27.394    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2
00:19:27.394    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:19:27.394    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:19:27.394    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0
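The `cmp_versions` trace above splits both version strings on `.`, `-` and `:`, then compares component by component as integers (`1.15 < 2` because `1 < 2` in the first component). A compact re-implementation of that "less than" check, a sketch of the algorithm rather than the exact SPDK helper, assuming purely numeric components:

```shell
# Numeric dotted-version "less than", mirroring the cmp_versions walk above:
# split on .-:, pad the shorter version with zeros, compare left to right.
version_lt() {
    local -a ver1 ver2
    local v len
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        # Missing components compare as 0, so "2" behaves like "2.0"
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1   # equal versions are not "less than"
}
```

The integer comparison is the important detail: `1.2 < 1.10` holds here, whereas a naive string comparison would get it backwards.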
00:19:27.394    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:19:27.394    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:19:27.394  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:27.394  		--rc genhtml_branch_coverage=1
00:19:27.394  		--rc genhtml_function_coverage=1
00:19:27.394  		--rc genhtml_legend=1
00:19:27.394  		--rc geninfo_all_blocks=1
00:19:27.394  		--rc geninfo_unexecuted_blocks=1
00:19:27.394  		
00:19:27.394  		'
00:19:27.394    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:19:27.394  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:27.394  		--rc genhtml_branch_coverage=1
00:19:27.394  		--rc genhtml_function_coverage=1
00:19:27.394  		--rc genhtml_legend=1
00:19:27.394  		--rc geninfo_all_blocks=1
00:19:27.394  		--rc geninfo_unexecuted_blocks=1
00:19:27.394  		
00:19:27.394  		'
00:19:27.394    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:19:27.394  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:27.394  		--rc genhtml_branch_coverage=1
00:19:27.394  		--rc genhtml_function_coverage=1
00:19:27.394  		--rc genhtml_legend=1
00:19:27.394  		--rc geninfo_all_blocks=1
00:19:27.394  		--rc geninfo_unexecuted_blocks=1
00:19:27.394  		
00:19:27.394  		'
00:19:27.394    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:19:27.394  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:27.394  		--rc genhtml_branch_coverage=1
00:19:27.394  		--rc genhtml_function_coverage=1
00:19:27.394  		--rc genhtml_legend=1
00:19:27.394  		--rc geninfo_all_blocks=1
00:19:27.394  		--rc geninfo_unexecuted_blocks=1
00:19:27.394  		
00:19:27.394  		'
00:19:27.394   04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:19:27.394     04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s
00:19:27.394    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:19:27.394    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:19:27.394    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:19:27.394    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:19:27.394    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:19:27.394    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:19:27.394    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:19:27.394    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:19:27.394    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:19:27.394     04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:19:27.394    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:19:27.394    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:19:27.394    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:19:27.394    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:19:27.394    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:19:27.394    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
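The `nvme gen-hostnqn` call above produces the standard UUID-based host NQN, `nqn.2014-08.org.nvmexpress:uuid:<uuid>`. An equivalent sketch without nvme-cli, using the kernel's random-UUID source with `uuidgen` as a fallback (an illustrative helper, not part of nvmf/common.sh):

```shell
# Generate a UUID-based host NQN in the same format nvme gen-hostnqn emits.
# /proc/sys/kernel/random/uuid is Linux-specific; uuidgen is the fallback.
gen_hostnqn() {
    local uuid
    if [ -r /proc/sys/kernel/random/uuid ]; then
        uuid=$(cat /proc/sys/kernel/random/uuid)
    else
        uuid=$(uuidgen)
    fi
    echo "nqn.2014-08.org.nvmexpress:uuid:$uuid"
}

gen_hostnqn
```

The script then reuses the UUID part as `NVME_HOSTID`, which is why the two values on the trace lines above share the same suffix.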
00:19:27.394    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:19:27.394     04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob
00:19:27.394     04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:19:27.394     04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:19:27.394     04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:19:27.394      04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:27.394      04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:27.394      04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:27.394      04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH
00:19:27.394      04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:27.394    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0
00:19:27.394    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:19:27.394    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:19:27.394    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:19:27.394    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:19:27.394    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:19:27.394    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:19:27.394  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:19:27.394    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:19:27.394    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:19:27.394    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0
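The `[: : integer expression expected` message above comes from testing an empty variable with `-eq` (`'[' '' -eq 1 ']'` in the trace): `[` requires both operands of `-eq` to be integers. A common defensive pattern, shown here as a general sketch rather than a patch to nvmf/common.sh, is to default empty or unset values to 0 before the numeric test:

```shell
# Guarding a numeric test against empty/unset input: ${var:-0} collapses
# both the unset and the empty-string cases to 0, so [ ... -eq 1 ] is safe.
check_flag() {
    local flag=${1:-0}
    if [ "$flag" -eq 1 ]; then
        echo enabled
    else
        echo disabled
    fi
}

check_flag ""    # a bare [ "" -eq 1 ] would print the error seen above
check_flag 1
```

Here the test script tolerates the noise because the `[` failure just falls through to the `else` branch, but the stderr line still lands in the log.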
00:19:27.394   04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit
00:19:27.394   04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:19:27.394   04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:19:27.394   04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs
00:19:27.394   04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no
00:19:27.394   04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns
00:19:27.394   04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:27.394   04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:19:27.394    04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:27.394   04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:19:27.394   04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:19:27.394   04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable
00:19:27.394   04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:19:29.922   04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:19:29.922   04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=()
00:19:29.922   04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs
00:19:29.922   04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=()
00:19:29.922   04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:19:29.922   04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=()
00:19:29.922   04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers
00:19:29.922   04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=()
00:19:29.922   04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs
00:19:29.922   04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=()
00:19:29.922   04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810
00:19:29.922   04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=()
00:19:29.922   04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722
00:19:29.922   04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=()
00:19:29.922   04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx
00:19:29.922   04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:19:29.922   04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:19:29.922   04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:19:29.922   04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:19:29.922   04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:19:29.922   04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:19:29.922   04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:19:29.922   04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:19:29.922   04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:19:29.922  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:19:29.922  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]]
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:19:29.922  Found net devices under 0000:0a:00.0: cvl_0_0
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]]
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:19:29.922  Found net devices under 0000:0a:00.1: cvl_0_1
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:19:29.922   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:19:29.923  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:29.923  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.343 ms
00:19:29.923  
00:19:29.923  --- 10.0.0.2 ping statistics ---
00:19:29.923  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:29.923  rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:19:29.923  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:29.923  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms
00:19:29.923  
00:19:29.923  --- 10.0.0.1 ping statistics ---
00:19:29.923  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:29.923  rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=265598
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 265598
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 265598 ']'
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:29.923  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:19:29.923  [2024-12-09 04:09:58.253687] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:19:29.923  [2024-12-09 04:09:58.253759] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:19:29.923  [2024-12-09 04:09:58.325542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:29.923  [2024-12-09 04:09:58.382780] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:19:29.923  [2024-12-09 04:09:58.382852] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:19:29.923  [2024-12-09 04:09:58.382866] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:19:29.923  [2024-12-09 04:09:58.382877] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:19:29.923  [2024-12-09 04:09:58.382887] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:19:29.923  [2024-12-09 04:09:58.383507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable
00:19:29.923   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:19:30.181   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:19:30.181   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0
00:19:30.181   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:19:30.181   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0
00:19:30.181   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:30.181   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:19:30.181   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:30.181   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192
00:19:30.181   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:30.181   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:19:30.181   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:30.181   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init
00:19:30.181   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:30.181   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:19:30.181   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:30.181   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512
00:19:30.181   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:30.181   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:19:30.181  Malloc0
00:19:30.181   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:30.181   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24
00:19:30.181   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:30.181   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:19:30.181  [2024-12-09 04:09:58.626151] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:19:30.181   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:30.181   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
00:19:30.181   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:30.181   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:19:30.181   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:30.181   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
00:19:30.181   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:30.181   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:19:30.181   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:30.181   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:19:30.181   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:30.181   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:19:30.181  [2024-12-09 04:09:58.650429] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:30.181   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:30.181   04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:19:30.181  [2024-12-09 04:09:58.735408] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem.  This behavior is deprecated and will be removed in a future release.
00:19:32.079  Initializing NVMe Controllers
00:19:32.079  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:19:32.079  Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:19:32.079  Initialization complete. Launching workers.
00:19:32.079  ========================================================
00:19:32.079                                                                                                               Latency(us)
00:19:32.079  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:19:32.079  TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core  0:      38.86       4.86  107223.13   31912.62  191529.35
00:19:32.079  ========================================================
00:19:32.079  Total                                                                    :      38.86       4.86  107223.13   31912.62  191529.35
00:19:32.079  
00:19:32.079    04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats
00:19:32.079    04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'
00:19:32.079    04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:32.079    04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:19:32.079    04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:32.079   04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=598
00:19:32.079   04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 598 -eq 0 ]]
00:19:32.079   04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:19:32.079   04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini
00:19:32.079   04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup
00:19:32.079   04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync
00:19:32.079   04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:19:32.080   04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e
00:19:32.080   04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20}
00:19:32.080   04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:19:32.080  rmmod nvme_tcp
00:19:32.080  rmmod nvme_fabrics
00:19:32.080  rmmod nvme_keyring
00:19:32.080   04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:19:32.080   04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e
00:19:32.080   04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0
00:19:32.080   04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 265598 ']'
00:19:32.080   04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 265598
00:19:32.080   04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 265598 ']'
00:19:32.080   04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 265598
00:19:32.080    04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname
00:19:32.080   04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:32.080    04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 265598
00:19:32.080   04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:32.080   04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:32.080   04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 265598'
00:19:32.080  killing process with pid 265598
00:19:32.080   04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 265598
00:19:32.080   04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 265598
00:19:32.080   04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:19:32.080   04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:19:32.080   04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:19:32.080   04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr
00:19:32.080   04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save
00:19:32.080   04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:19:32.080   04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore
00:19:32.080   04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:19:32.080   04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:19:32.080   04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:32.080   04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:19:32.080    04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:34.634   04:10:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:19:34.634  
00:19:34.634  real	0m6.955s
00:19:34.634  user	0m3.289s
00:19:34.634  sys	0m2.088s
00:19:34.634   04:10:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:34.634   04:10:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:19:34.634  ************************************
00:19:34.634  END TEST nvmf_wait_for_buf
00:19:34.634  ************************************
00:19:34.634   04:10:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']'
00:19:34.634   04:10:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]]
00:19:34.634   04:10:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']'
00:19:34.634   04:10:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs
00:19:34.634   04:10:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable
00:19:34.634   04:10:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=()
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=()
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=()
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=()
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=()
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=()
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=()
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:19:36.539  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:19:36.539  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]]
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:19:36.539  Found net devices under 0000:0a:00.0: cvl_0_0
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]]
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:19:36.539  Found net devices under 0000:0a:00.1: cvl_0_1
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 ))
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:19:36.539   04:10:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:36.540   04:10:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:19:36.540  ************************************
00:19:36.540  START TEST nvmf_perf_adq
00:19:36.540  ************************************
00:19:36.540   04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp
00:19:36.540  * Looking for test storage...
00:19:36.540  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:19:36.540     04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version
00:19:36.540     04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-:
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-:
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<'
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 ))
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:19:36.540     04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1
00:19:36.540     04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1
00:19:36.540     04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:19:36.540     04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1
00:19:36.540     04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2
00:19:36.540     04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2
00:19:36.540     04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:19:36.540     04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:19:36.540  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:36.540  		--rc genhtml_branch_coverage=1
00:19:36.540  		--rc genhtml_function_coverage=1
00:19:36.540  		--rc genhtml_legend=1
00:19:36.540  		--rc geninfo_all_blocks=1
00:19:36.540  		--rc geninfo_unexecuted_blocks=1
00:19:36.540  		
00:19:36.540  		'
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:19:36.540  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:36.540  		--rc genhtml_branch_coverage=1
00:19:36.540  		--rc genhtml_function_coverage=1
00:19:36.540  		--rc genhtml_legend=1
00:19:36.540  		--rc geninfo_all_blocks=1
00:19:36.540  		--rc geninfo_unexecuted_blocks=1
00:19:36.540  		
00:19:36.540  		'
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:19:36.540  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:36.540  		--rc genhtml_branch_coverage=1
00:19:36.540  		--rc genhtml_function_coverage=1
00:19:36.540  		--rc genhtml_legend=1
00:19:36.540  		--rc geninfo_all_blocks=1
00:19:36.540  		--rc geninfo_unexecuted_blocks=1
00:19:36.540  		
00:19:36.540  		'
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:19:36.540  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:36.540  		--rc genhtml_branch_coverage=1
00:19:36.540  		--rc genhtml_function_coverage=1
00:19:36.540  		--rc genhtml_legend=1
00:19:36.540  		--rc geninfo_all_blocks=1
00:19:36.540  		--rc geninfo_unexecuted_blocks=1
00:19:36.540  		
00:19:36.540  		'
00:19:36.540   04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:19:36.540     04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:19:36.540     04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:19:36.540    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:19:36.540     04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob
00:19:36.540     04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:19:36.540     04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:19:36.540     04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:19:36.541      04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:36.541      04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:36.541      04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:36.541      04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH
00:19:36.541      04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:36.541    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0
00:19:36.541    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:19:36.541    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:19:36.541    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:19:36.541    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:19:36.541    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:19:36.541    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:19:36.541  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:19:36.541    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:19:36.541    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:19:36.541    04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0
00:19:36.541   04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs
00:19:36.541   04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable
00:19:36.541   04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=()
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=()
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=()
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=()
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=()
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=()
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=()
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:19:38.442  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:19:38.442  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]]
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:19:38.442  Found net devices under 0000:0a:00.0: cvl_0_0
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]]
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:19:38.442  Found net devices under 0000:0a:00.1: cvl_0_1
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 ))
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver
00:19:38.442   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio
00:19:38.443   04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice
00:19:39.377   04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice
00:19:43.567   04:10:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5
00:19:47.778   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit
00:19:47.778   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:19:47.778   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:19:47.778   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs
00:19:47.778   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no
00:19:47.778   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns
00:19:47.778   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:47.778   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:19:47.778    04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:47.778   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:19:47.778   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:19:47.778   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable
00:19:47.778   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:19:47.778   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:19:47.778   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=()
00:19:47.778   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs
00:19:47.778   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=()
00:19:47.778   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:19:47.778   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=()
00:19:47.778   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=()
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=()
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=()
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=()
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:19:47.779  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:19:47.779  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]]
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:19:47.779  Found net devices under 0000:0a:00.0: cvl_0_0
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]]
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:19:47.779  Found net devices under 0000:0a:00.1: cvl_0_1
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:19:47.779   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:19:48.038   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:19:48.038   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:19:48.038   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:19:48.038   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:19:48.038   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:19:48.038   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:19:48.038   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:19:48.038  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:48.038  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.406 ms
00:19:48.038  
00:19:48.038  --- 10.0.0.2 ping statistics ---
00:19:48.038  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:48.038  rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms
00:19:48.038   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:19:48.038  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:48.038  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms
00:19:48.038  
00:19:48.038  --- 10.0.0.1 ping statistics ---
00:19:48.038  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:48.038  rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms
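The nvmf_tcp_init trace above builds a two-port loopback topology: flush both ice ports, create a network namespace, move the target port into it, assign 10.0.0.1/24 and 10.0.0.2/24, bring the links up, open TCP/4420 with a comment-tagged iptables rule, and ping both directions. A hypothetical standalone sketch of those steps (not the real nvmf/common.sh; interface names and IPs are copied from the log) — it emits the commands rather than running them, so it can be reviewed dry and piped to `sudo sh` only on a machine that actually has the two ports:

```shell
# Sketch of the TCP test topology set up in the trace above (hypothetical
# helper, not the real nvmf/common.sh). Prints the commands instead of
# executing them; apply with: setup_tcp_ns | sudo sh
setup_tcp_ns() {
  local tgt_if=cvl_0_0 ini_if=cvl_0_1        # port names from the log
  local ns=cvl_0_0_ns_spdk
  local tgt_ip=10.0.0.2 ini_ip=10.0.0.1

  echo "ip -4 addr flush $tgt_if"
  echo "ip -4 addr flush $ini_if"
  echo "ip netns add $ns"
  echo "ip link set $tgt_if netns $ns"          # target port into the netns
  echo "ip addr add $ini_ip/24 dev $ini_if"
  echo "ip netns exec $ns ip addr add $tgt_ip/24 dev $tgt_if"
  echo "ip link set $ini_if up"
  echo "ip netns exec $ns ip link set $tgt_if up"
  echo "ip netns exec $ns ip link set lo up"
  # Comment-tagged so teardown can do: iptables-save | grep -v SPDK_NVMF
  echo "iptables -I INPUT 1 -i $ini_if -p tcp --dport 4420 -j ACCEPT" \
       "-m comment --comment SPDK_NVMF"
  echo "ping -c 1 $tgt_ip"                      # initiator -> target
  echo "ip netns exec $ns ping -c 1 $ini_ip"    # target -> initiator
}

setup_tcp_ns
```

Running the target under `ip netns exec` while the initiator stays in the root namespace is what lets a single host exercise real NIC hardware on both ends of the TCP connection.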
00:19:48.038   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:48.038   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0
00:19:48.038   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:19:48.038   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:48.038   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:19:48.038   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:19:48.038   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:48.038   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:19:48.038   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:19:48.038   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc
00:19:48.038   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:19:48.038   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable
00:19:48.038   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:19:48.038   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=270568
00:19:48.038   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:19:48.038   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 270568
00:19:48.038   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 270568 ']'
00:19:48.038   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:48.038   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:48.038   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:48.038  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:48.038   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:48.038   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:19:48.038  [2024-12-09 04:10:16.486551] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:19:48.038  [2024-12-09 04:10:16.486654] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:19:48.038  [2024-12-09 04:10:16.559601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:19:48.295  [2024-12-09 04:10:16.619674] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:19:48.295  [2024-12-09 04:10:16.619725] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:19:48.295  [2024-12-09 04:10:16.619748] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:19:48.295  [2024-12-09 04:10:16.619760] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:19:48.295  [2024-12-09 04:10:16.619770] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:19:48.295  [2024-12-09 04:10:16.621361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:19:48.295  [2024-12-09 04:10:16.621387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:19:48.295  [2024-12-09 04:10:16.621411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:19:48.295  [2024-12-09 04:10:16.621414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:48.295   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:48.295   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0
00:19:48.295   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:19:48.295   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable
00:19:48.295   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:19:48.295   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:19:48.295   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0
00:19:48.295    04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl
00:19:48.295    04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:48.295    04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name
00:19:48.295    04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:19:48.295    04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:48.295   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix
00:19:48.295   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
00:19:48.295   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:48.295   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:19:48.295   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:48.295   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init
00:19:48.295   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:48.295   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:19:48.568   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:48.568   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
00:19:48.568   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:48.568   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:19:48.568  [2024-12-09 04:10:16.882769] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:19:48.568   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:48.568   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:19:48.568   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:48.568   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:19:48.568  Malloc1
00:19:48.568   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:48.568   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:19:48.568   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:48.568   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:19:48.568   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:48.568   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:19:48.568   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:48.568   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:19:48.568   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:48.568   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:48.568   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:48.568   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:19:48.568  [2024-12-09 04:10:16.941008] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:48.568   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
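The adq_configure_nvmf_target block above (perf_adq.sh@42-49) is a fixed RPC sequence against the target started with --wait-for-rpc: tune the posix sock impl, finish framework init, create the TCP transport, and publish one malloc namespace on a listener. A sketch replaying that sequence; the RPC commands and arguments are taken verbatim from the trace, but the rpc.py invocation path is an assumption, and the default here is a dry-run echo:

```shell
# Replay of the adq_configure_nvmf_target RPC sequence traced above.
# RPC defaults to a dry-run echo; set RPC="scripts/rpc.py" (path assumed)
# against a live --wait-for-rpc target to actually apply it.
RPC=${RPC:-echo rpc.py}

configure_nvmf_target() {
  # Placement-id steering off (the "0" argument in the log), zerocopy send on.
  $RPC sock_impl_set_options -i posix \
       --enable-placement-id 0 --enable-zerocopy-send-server
  $RPC framework_start_init                     # leave --wait-for-rpc state
  $RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
  $RPC bdev_malloc_create 64 512 -b Malloc1     # 64 MiB bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
       -t tcp -a 10.0.0.2 -s 4420
}

configure_nvmf_target
```

Starting the target with --wait-for-rpc is what makes the sock_impl_set_options call effective: socket options must be set before framework_start_init brings the subsystems up.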
00:19:48.568   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=270599
00:19:48.568   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 	subnqn:nqn.2016-06.io.spdk:cnode1'
00:19:48.568   04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2
00:19:50.463    04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats
00:19:50.463    04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:50.463    04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:19:50.463    04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:50.463   04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{
00:19:50.463  "tick_rate": 2700000000,
00:19:50.463  "poll_groups": [
00:19:50.463  {
00:19:50.463  "name": "nvmf_tgt_poll_group_000",
00:19:50.463  "admin_qpairs": 1,
00:19:50.463  "io_qpairs": 1,
00:19:50.463  "current_admin_qpairs": 1,
00:19:50.463  "current_io_qpairs": 1,
00:19:50.463  "pending_bdev_io": 0,
00:19:50.463  "completed_nvme_io": 18468,
00:19:50.463  "transports": [
00:19:50.463  {
00:19:50.463  "trtype": "TCP"
00:19:50.463  }
00:19:50.463  ]
00:19:50.463  },
00:19:50.463  {
00:19:50.463  "name": "nvmf_tgt_poll_group_001",
00:19:50.463  "admin_qpairs": 0,
00:19:50.463  "io_qpairs": 1,
00:19:50.463  "current_admin_qpairs": 0,
00:19:50.463  "current_io_qpairs": 1,
00:19:50.463  "pending_bdev_io": 0,
00:19:50.463  "completed_nvme_io": 19912,
00:19:50.463  "transports": [
00:19:50.463  {
00:19:50.463  "trtype": "TCP"
00:19:50.463  }
00:19:50.463  ]
00:19:50.463  },
00:19:50.463  {
00:19:50.463  "name": "nvmf_tgt_poll_group_002",
00:19:50.463  "admin_qpairs": 0,
00:19:50.463  "io_qpairs": 1,
00:19:50.463  "current_admin_qpairs": 0,
00:19:50.463  "current_io_qpairs": 1,
00:19:50.463  "pending_bdev_io": 0,
00:19:50.463  "completed_nvme_io": 19982,
00:19:50.463  "transports": [
00:19:50.463  {
00:19:50.463  "trtype": "TCP"
00:19:50.463  }
00:19:50.463  ]
00:19:50.463  },
00:19:50.463  {
00:19:50.463  "name": "nvmf_tgt_poll_group_003",
00:19:50.463  "admin_qpairs": 0,
00:19:50.463  "io_qpairs": 1,
00:19:50.463  "current_admin_qpairs": 0,
00:19:50.463  "current_io_qpairs": 1,
00:19:50.463  "pending_bdev_io": 0,
00:19:50.463  "completed_nvme_io": 19652,
00:19:50.463  "transports": [
00:19:50.463  {
00:19:50.463  "trtype": "TCP"
00:19:50.463  }
00:19:50.463  ]
00:19:50.463  }
00:19:50.463  ]
00:19:50.463  }'
00:19:50.463    04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length'
00:19:50.463    04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l
00:19:50.463   04:10:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4
00:19:50.463   04:10:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]]
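The check at perf_adq.sh@86-87 counts poll groups that currently own exactly one I/O qpair: with the connections spread correctly, each of the four poll groups (one per core in the 0xF mask) services one of the perf tool's four connections, so the count must equal 4. A self-contained replay of that jq filter, with the stats trimmed to the fields the filter reads (the `length` output is irrelevant; `wc -l` just counts one line per matching group):

```shell
# Replay of the perf_adq poll-group check: count poll groups with exactly
# one active I/O qpair; the test expects one per core (4 here).
nvmf_stats='{"poll_groups":[
  {"name":"nvmf_tgt_poll_group_000","current_io_qpairs":1},
  {"name":"nvmf_tgt_poll_group_001","current_io_qpairs":1},
  {"name":"nvmf_tgt_poll_group_002","current_io_qpairs":1},
  {"name":"nvmf_tgt_poll_group_003","current_io_qpairs":1}]}'

count=$(echo "$nvmf_stats" \
  | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
  | wc -l)
echo "$count"
```

If ADQ steering were broken, several qpairs would land on one poll group, some groups would report current_io_qpairs of 0 or 2+, and the count would drop below 4, tripping the `[[ $count -ne 4 ]]` check.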
00:19:50.463   04:10:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 270599
00:19:58.571  Initializing NVMe Controllers
00:19:58.571  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:19:58.571  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:19:58.572  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:19:58.572  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:19:58.572  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:19:58.572  Initialization complete. Launching workers.
00:19:58.572  ========================================================
00:19:58.572                                                                                                               Latency(us)
00:19:58.572  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:19:58.572  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  4:   10368.30      40.50    6172.58    1972.76   10503.29
00:19:58.572  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  5:   10514.70      41.07    6088.28    2453.35   10583.66
00:19:58.572  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  6:   10552.20      41.22    6065.59    2391.77    9592.99
00:19:58.572  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  7:    9713.00      37.94    6589.16    2639.28   11347.30
00:19:58.572  ========================================================
00:19:58.572  Total                                                                    :   41148.18     160.74    6221.94    1972.76   11347.30
00:19:58.572  
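The MiB/s column in the table above is just IOPS times the 4 KiB I/O size requested with `-o 4096`: core 4's 10368.30 IOPS × 4096 B ÷ 2^20 ≈ 40.50 MiB/s, and the Total row is derived from the summed IOPS. The same arithmetic as a one-liner over the five IOPS values from the table:

```shell
# Derive the MiB/s column from IOPS at the 4 KiB (-o 4096) I/O size:
# MiB/s = IOPS * 4096 / 2^20. Values are the four cores plus the total.
mibs=$(for iops in 10368.30 10514.70 10552.20 9713.00 41148.18; do
  awk -v i="$iops" 'BEGIN { printf "%.2f\n", i * 4096 / (1024 * 1024) }'
done)
echo "$mibs"
```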
00:19:58.572   04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini
00:19:58.572   04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:19:58.572   04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:19:58.572   04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:19:58.572   04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:19:58.572   04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:19:58.572   04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:19:58.572  rmmod nvme_tcp
00:19:58.572  rmmod nvme_fabrics
00:19:58.572  rmmod nvme_keyring
00:19:58.572   04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:19:58.572   04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:19:58.572   04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:19:58.572   04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 270568 ']'
00:19:58.572   04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 270568
00:19:58.572   04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 270568 ']'
00:19:58.572   04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 270568
00:19:58.572    04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname
00:19:58.572   04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:58.572    04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 270568
00:19:58.829   04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:58.829   04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:58.829   04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 270568'
00:19:58.829  killing process with pid 270568
00:19:58.829   04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 270568
00:19:58.829   04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 270568
00:19:59.087   04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:19:59.087   04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:19:59.087   04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:19:59.087   04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr
00:19:59.087   04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:19:59.087   04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save
00:19:59.087   04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore
00:19:59.087   04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:19:59.087   04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns
00:19:59.088   04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:59.088   04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:19:59.088    04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:00.989   04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:20:00.989   04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver
00:20:00.989   04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio
00:20:00.989   04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice
00:20:01.925   04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice
00:20:04.455   04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5
00:20:09.728   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit
00:20:09.728   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:20:09.728   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:20:09.728   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs
00:20:09.728   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no
00:20:09.728   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns
00:20:09.728   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:09.728   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:20:09.728    04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:09.728   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:20:09.728   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:20:09.728   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable
00:20:09.728   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:20:09.728   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:20:09.728   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=()
00:20:09.728   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs
00:20:09.728   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=()
00:20:09.728   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:20:09.728   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=()
00:20:09.728   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers
00:20:09.728   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=()
00:20:09.728   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs
00:20:09.728   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=()
00:20:09.728   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810
00:20:09.728   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=()
00:20:09.728   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722
00:20:09.728   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=()
00:20:09.728   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx
00:20:09.728   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:20:09.728   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:20:09.728   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:20:09.728   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:20:09.728   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:20:09.728   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:20:09.729  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:20:09.729  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]]
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:20:09.729  Found net devices under 0000:0a:00.0: cvl_0_0
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]]
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:20:09.729  Found net devices under 0000:0a:00.1: cvl_0_1
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:20:09.729  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:09.729  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms
00:20:09.729  
00:20:09.729  --- 10.0.0.2 ping statistics ---
00:20:09.729  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:09.729  rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:09.729  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:09.729  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms
00:20:09.729  
00:20:09.729  --- 10.0.0.1 ping statistics ---
00:20:09.729  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:09.729  rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1
00:20:09.729  net.core.busy_poll = 1
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1
00:20:09.729  net.core.busy_read = 1
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress
00:20:09.729   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
00:20:09.730   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0
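The trace above (perf_adq.sh lines 22-38) configures the target NIC for ADQ before the nvmf target starts. The sequence can be summarized as a standalone sketch; this is an illustrative script, not the test's actual perf_adq.sh, and it requires root plus an Intel E810 (ice-driver) interface. The interface name, address, and port mirror the log's values and are placeholders for other setups.

```shell
#!/usr/bin/env bash
# Sketch of the ADQ NIC setup sequence shown in the trace above.
# Assumes root, an Intel E810 (ice) NIC, and tc/ethtool/sysctl available.
set -euo pipefail

IFACE=cvl_0_0   # values taken from the log; adjust for your system
ADDR=10.0.0.2
PORT=4420

# Enable hardware TC offload and disable packet-inspect optimization.
ethtool --offload "$IFACE" hw-tc-offload on
ethtool --set-priv-flags "$IFACE" channel-pkt-inspect-optimize off

# Busy polling so receiving threads poll NIC queues instead of sleeping.
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1

# Two traffic classes: TC0 = queues 0-1 (default), TC1 = queues 2-3.
tc qdisc add dev "$IFACE" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
tc qdisc add dev "$IFACE" ingress

# Steer NVMe/TCP traffic (dst ADDR:PORT) into TC1 in hardware (skip_sw).
tc filter add dev "$IFACE" protocol ip parent ffff: prio 1 flower \
    dst_ip "$ADDR"/32 ip_proto tcp dst_port "$PORT" skip_sw hw_tc 1
```

In the log these commands run inside the `cvl_0_0_ns_spdk` namespace (`ip netns exec …`) because the target interface was moved there during `nvmf_tcp_init`.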
00:20:09.730   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc
00:20:09.730   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:20:09.730   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:09.730   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:20:09.730   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=273228
00:20:09.730   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:20:09.730   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 273228
00:20:09.730   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 273228 ']'
00:20:09.730   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:09.730   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:09.730   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:09.730  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:09.730   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:09.730   04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:20:09.730  [2024-12-09 04:10:37.901038] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:20:09.730  [2024-12-09 04:10:37.901119] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:09.730  [2024-12-09 04:10:37.976837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:20:09.730  [2024-12-09 04:10:38.033831] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:09.730  [2024-12-09 04:10:38.033887] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:09.730  [2024-12-09 04:10:38.033910] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:20:09.730  [2024-12-09 04:10:38.033921] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:20:09.730  [2024-12-09 04:10:38.033931] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:20:09.730  [2024-12-09 04:10:38.035358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:20:09.730  [2024-12-09 04:10:38.035423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:20:09.730  [2024-12-09 04:10:38.035485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:20:09.730  [2024-12-09 04:10:38.035489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:09.730   04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:09.730   04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0
00:20:09.730   04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:20:09.730   04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:09.730   04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:20:09.730   04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:09.730   04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1
00:20:09.730    04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl
00:20:09.730    04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name
00:20:09.730    04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:09.730    04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:20:09.730    04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:09.730   04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix
00:20:09.730   04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
00:20:09.730   04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:09.730   04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:20:09.730   04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:09.730   04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init
00:20:09.730   04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:09.730   04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:20:09.988   04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:09.988   04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
00:20:09.988   04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:09.988   04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:20:09.988  [2024-12-09 04:10:38.311085] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:09.988   04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:09.988   04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:20:09.988   04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:09.988   04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:20:09.989  Malloc1
00:20:09.989   04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:09.989   04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:20:09.989   04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:09.989   04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:20:09.989   04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:09.989   04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:20:09.989   04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:09.989   04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:20:09.989   04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:09.989   04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:09.989   04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:09.989   04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:20:09.989  [2024-12-09 04:10:38.377474] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:09.989   04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:09.989   04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=273372
00:20:09.989   04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2
00:20:09.989   04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 	subnqn:nqn.2016-06.io.spdk:cnode1'
00:20:11.889    04:10:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats
00:20:11.889    04:10:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:11.889    04:10:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:20:11.889    04:10:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:11.889   04:10:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{
00:20:11.889  "tick_rate": 2700000000,
00:20:11.889  "poll_groups": [
00:20:11.889  {
00:20:11.889  "name": "nvmf_tgt_poll_group_000",
00:20:11.889  "admin_qpairs": 1,
00:20:11.889  "io_qpairs": 1,
00:20:11.889  "current_admin_qpairs": 1,
00:20:11.889  "current_io_qpairs": 1,
00:20:11.889  "pending_bdev_io": 0,
00:20:11.889  "completed_nvme_io": 25025,
00:20:11.889  "transports": [
00:20:11.889  {
00:20:11.889  "trtype": "TCP"
00:20:11.889  }
00:20:11.889  ]
00:20:11.889  },
00:20:11.889  {
00:20:11.889  "name": "nvmf_tgt_poll_group_001",
00:20:11.889  "admin_qpairs": 0,
00:20:11.889  "io_qpairs": 3,
00:20:11.889  "current_admin_qpairs": 0,
00:20:11.889  "current_io_qpairs": 3,
00:20:11.889  "pending_bdev_io": 0,
00:20:11.889  "completed_nvme_io": 24837,
00:20:11.889  "transports": [
00:20:11.889  {
00:20:11.889  "trtype": "TCP"
00:20:11.889  }
00:20:11.889  ]
00:20:11.889  },
00:20:11.889  {
00:20:11.889  "name": "nvmf_tgt_poll_group_002",
00:20:11.889  "admin_qpairs": 0,
00:20:11.889  "io_qpairs": 0,
00:20:11.889  "current_admin_qpairs": 0,
00:20:11.889  "current_io_qpairs": 0,
00:20:11.889  "pending_bdev_io": 0,
00:20:11.889  "completed_nvme_io": 0,
00:20:11.889  "transports": [
00:20:11.889  {
00:20:11.889  "trtype": "TCP"
00:20:11.889  }
00:20:11.889  ]
00:20:11.889  },
00:20:11.889  {
00:20:11.889  "name": "nvmf_tgt_poll_group_003",
00:20:11.889  "admin_qpairs": 0,
00:20:11.889  "io_qpairs": 0,
00:20:11.889  "current_admin_qpairs": 0,
00:20:11.889  "current_io_qpairs": 0,
00:20:11.889  "pending_bdev_io": 0,
00:20:11.889  "completed_nvme_io": 0,
00:20:11.889  "transports": [
00:20:11.889  {
00:20:11.889  "trtype": "TCP"
00:20:11.889  }
00:20:11.889  ]
00:20:11.889  }
00:20:11.889  ]
00:20:11.889  }'
00:20:11.889    04:10:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length'
00:20:11.889    04:10:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l
00:20:11.889   04:10:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2
00:20:11.889   04:10:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]]
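The pass/fail check above counts poll groups whose `current_io_qpairs` is 0 and fails if fewer than 2 are idle: with ADQ steering working, I/O should be confined to a subset of the four poll groups. A minimal stand-alone reproduction of that count, using `grep -c` in place of the script's jq filter and hypothetical inline stats in the shape of `nvmf_get_stats` output:

```shell
# Count idle poll groups (current_io_qpairs == 0) in nvmf_get_stats-style
# JSON. grep -c is a stand-in for perf_adq.sh's jq select(...) | wc -l;
# the stats values below are illustrative, not from the actual run.
stats='{"poll_groups":[
  {"name":"nvmf_tgt_poll_group_000","current_io_qpairs":1},
  {"name":"nvmf_tgt_poll_group_001","current_io_qpairs":3},
  {"name":"nvmf_tgt_poll_group_002","current_io_qpairs":0},
  {"name":"nvmf_tgt_poll_group_003","current_io_qpairs":0}]}'
count=$(printf '%s\n' "$stats" | grep -c '"current_io_qpairs":0')
echo "$count"
# The log's check [[ $count -lt 2 ]] would abort the test; here two poll
# groups are idle, so the run proceeds to wait for spdk_nvme_perf.
```

This mirrors why the result table later shows I/O on only some cores: ADQ pinned the NVMe/TCP queues to the traffic-class queues configured earlier.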
00:20:11.889   04:10:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 273372
00:20:19.999  Initializing NVMe Controllers
00:20:19.999  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:19.999  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:20:19.999  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:20:19.999  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:20:19.999  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:20:19.999  Initialization complete. Launching workers.
00:20:19.999  ========================================================
00:20:19.999                                                                                                               Latency(us)
00:20:19.999  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:20:19.999  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  4:    4294.90      16.78   14915.66    1706.64   61750.68
00:20:19.999  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  5:    4388.80      17.14   14596.14    2585.73   62153.85
00:20:19.999  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  6:    4590.00      17.93   13955.61    1687.12   61510.76
00:20:19.999  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  7:   13405.00      52.36    4774.37    2920.53   45613.32
00:20:19.999  ========================================================
00:20:19.999  Total                                                                    :   26678.69     104.21    9602.32    1687.12   62153.85
00:20:19.999  
00:20:19.999   04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini
00:20:19.999   04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:20:19.999   04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:20:19.999   04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:20:19.999   04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:20:19.999   04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:19.999   04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:20:19.999  rmmod nvme_tcp
00:20:19.999  rmmod nvme_fabrics
00:20:19.999  rmmod nvme_keyring
00:20:20.256   04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:20:20.256   04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:20:20.256   04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:20:20.256   04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 273228 ']'
00:20:20.256   04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 273228
00:20:20.256   04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 273228 ']'
00:20:20.256   04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 273228
00:20:20.256    04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname
00:20:20.256   04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:20.256    04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 273228
00:20:20.256   04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:20.256   04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:20.256   04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 273228'
00:20:20.256  killing process with pid 273228
00:20:20.256   04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 273228
00:20:20.256   04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 273228
00:20:20.515   04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:20:20.515   04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:20:20.515   04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:20:20.515   04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr
00:20:20.515   04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save
00:20:20.515   04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore
00:20:20.515   04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:20:20.515   04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:20:20.515   04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns
00:20:20.515   04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:20.515   04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:20:20.515    04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:23.809   04:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:20:23.809   04:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT
00:20:23.809  
00:20:23.809  real	0m47.189s
00:20:23.809  user	2m39.663s
00:20:23.809  sys	0m10.695s
00:20:23.809   04:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:23.809   04:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:20:23.809  ************************************
00:20:23.809  END TEST nvmf_perf_adq
00:20:23.809  ************************************
00:20:23.809   04:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp
00:20:23.809   04:10:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:20:23.809   04:10:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:23.809   04:10:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:20:23.809  ************************************
00:20:23.809  START TEST nvmf_shutdown
00:20:23.809  ************************************
00:20:23.809   04:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp
00:20:23.809  * Looking for test storage...
00:20:23.809  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:20:23.809    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:20:23.809     04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version
00:20:23.809     04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:20:23.809    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:20:23.809    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:20:23.809    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l
00:20:23.809    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l
00:20:23.809    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-:
00:20:23.809    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1
00:20:23.809    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-:
00:20:23.809    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2
00:20:23.809    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<'
00:20:23.809    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 ))
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:23.810     04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1
00:20:23.810     04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1
00:20:23.810     04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:23.810     04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1
00:20:23.810     04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2
00:20:23.810     04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2
00:20:23.810     04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:23.810     04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:20:23.810  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:23.810  		--rc genhtml_branch_coverage=1
00:20:23.810  		--rc genhtml_function_coverage=1
00:20:23.810  		--rc genhtml_legend=1
00:20:23.810  		--rc geninfo_all_blocks=1
00:20:23.810  		--rc geninfo_unexecuted_blocks=1
00:20:23.810  		
00:20:23.810  		'
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:20:23.810  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:23.810  		--rc genhtml_branch_coverage=1
00:20:23.810  		--rc genhtml_function_coverage=1
00:20:23.810  		--rc genhtml_legend=1
00:20:23.810  		--rc geninfo_all_blocks=1
00:20:23.810  		--rc geninfo_unexecuted_blocks=1
00:20:23.810  		
00:20:23.810  		'
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:20:23.810  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:23.810  		--rc genhtml_branch_coverage=1
00:20:23.810  		--rc genhtml_function_coverage=1
00:20:23.810  		--rc genhtml_legend=1
00:20:23.810  		--rc geninfo_all_blocks=1
00:20:23.810  		--rc geninfo_unexecuted_blocks=1
00:20:23.810  		
00:20:23.810  		'
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:20:23.810  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:23.810  		--rc genhtml_branch_coverage=1
00:20:23.810  		--rc genhtml_function_coverage=1
00:20:23.810  		--rc genhtml_legend=1
00:20:23.810  		--rc geninfo_all_blocks=1
00:20:23.810  		--rc geninfo_unexecuted_blocks=1
00:20:23.810  		
00:20:23.810  		'
00:20:23.810   04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:20:23.810     04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:20:23.810     04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:20:23.810     04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob
00:20:23.810     04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:20:23.810     04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:23.810     04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:23.810      04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:23.810      04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:23.810      04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:23.810      04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH
00:20:23.810      04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:20:23.810  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:20:23.810    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0
00:20:23.810   04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64
00:20:23.810   04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:20:23.810   04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1
00:20:23.810   04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:20:23.810   04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:23.810   04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:20:23.810  ************************************
00:20:23.810  START TEST nvmf_shutdown_tc1
00:20:23.810  ************************************
00:20:23.810   04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1
00:20:23.810   04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget
00:20:23.810   04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit
00:20:23.810   04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:20:23.810   04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:20:23.810   04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs
00:20:23.810   04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no
00:20:23.810   04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns
00:20:23.811   04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:23.811   04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:20:23.811    04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:23.811   04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:20:23.811   04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:20:23.811   04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable
00:20:23.811   04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:20:25.712   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:20:25.712   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=()
00:20:25.712   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs
00:20:25.712   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=()
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=()
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=()
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=()
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=()
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=()
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:20:25.713  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:20:25.713  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:20:25.713  Found net devices under 0000:0a:00.0: cvl_0_0
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:20:25.713  Found net devices under 0000:0a:00.1: cvl_0_1
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:20:25.713   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:20:25.714  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:25.714  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms
00:20:25.714  
00:20:25.714  --- 10.0.0.2 ping statistics ---
00:20:25.714  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:25.714  rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:25.714  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:25.714  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms
00:20:25.714  
00:20:25.714  --- 10.0.0.1 ping statistics ---
00:20:25.714  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:25.714  rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=276686
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 276686
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 276686 ']'
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:25.714  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:25.714   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:20:25.971  [2024-12-09 04:10:54.334976] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:20:25.971  [2024-12-09 04:10:54.335051] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:25.971  [2024-12-09 04:10:54.407444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:20:25.971  [2024-12-09 04:10:54.463112] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:25.971  [2024-12-09 04:10:54.463173] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:25.971  [2024-12-09 04:10:54.463193] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:20:25.971  [2024-12-09 04:10:54.463204] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:20:25.971  [2024-12-09 04:10:54.463213] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:20:25.971  [2024-12-09 04:10:54.464796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:20:25.971  [2024-12-09 04:10:54.464861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:20:25.971  [2024-12-09 04:10:54.464976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:20:25.971  [2024-12-09 04:10:54.464980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:20:26.229   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:26.229   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0
00:20:26.229   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:20:26.229   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:26.229   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:20:26.229   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:26.229   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:20:26.229   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:26.229   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:20:26.229  [2024-12-09 04:10:54.614608] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:26.229   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:26.229   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:20:26.229   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:20:26.229   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:26.229   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:20:26.229   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:20:26.229   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:26.229   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat
00:20:26.229   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:26.229   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat
00:20:26.229   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:26.229   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat
00:20:26.229   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:26.229   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat
00:20:26.229   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:26.229   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat
00:20:26.229   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:26.229   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat
00:20:26.229   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:26.229   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat
00:20:26.229   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:26.229   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat
00:20:26.229   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:26.229   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat
00:20:26.229   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:26.229   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat
00:20:26.229   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd
00:20:26.229   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:26.229   04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:20:26.229  Malloc1
00:20:26.229  [2024-12-09 04:10:54.718532] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:26.229  Malloc2
00:20:26.229  Malloc3
00:20:26.487  Malloc4
00:20:26.487  Malloc5
00:20:26.487  Malloc6
00:20:26.487  Malloc7
00:20:26.487  Malloc8
00:20:26.744  Malloc9
00:20:26.744  Malloc10
00:20:26.744   04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:26.744   04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:20:26.744   04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:26.744   04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:20:26.744   04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=276861
00:20:26.744   04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 276861 /var/tmp/bdevperf.sock
00:20:26.744   04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 276861 ']'
00:20:26.744   04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:26.744   04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63
00:20:26.744    04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10
00:20:26.744   04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:26.744   04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:20:26.744  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:20:26.744    04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=()
00:20:26.744   04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:26.744    04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config
00:20:26.744   04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:20:26.744    04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:20:26.744    04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:20:26.744  {
00:20:26.744    "params": {
00:20:26.744      "name": "Nvme$subsystem",
00:20:26.744      "trtype": "$TEST_TRANSPORT",
00:20:26.744      "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:26.744      "adrfam": "ipv4",
00:20:26.744      "trsvcid": "$NVMF_PORT",
00:20:26.744      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:26.744      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:26.744      "hdgst": ${hdgst:-false},
00:20:26.744      "ddgst": ${ddgst:-false}
00:20:26.744    },
00:20:26.744    "method": "bdev_nvme_attach_controller"
00:20:26.744  }
00:20:26.744  EOF
00:20:26.744  )")
00:20:26.744     04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:20:26.744    04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:20:26.744    04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:20:26.744  {
00:20:26.744    "params": {
00:20:26.744      "name": "Nvme$subsystem",
00:20:26.744      "trtype": "$TEST_TRANSPORT",
00:20:26.744      "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:26.744      "adrfam": "ipv4",
00:20:26.744      "trsvcid": "$NVMF_PORT",
00:20:26.744      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:26.744      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:26.744      "hdgst": ${hdgst:-false},
00:20:26.744      "ddgst": ${ddgst:-false}
00:20:26.744    },
00:20:26.744    "method": "bdev_nvme_attach_controller"
00:20:26.744  }
00:20:26.744  EOF
00:20:26.744  )")
00:20:26.744     04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:20:26.744    04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:20:26.744    04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:20:26.744  {
00:20:26.744    "params": {
00:20:26.744      "name": "Nvme$subsystem",
00:20:26.744      "trtype": "$TEST_TRANSPORT",
00:20:26.744      "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:26.744      "adrfam": "ipv4",
00:20:26.744      "trsvcid": "$NVMF_PORT",
00:20:26.744      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:26.744      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:26.744      "hdgst": ${hdgst:-false},
00:20:26.744      "ddgst": ${ddgst:-false}
00:20:26.744    },
00:20:26.744    "method": "bdev_nvme_attach_controller"
00:20:26.744  }
00:20:26.744  EOF
00:20:26.744  )")
00:20:26.744     04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:20:26.744    04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:20:26.744    04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:20:26.744  {
00:20:26.744    "params": {
00:20:26.744      "name": "Nvme$subsystem",
00:20:26.744      "trtype": "$TEST_TRANSPORT",
00:20:26.744      "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:26.744      "adrfam": "ipv4",
00:20:26.744      "trsvcid": "$NVMF_PORT",
00:20:26.744      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:26.744      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:26.744      "hdgst": ${hdgst:-false},
00:20:26.744      "ddgst": ${ddgst:-false}
00:20:26.744    },
00:20:26.744    "method": "bdev_nvme_attach_controller"
00:20:26.744  }
00:20:26.744  EOF
00:20:26.744  )")
00:20:26.744     04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:20:26.744    04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:20:26.744    04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:20:26.744  {
00:20:26.744    "params": {
00:20:26.744      "name": "Nvme$subsystem",
00:20:26.744      "trtype": "$TEST_TRANSPORT",
00:20:26.744      "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:26.744      "adrfam": "ipv4",
00:20:26.744      "trsvcid": "$NVMF_PORT",
00:20:26.744      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:26.744      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:26.744      "hdgst": ${hdgst:-false},
00:20:26.744      "ddgst": ${ddgst:-false}
00:20:26.744    },
00:20:26.744    "method": "bdev_nvme_attach_controller"
00:20:26.744  }
00:20:26.744  EOF
00:20:26.744  )")
00:20:26.744     04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:20:26.744    04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:20:26.744    04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:20:26.744  {
00:20:26.744    "params": {
00:20:26.744      "name": "Nvme$subsystem",
00:20:26.744      "trtype": "$TEST_TRANSPORT",
00:20:26.744      "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:26.744      "adrfam": "ipv4",
00:20:26.744      "trsvcid": "$NVMF_PORT",
00:20:26.744      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:26.744      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:26.744      "hdgst": ${hdgst:-false},
00:20:26.744      "ddgst": ${ddgst:-false}
00:20:26.744    },
00:20:26.744    "method": "bdev_nvme_attach_controller"
00:20:26.744  }
00:20:26.744  EOF
00:20:26.744  )")
00:20:26.744     04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:20:26.744    04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:20:26.744    04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:20:26.744  {
00:20:26.744    "params": {
00:20:26.744      "name": "Nvme$subsystem",
00:20:26.744      "trtype": "$TEST_TRANSPORT",
00:20:26.744      "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:26.744      "adrfam": "ipv4",
00:20:26.744      "trsvcid": "$NVMF_PORT",
00:20:26.744      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:26.744      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:26.744      "hdgst": ${hdgst:-false},
00:20:26.744      "ddgst": ${ddgst:-false}
00:20:26.744    },
00:20:26.744    "method": "bdev_nvme_attach_controller"
00:20:26.744  }
00:20:26.744  EOF
00:20:26.744  )")
00:20:26.744     04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:20:26.744    04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:20:26.744    04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:20:26.744  {
00:20:26.744    "params": {
00:20:26.744      "name": "Nvme$subsystem",
00:20:26.744      "trtype": "$TEST_TRANSPORT",
00:20:26.744      "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:26.744      "adrfam": "ipv4",
00:20:26.744      "trsvcid": "$NVMF_PORT",
00:20:26.744      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:26.745      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:26.745      "hdgst": ${hdgst:-false},
00:20:26.745      "ddgst": ${ddgst:-false}
00:20:26.745    },
00:20:26.745    "method": "bdev_nvme_attach_controller"
00:20:26.745  }
00:20:26.745  EOF
00:20:26.745  )")
00:20:26.745     04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:20:26.745    04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:20:26.745    04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:20:26.745  {
00:20:26.745    "params": {
00:20:26.745      "name": "Nvme$subsystem",
00:20:26.745      "trtype": "$TEST_TRANSPORT",
00:20:26.745      "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:26.745      "adrfam": "ipv4",
00:20:26.745      "trsvcid": "$NVMF_PORT",
00:20:26.745      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:26.745      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:26.745      "hdgst": ${hdgst:-false},
00:20:26.745      "ddgst": ${ddgst:-false}
00:20:26.745    },
00:20:26.745    "method": "bdev_nvme_attach_controller"
00:20:26.745  }
00:20:26.745  EOF
00:20:26.745  )")
00:20:26.745     04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:20:26.745    04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:20:26.745    04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:20:26.745  {
00:20:26.745    "params": {
00:20:26.745      "name": "Nvme$subsystem",
00:20:26.745      "trtype": "$TEST_TRANSPORT",
00:20:26.745      "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:26.745      "adrfam": "ipv4",
00:20:26.745      "trsvcid": "$NVMF_PORT",
00:20:26.745      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:26.745      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:26.745      "hdgst": ${hdgst:-false},
00:20:26.745      "ddgst": ${ddgst:-false}
00:20:26.745    },
00:20:26.745    "method": "bdev_nvme_attach_controller"
00:20:26.745  }
00:20:26.745  EOF
00:20:26.745  )")
00:20:26.745     04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:20:26.745    04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq .
00:20:26.745     04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=,
00:20:26.745     04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:20:26.745    "params": {
00:20:26.745      "name": "Nvme1",
00:20:26.745      "trtype": "tcp",
00:20:26.745      "traddr": "10.0.0.2",
00:20:26.745      "adrfam": "ipv4",
00:20:26.745      "trsvcid": "4420",
00:20:26.745      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:20:26.745      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:20:26.745      "hdgst": false,
00:20:26.745      "ddgst": false
00:20:26.745    },
00:20:26.745    "method": "bdev_nvme_attach_controller"
00:20:26.745  },{
00:20:26.745    "params": {
00:20:26.745      "name": "Nvme2",
00:20:26.745      "trtype": "tcp",
00:20:26.745      "traddr": "10.0.0.2",
00:20:26.745      "adrfam": "ipv4",
00:20:26.745      "trsvcid": "4420",
00:20:26.745      "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:20:26.745      "hostnqn": "nqn.2016-06.io.spdk:host2",
00:20:26.745      "hdgst": false,
00:20:26.745      "ddgst": false
00:20:26.745    },
00:20:26.745    "method": "bdev_nvme_attach_controller"
00:20:26.745  },{
00:20:26.745    "params": {
00:20:26.745      "name": "Nvme3",
00:20:26.745      "trtype": "tcp",
00:20:26.745      "traddr": "10.0.0.2",
00:20:26.745      "adrfam": "ipv4",
00:20:26.745      "trsvcid": "4420",
00:20:26.745      "subnqn": "nqn.2016-06.io.spdk:cnode3",
00:20:26.745      "hostnqn": "nqn.2016-06.io.spdk:host3",
00:20:26.745      "hdgst": false,
00:20:26.745      "ddgst": false
00:20:26.745    },
00:20:26.745    "method": "bdev_nvme_attach_controller"
00:20:26.745  },{
00:20:26.745    "params": {
00:20:26.745      "name": "Nvme4",
00:20:26.745      "trtype": "tcp",
00:20:26.745      "traddr": "10.0.0.2",
00:20:26.745      "adrfam": "ipv4",
00:20:26.745      "trsvcid": "4420",
00:20:26.745      "subnqn": "nqn.2016-06.io.spdk:cnode4",
00:20:26.745      "hostnqn": "nqn.2016-06.io.spdk:host4",
00:20:26.745      "hdgst": false,
00:20:26.745      "ddgst": false
00:20:26.745    },
00:20:26.745    "method": "bdev_nvme_attach_controller"
00:20:26.745  },{
00:20:26.745    "params": {
00:20:26.745      "name": "Nvme5",
00:20:26.745      "trtype": "tcp",
00:20:26.745      "traddr": "10.0.0.2",
00:20:26.745      "adrfam": "ipv4",
00:20:26.745      "trsvcid": "4420",
00:20:26.745      "subnqn": "nqn.2016-06.io.spdk:cnode5",
00:20:26.745      "hostnqn": "nqn.2016-06.io.spdk:host5",
00:20:26.745      "hdgst": false,
00:20:26.745      "ddgst": false
00:20:26.745    },
00:20:26.745    "method": "bdev_nvme_attach_controller"
00:20:26.745  },{
00:20:26.745    "params": {
00:20:26.745      "name": "Nvme6",
00:20:26.745      "trtype": "tcp",
00:20:26.745      "traddr": "10.0.0.2",
00:20:26.745      "adrfam": "ipv4",
00:20:26.745      "trsvcid": "4420",
00:20:26.745      "subnqn": "nqn.2016-06.io.spdk:cnode6",
00:20:26.745      "hostnqn": "nqn.2016-06.io.spdk:host6",
00:20:26.745      "hdgst": false,
00:20:26.745      "ddgst": false
00:20:26.745    },
00:20:26.745    "method": "bdev_nvme_attach_controller"
00:20:26.745  },{
00:20:26.745    "params": {
00:20:26.745      "name": "Nvme7",
00:20:26.745      "trtype": "tcp",
00:20:26.745      "traddr": "10.0.0.2",
00:20:26.745      "adrfam": "ipv4",
00:20:26.745      "trsvcid": "4420",
00:20:26.745      "subnqn": "nqn.2016-06.io.spdk:cnode7",
00:20:26.745      "hostnqn": "nqn.2016-06.io.spdk:host7",
00:20:26.745      "hdgst": false,
00:20:26.745      "ddgst": false
00:20:26.745    },
00:20:26.745    "method": "bdev_nvme_attach_controller"
00:20:26.745  },{
00:20:26.745    "params": {
00:20:26.745      "name": "Nvme8",
00:20:26.745      "trtype": "tcp",
00:20:26.745      "traddr": "10.0.0.2",
00:20:26.745      "adrfam": "ipv4",
00:20:26.745      "trsvcid": "4420",
00:20:26.745      "subnqn": "nqn.2016-06.io.spdk:cnode8",
00:20:26.745      "hostnqn": "nqn.2016-06.io.spdk:host8",
00:20:26.745      "hdgst": false,
00:20:26.745      "ddgst": false
00:20:26.745    },
00:20:26.745    "method": "bdev_nvme_attach_controller"
00:20:26.745  },{
00:20:26.745    "params": {
00:20:26.745      "name": "Nvme9",
00:20:26.745      "trtype": "tcp",
00:20:26.745      "traddr": "10.0.0.2",
00:20:26.745      "adrfam": "ipv4",
00:20:26.745      "trsvcid": "4420",
00:20:26.745      "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:20:26.745      "hostnqn": "nqn.2016-06.io.spdk:host9",
00:20:26.745      "hdgst": false,
00:20:26.745      "ddgst": false
00:20:26.745    },
00:20:26.745    "method": "bdev_nvme_attach_controller"
00:20:26.745  },{
00:20:26.745    "params": {
00:20:26.745      "name": "Nvme10",
00:20:26.745      "trtype": "tcp",
00:20:26.745      "traddr": "10.0.0.2",
00:20:26.745      "adrfam": "ipv4",
00:20:26.745      "trsvcid": "4420",
00:20:26.745      "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:20:26.745      "hostnqn": "nqn.2016-06.io.spdk:host10",
00:20:26.745      "hdgst": false,
00:20:26.745      "ddgst": false
00:20:26.745    },
00:20:26.745    "method": "bdev_nvme_attach_controller"
00:20:26.745  }'
00:20:26.745  [2024-12-09 04:10:55.244734] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:20:26.745  [2024-12-09 04:10:55.244809] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:20:26.745  [2024-12-09 04:10:55.317368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:27.002  [2024-12-09 04:10:55.376694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:28.897   04:10:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:28.897   04:10:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0
00:20:28.897   04:10:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:20:28.897   04:10:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:28.897   04:10:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:20:28.897   04:10:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:28.897   04:10:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 276861
00:20:28.897   04:10:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1
00:20:28.897   04:10:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1
00:20:29.829  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 276861 Killed                  $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}")
00:20:29.829   04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 276686
00:20:29.829   04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:20:29.830    04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10
00:20:29.830    04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=()
00:20:29.830    04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config
00:20:29.830    04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:20:29.830    04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:20:29.830  {
00:20:29.830    "params": {
00:20:29.830      "name": "Nvme$subsystem",
00:20:29.830      "trtype": "$TEST_TRANSPORT",
00:20:29.830      "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:29.830      "adrfam": "ipv4",
00:20:29.830      "trsvcid": "$NVMF_PORT",
00:20:29.830      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:29.830      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:29.830      "hdgst": ${hdgst:-false},
00:20:29.830      "ddgst": ${ddgst:-false}
00:20:29.830    },
00:20:29.830    "method": "bdev_nvme_attach_controller"
00:20:29.830  }
00:20:29.830  EOF
00:20:29.830  )")
00:20:29.830     04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:20:29.830    04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq .
00:20:29.830     04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=,
00:20:29.830     04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:20:29.830    "params": {
00:20:29.830      "name": "Nvme1",
00:20:29.830      "trtype": "tcp",
00:20:29.830      "traddr": "10.0.0.2",
00:20:29.831      "adrfam": "ipv4",
00:20:29.831      "trsvcid": "4420",
00:20:29.831      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:20:29.831      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:20:29.831      "hdgst": false,
00:20:29.831      "ddgst": false
00:20:29.831    },
00:20:29.831    "method": "bdev_nvme_attach_controller"
00:20:29.831  },{
00:20:29.831    "params": {
00:20:29.831      "name": "Nvme2",
00:20:29.831      "trtype": "tcp",
00:20:29.831      "traddr": "10.0.0.2",
00:20:29.831      "adrfam": "ipv4",
00:20:29.831      "trsvcid": "4420",
00:20:29.831      "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:20:29.831      "hostnqn": "nqn.2016-06.io.spdk:host2",
00:20:29.831      "hdgst": false,
00:20:29.831      "ddgst": false
00:20:29.831    },
00:20:29.831    "method": "bdev_nvme_attach_controller"
00:20:29.831  },{
00:20:29.831    "params": {
00:20:29.831      "name": "Nvme3",
00:20:29.831      "trtype": "tcp",
00:20:29.831      "traddr": "10.0.0.2",
00:20:29.831      "adrfam": "ipv4",
00:20:29.831      "trsvcid": "4420",
00:20:29.831      "subnqn": "nqn.2016-06.io.spdk:cnode3",
00:20:29.831      "hostnqn": "nqn.2016-06.io.spdk:host3",
00:20:29.831      "hdgst": false,
00:20:29.831      "ddgst": false
00:20:29.831    },
00:20:29.831    "method": "bdev_nvme_attach_controller"
00:20:29.831  },{
00:20:29.831    "params": {
00:20:29.831      "name": "Nvme4",
00:20:29.831      "trtype": "tcp",
00:20:29.831      "traddr": "10.0.0.2",
00:20:29.831      "adrfam": "ipv4",
00:20:29.831      "trsvcid": "4420",
00:20:29.831      "subnqn": "nqn.2016-06.io.spdk:cnode4",
00:20:29.831      "hostnqn": "nqn.2016-06.io.spdk:host4",
00:20:29.831      "hdgst": false,
00:20:29.831      "ddgst": false
00:20:29.831    },
00:20:29.831    "method": "bdev_nvme_attach_controller"
00:20:29.831  },{
00:20:29.831    "params": {
00:20:29.831      "name": "Nvme5",
00:20:29.831      "trtype": "tcp",
00:20:29.831      "traddr": "10.0.0.2",
00:20:29.831      "adrfam": "ipv4",
00:20:29.831      "trsvcid": "4420",
00:20:29.831      "subnqn": "nqn.2016-06.io.spdk:cnode5",
00:20:29.831      "hostnqn": "nqn.2016-06.io.spdk:host5",
00:20:29.831      "hdgst": false,
00:20:29.831      "ddgst": false
00:20:29.831    },
00:20:29.831    "method": "bdev_nvme_attach_controller"
00:20:29.831  },{
00:20:29.831    "params": {
00:20:29.831      "name": "Nvme6",
00:20:29.831      "trtype": "tcp",
00:20:29.831      "traddr": "10.0.0.2",
00:20:29.831      "adrfam": "ipv4",
00:20:29.831      "trsvcid": "4420",
00:20:29.831      "subnqn": "nqn.2016-06.io.spdk:cnode6",
00:20:29.831      "hostnqn": "nqn.2016-06.io.spdk:host6",
00:20:29.831      "hdgst": false,
00:20:29.831      "ddgst": false
00:20:29.831    },
00:20:29.831    "method": "bdev_nvme_attach_controller"
00:20:29.831  },{
00:20:29.831    "params": {
00:20:29.831      "name": "Nvme7",
00:20:29.831      "trtype": "tcp",
00:20:29.831      "traddr": "10.0.0.2",
00:20:29.831      "adrfam": "ipv4",
00:20:29.831      "trsvcid": "4420",
00:20:29.831      "subnqn": "nqn.2016-06.io.spdk:cnode7",
00:20:29.831      "hostnqn": "nqn.2016-06.io.spdk:host7",
00:20:29.831      "hdgst": false,
00:20:29.831      "ddgst": false
00:20:29.831    },
00:20:29.831    "method": "bdev_nvme_attach_controller"
00:20:29.831  },{
00:20:29.831    "params": {
00:20:29.831      "name": "Nvme8",
00:20:29.831      "trtype": "tcp",
00:20:29.831      "traddr": "10.0.0.2",
00:20:29.831      "adrfam": "ipv4",
00:20:29.831      "trsvcid": "4420",
00:20:29.831      "subnqn": "nqn.2016-06.io.spdk:cnode8",
00:20:29.831      "hostnqn": "nqn.2016-06.io.spdk:host8",
00:20:29.831      "hdgst": false,
00:20:29.831      "ddgst": false
00:20:29.831    },
00:20:29.831    "method": "bdev_nvme_attach_controller"
00:20:29.831  },{
00:20:29.831    "params": {
00:20:29.831      "name": "Nvme9",
00:20:29.831      "trtype": "tcp",
00:20:29.831      "traddr": "10.0.0.2",
00:20:29.831      "adrfam": "ipv4",
00:20:29.831      "trsvcid": "4420",
00:20:29.831      "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:20:29.831      "hostnqn": "nqn.2016-06.io.spdk:host9",
00:20:29.831      "hdgst": false,
00:20:29.831      "ddgst": false
00:20:29.831    },
00:20:29.831    "method": "bdev_nvme_attach_controller"
00:20:29.831  },{
00:20:29.831    "params": {
00:20:29.831      "name": "Nvme10",
00:20:29.831      "trtype": "tcp",
00:20:29.831      "traddr": "10.0.0.2",
00:20:29.831      "adrfam": "ipv4",
00:20:29.831      "trsvcid": "4420",
00:20:29.831      "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:20:29.831      "hostnqn": "nqn.2016-06.io.spdk:host10",
00:20:29.831      "hdgst": false,
00:20:29.831      "ddgst": false
00:20:29.831    },
00:20:29.831    "method": "bdev_nvme_attach_controller"
00:20:29.831  }'
00:20:29.831  [2024-12-09 04:10:58.317637] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:20:29.831  [2024-12-09 04:10:58.317721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid277168 ]
00:20:29.831  [2024-12-09 04:10:58.393016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:30.089  [2024-12-09 04:10:58.454390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:31.463  Running I/O for 1 second...
00:20:32.397       1800.00 IOPS,   112.50 MiB/s
00:20:32.397                                                                                                  Latency(us)
00:20:32.397  
[2024-12-09T03:11:00.973Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:20:32.397  Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:32.397  	 Verification LBA range: start 0x0 length 0x400
00:20:32.397  	 Nvme1n1             :       1.09     234.62      14.66       0.00     0.00  265725.91   20486.07  253211.69
00:20:32.397  Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:32.397  	 Verification LBA range: start 0x0 length 0x400
00:20:32.397  	 Nvme2n1             :       1.15     226.79      14.17       0.00     0.00  274357.04    4102.07  243891.01
00:20:32.397  Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:32.397  	 Verification LBA range: start 0x0 length 0x400
00:20:32.397  	 Nvme3n1             :       1.11     230.95      14.43       0.00     0.00  265145.84   19418.07  251658.24
00:20:32.397  Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:32.397  	 Verification LBA range: start 0x0 length 0x400
00:20:32.397  	 Nvme4n1             :       1.10     232.17      14.51       0.00     0.00  258559.05   23787.14  236123.78
00:20:32.397  Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:32.397  	 Verification LBA range: start 0x0 length 0x400
00:20:32.397  	 Nvme5n1             :       1.15     222.45      13.90       0.00     0.00  266694.16   19806.44  259425.47
00:20:32.397  Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:32.397  	 Verification LBA range: start 0x0 length 0x400
00:20:32.397  	 Nvme6n1             :       1.13     225.61      14.10       0.00     0.00  254663.87   19029.71  257872.02
00:20:32.397  Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:32.397  	 Verification LBA range: start 0x0 length 0x400
00:20:32.397  	 Nvme7n1             :       1.16     221.04      13.81       0.00     0.00  259365.74   18738.44  268746.15
00:20:32.397  Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:32.397  	 Verification LBA range: start 0x0 length 0x400
00:20:32.397  	 Nvme8n1             :       1.17     273.61      17.10       0.00     0.00  205746.78    6747.78  253211.69
00:20:32.397  Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:32.397  	 Verification LBA range: start 0x0 length 0x400
00:20:32.397  	 Nvme9n1             :       1.16     220.18      13.76       0.00     0.00  251539.15   21359.88  284280.60
00:20:32.397  Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:32.397  	 Verification LBA range: start 0x0 length 0x400
00:20:32.397  	 Nvme10n1            :       1.21     212.20      13.26       0.00     0.00  248715.38   22524.97  268746.15
00:20:32.397  
[2024-12-09T03:11:00.973Z]  ===================================================================================================================
00:20:32.397  
[2024-12-09T03:11:00.973Z]  Total                       :               2299.62     143.73       0.00     0.00  253879.96    4102.07  284280.60
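The bdevperf Total row can be sanity-checked from the run parameters: throughput in MiB/s should equal IOPS times the I/O size (`-o 65536` above) divided by 2^20. A quick check against the numbers in this log:

```python
# Verify the bdevperf summary arithmetic: MiB/s = IOPS * io_size_bytes / 2**20.
io_size = 65536        # from the bdevperf invocation: -o 65536
total_iops = 2299.62   # Total row, IOPS column
mib_s = total_iops * io_size / 2**20
print(round(mib_s, 2))  # 143.73, matching the Total row's MiB/s column
```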
00:20:32.656   04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:20:32.656   04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:20:32.656   04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:20:32.656   04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:20:32.656   04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
00:20:32.656   04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup
00:20:32.656   04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync
00:20:32.656   04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:20:32.656   04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e
00:20:32.656   04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:32.656   04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:20:32.656  rmmod nvme_tcp
00:20:32.656  rmmod nvme_fabrics
00:20:32.656  rmmod nvme_keyring
00:20:32.656   04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:20:32.656   04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e
00:20:32.656   04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0
00:20:32.656   04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 276686 ']'
00:20:32.656   04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 276686
00:20:32.656   04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 276686 ']'
00:20:32.656   04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 276686
00:20:32.656    04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname
00:20:32.656   04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:32.656    04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 276686
00:20:32.656   04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:20:32.656   04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:20:32.656   04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 276686'
00:20:32.656  killing process with pid 276686
00:20:32.656   04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 276686
00:20:32.657   04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 276686
00:20:33.223   04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:20:33.223   04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:20:33.223   04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:20:33.223   04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr
00:20:33.223   04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:20:33.223   04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save
00:20:33.223   04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore
00:20:33.223   04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:20:33.223   04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:20:33.223   04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:33.223   04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:20:33.223    04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:20:35.762  
00:20:35.762  real	0m11.574s
00:20:35.762  user	0m33.368s
00:20:35.762  sys	0m3.157s
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:20:35.762  ************************************
00:20:35.762  END TEST nvmf_shutdown_tc1
00:20:35.762  ************************************
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:20:35.762  ************************************
00:20:35.762  START TEST nvmf_shutdown_tc2
00:20:35.762  ************************************
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:20:35.762    04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=()
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=()
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=()
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=()
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=()
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=()
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=()
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:20:35.762  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:20:35.762   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:20:35.762  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:20:35.763  Found net devices under 0000:0a:00.0: cvl_0_0
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:20:35.763  Found net devices under 0000:0a:00.1: cvl_0_1
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
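The `ipts` call above expands into a plain `iptables` invocation with an extra `-m comment --comment 'SPDK_NVMF:...'` appended, so teardown can later find and delete exactly the rules the test added. A minimal re-creation of that wrapper (assumed from the expansion in the trace; it echoes instead of invoking `iptables`, since changing firewall rules needs root):

```shell
# Hypothetical stand-in for the ipts helper traced above: tag every rule
# with an "SPDK_NVMF:" comment recording the original arguments, so cleanup
# can match on the comment. The real helper would run iptables directly.
ipts() {
  # real helper: iptables "$@" -m comment --comment "SPDK_NVMF:$*"
  echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

With the default IFS, `$*` joins the arguments with single spaces, which is why the comment in the trace reads back as the verbatim rule text.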
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:20:35.763  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:35.763  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.397 ms
00:20:35.763  
00:20:35.763  --- 10.0.0.2 ping statistics ---
00:20:35.763  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:35.763  rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:35.763  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:35.763  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms
00:20:35.763  
00:20:35.763  --- 10.0.0.1 ping statistics ---
00:20:35.763  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:35.763  rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms
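The `nvmf_tcp_init` steps above move one port of the NIC pair (`cvl_0_0`) into its own network namespace, leaving the other (`cvl_0_1`) in the host namespace, so initiator (10.0.0.1) and target (10.0.0.2) traffic crosses a real link on a single machine. A dry-run sketch of that topology, printing the commands rather than executing them because namespace and address changes require root (`run` and `NS` are illustrative names, not from the harness):

```shell
# Dry-run of the namespace topology from the trace: target port isolated in
# its own netns, initiator port left in the host netns, then a ping each way.
NS=cvl_0_0_ns_spdk
run() { echo "$@"; }                       # as root, replace body with: "$@"

run ip netns add "$NS"                     # separate net stack for the target
run ip link set cvl_0_0 netns "$NS"        # target port leaves the host netns
run ip addr add 10.0.0.1/24 dev cvl_0_1    # initiator side, host namespace
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run ping -c 1 10.0.0.2                     # host ns -> target ns
run ip netns exec "$NS" ping -c 1 10.0.0.1 # target ns -> host ns
```

Subsequent target commands are then prefixed with `ip netns exec "$NS"` (the `NVMF_TARGET_NS_CMD` array seen above) so the nvmf target binds inside the namespace.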
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=277930
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 277930
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 277930 ']'
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:35.763  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:35.763   04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:20:35.763  [2024-12-09 04:11:03.997110] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:20:35.763  [2024-12-09 04:11:03.997185] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:35.763  [2024-12-09 04:11:04.076094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:20:35.763  [2024-12-09 04:11:04.135140] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:35.763  [2024-12-09 04:11:04.135193] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:35.763  [2024-12-09 04:11:04.135216] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:20:35.763  [2024-12-09 04:11:04.135227] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:20:35.763  [2024-12-09 04:11:04.135237] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:20:35.763  [2024-12-09 04:11:04.136815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:20:35.763  [2024-12-09 04:11:04.136859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:20:35.763  [2024-12-09 04:11:04.136917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:20:35.764  [2024-12-09 04:11:04.136920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:20:35.764   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:35.764   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0
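`waitforlisten` above polls, with a bounded number of retries, until the freshly launched `nvmf_tgt` is accepting RPCs on `/var/tmp/spdk.sock`. A minimal sketch of that pattern, using an ordinary file as a stand-in for the unix socket so it runs unprivileged (`waitfor` is a hypothetical name; the real helper also verifies the pid stays alive between polls):

```shell
# Poll for a path to appear, giving up after max_retries attempts.
waitfor() {
  local sock=$1 max_retries=${2:-100} i=0
  while (( i++ < max_retries )); do
    [ -e "$sock" ] && return 0    # listener is up
    sleep 0.1                     # brief pause between polls
  done
  return 1                        # timed out
}

sock=$(mktemp -u)                 # a path that does not exist yet
( sleep 0.3; touch "$sock" ) &    # the "app" creates its socket a bit later
waitfor "$sock" 100 && echo "listening"
```

This is why the trace interleaves the app's startup notices with the `Waiting for process to start up...` message: the wait loop runs while the target initializes.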
00:20:35.764   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:20:35.764   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:35.764   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:20:35.764   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:35.764   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:20:35.764   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:35.764   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:20:35.764  [2024-12-09 04:11:04.287769] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:35.764   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:35.764   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:20:35.764   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:20:35.764   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:35.764   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:20:35.764   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:20:35.764   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:35.764   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat
00:20:35.764   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:35.764   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat
00:20:35.764   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:35.764   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat
00:20:35.764   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:35.764   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat
00:20:35.764   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:35.764   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat
00:20:35.764   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:35.764   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat
00:20:35.764   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:35.764   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat
00:20:35.764   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:35.764   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat
00:20:35.764   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:35.764   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat
00:20:35.764   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:35.764   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat
00:20:35.764   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd
00:20:35.764   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:35.764   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:20:36.021  Malloc1
00:20:36.021  [2024-12-09 04:11:04.393756] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:36.021  Malloc2
00:20:36.021  Malloc3
00:20:36.021  Malloc4
00:20:36.021  Malloc5
00:20:36.278  Malloc6
00:20:36.278  Malloc7
00:20:36.278  Malloc8
00:20:36.278  Malloc9
00:20:36.278  Malloc10
00:20:36.278   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:36.278   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:20:36.278   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:36.278   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:20:36.536   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=278107
00:20:36.536   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 278107 /var/tmp/bdevperf.sock
00:20:36.536   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 278107 ']'
00:20:36.536    04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10
00:20:36.536   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:20:36.536   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:36.536   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:36.536    04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=()
00:20:36.536   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:20:36.536  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:20:36.536    04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config
00:20:36.536   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:36.536    04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:20:36.536   04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:20:36.536    04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:20:36.536  {
00:20:36.536    "params": {
00:20:36.536      "name": "Nvme$subsystem",
00:20:36.536      "trtype": "$TEST_TRANSPORT",
00:20:36.536      "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:36.536      "adrfam": "ipv4",
00:20:36.536      "trsvcid": "$NVMF_PORT",
00:20:36.536      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:36.536      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:36.536      "hdgst": ${hdgst:-false},
00:20:36.536      "ddgst": ${ddgst:-false}
00:20:36.536    },
00:20:36.536    "method": "bdev_nvme_attach_controller"
00:20:36.536  }
00:20:36.536  EOF
00:20:36.536  )")
00:20:36.536     04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat
00:20:36.536    04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:20:36.536    04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:20:36.536  {
00:20:36.536    "params": {
00:20:36.536      "name": "Nvme$subsystem",
00:20:36.536      "trtype": "$TEST_TRANSPORT",
00:20:36.536      "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:36.536      "adrfam": "ipv4",
00:20:36.536      "trsvcid": "$NVMF_PORT",
00:20:36.536      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:36.536      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:36.536      "hdgst": ${hdgst:-false},
00:20:36.536      "ddgst": ${ddgst:-false}
00:20:36.536    },
00:20:36.536    "method": "bdev_nvme_attach_controller"
00:20:36.536  }
00:20:36.536  EOF
00:20:36.536  )")
00:20:36.536     04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat
00:20:36.536    04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:20:36.536    04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:20:36.536  {
00:20:36.536    "params": {
00:20:36.536      "name": "Nvme$subsystem",
00:20:36.536      "trtype": "$TEST_TRANSPORT",
00:20:36.536      "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:36.536      "adrfam": "ipv4",
00:20:36.536      "trsvcid": "$NVMF_PORT",
00:20:36.536      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:36.536      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:36.536      "hdgst": ${hdgst:-false},
00:20:36.536      "ddgst": ${ddgst:-false}
00:20:36.536    },
00:20:36.536    "method": "bdev_nvme_attach_controller"
00:20:36.536  }
00:20:36.536  EOF
00:20:36.536  )")
00:20:36.536     04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat
00:20:36.536    04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:20:36.536    04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:20:36.536  {
00:20:36.536    "params": {
00:20:36.536      "name": "Nvme$subsystem",
00:20:36.536      "trtype": "$TEST_TRANSPORT",
00:20:36.536      "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:36.536      "adrfam": "ipv4",
00:20:36.536      "trsvcid": "$NVMF_PORT",
00:20:36.536      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:36.536      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:36.536      "hdgst": ${hdgst:-false},
00:20:36.536      "ddgst": ${ddgst:-false}
00:20:36.536    },
00:20:36.536    "method": "bdev_nvme_attach_controller"
00:20:36.536  }
00:20:36.536  EOF
00:20:36.536  )")
00:20:36.537     04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat
00:20:36.537    04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:20:36.537    04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:20:36.537  {
00:20:36.537    "params": {
00:20:36.537      "name": "Nvme$subsystem",
00:20:36.537      "trtype": "$TEST_TRANSPORT",
00:20:36.537      "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:36.537      "adrfam": "ipv4",
00:20:36.537      "trsvcid": "$NVMF_PORT",
00:20:36.537      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:36.537      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:36.537      "hdgst": ${hdgst:-false},
00:20:36.537      "ddgst": ${ddgst:-false}
00:20:36.537    },
00:20:36.537    "method": "bdev_nvme_attach_controller"
00:20:36.537  }
00:20:36.537  EOF
00:20:36.537  )")
00:20:36.537     04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat
00:20:36.537    04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:20:36.537    04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:20:36.537  {
00:20:36.537    "params": {
00:20:36.537      "name": "Nvme$subsystem",
00:20:36.537      "trtype": "$TEST_TRANSPORT",
00:20:36.537      "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:36.537      "adrfam": "ipv4",
00:20:36.537      "trsvcid": "$NVMF_PORT",
00:20:36.537      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:36.537      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:36.537      "hdgst": ${hdgst:-false},
00:20:36.537      "ddgst": ${ddgst:-false}
00:20:36.537    },
00:20:36.537    "method": "bdev_nvme_attach_controller"
00:20:36.537  }
00:20:36.537  EOF
00:20:36.537  )")
00:20:36.537     04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat
00:20:36.537    04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:20:36.537    04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:20:36.537  {
00:20:36.537    "params": {
00:20:36.537      "name": "Nvme$subsystem",
00:20:36.537      "trtype": "$TEST_TRANSPORT",
00:20:36.537      "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:36.537      "adrfam": "ipv4",
00:20:36.537      "trsvcid": "$NVMF_PORT",
00:20:36.537      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:36.537      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:36.537      "hdgst": ${hdgst:-false},
00:20:36.537      "ddgst": ${ddgst:-false}
00:20:36.537    },
00:20:36.537    "method": "bdev_nvme_attach_controller"
00:20:36.537  }
00:20:36.537  EOF
00:20:36.537  )")
00:20:36.537     04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat
00:20:36.537    04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:20:36.537    04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:20:36.537  {
00:20:36.537    "params": {
00:20:36.537      "name": "Nvme$subsystem",
00:20:36.537      "trtype": "$TEST_TRANSPORT",
00:20:36.537      "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:36.537      "adrfam": "ipv4",
00:20:36.537      "trsvcid": "$NVMF_PORT",
00:20:36.537      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:36.537      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:36.537      "hdgst": ${hdgst:-false},
00:20:36.537      "ddgst": ${ddgst:-false}
00:20:36.537    },
00:20:36.537    "method": "bdev_nvme_attach_controller"
00:20:36.537  }
00:20:36.537  EOF
00:20:36.537  )")
00:20:36.537     04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat
00:20:36.537    04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:20:36.537    04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:20:36.537  {
00:20:36.537    "params": {
00:20:36.537      "name": "Nvme$subsystem",
00:20:36.537      "trtype": "$TEST_TRANSPORT",
00:20:36.537      "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:36.537      "adrfam": "ipv4",
00:20:36.537      "trsvcid": "$NVMF_PORT",
00:20:36.537      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:36.537      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:36.537      "hdgst": ${hdgst:-false},
00:20:36.537      "ddgst": ${ddgst:-false}
00:20:36.537    },
00:20:36.537    "method": "bdev_nvme_attach_controller"
00:20:36.537  }
00:20:36.537  EOF
00:20:36.537  )")
00:20:36.537     04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat
00:20:36.537    04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:20:36.537    04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:20:36.537  {
00:20:36.537    "params": {
00:20:36.537      "name": "Nvme$subsystem",
00:20:36.537      "trtype": "$TEST_TRANSPORT",
00:20:36.537      "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:36.537      "adrfam": "ipv4",
00:20:36.537      "trsvcid": "$NVMF_PORT",
00:20:36.537      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:36.537      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:36.537      "hdgst": ${hdgst:-false},
00:20:36.537      "ddgst": ${ddgst:-false}
00:20:36.537    },
00:20:36.537    "method": "bdev_nvme_attach_controller"
00:20:36.537  }
00:20:36.537  EOF
00:20:36.537  )")
00:20:36.537     04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat
00:20:36.537    04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq .
00:20:36.537     04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=,
00:20:36.537     04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:20:36.537    "params": {
00:20:36.537      "name": "Nvme1",
00:20:36.537      "trtype": "tcp",
00:20:36.537      "traddr": "10.0.0.2",
00:20:36.537      "adrfam": "ipv4",
00:20:36.537      "trsvcid": "4420",
00:20:36.537      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:20:36.537      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:20:36.537      "hdgst": false,
00:20:36.537      "ddgst": false
00:20:36.537    },
00:20:36.537    "method": "bdev_nvme_attach_controller"
00:20:36.537  },{
00:20:36.537    "params": {
00:20:36.537      "name": "Nvme2",
00:20:36.537      "trtype": "tcp",
00:20:36.537      "traddr": "10.0.0.2",
00:20:36.537      "adrfam": "ipv4",
00:20:36.537      "trsvcid": "4420",
00:20:36.537      "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:20:36.537      "hostnqn": "nqn.2016-06.io.spdk:host2",
00:20:36.537      "hdgst": false,
00:20:36.537      "ddgst": false
00:20:36.537    },
00:20:36.537    "method": "bdev_nvme_attach_controller"
00:20:36.537  },{
00:20:36.537    "params": {
00:20:36.537      "name": "Nvme3",
00:20:36.537      "trtype": "tcp",
00:20:36.537      "traddr": "10.0.0.2",
00:20:36.537      "adrfam": "ipv4",
00:20:36.537      "trsvcid": "4420",
00:20:36.537      "subnqn": "nqn.2016-06.io.spdk:cnode3",
00:20:36.537      "hostnqn": "nqn.2016-06.io.spdk:host3",
00:20:36.537      "hdgst": false,
00:20:36.537      "ddgst": false
00:20:36.537    },
00:20:36.537    "method": "bdev_nvme_attach_controller"
00:20:36.537  },{
00:20:36.537    "params": {
00:20:36.537      "name": "Nvme4",
00:20:36.537      "trtype": "tcp",
00:20:36.537      "traddr": "10.0.0.2",
00:20:36.537      "adrfam": "ipv4",
00:20:36.537      "trsvcid": "4420",
00:20:36.537      "subnqn": "nqn.2016-06.io.spdk:cnode4",
00:20:36.537      "hostnqn": "nqn.2016-06.io.spdk:host4",
00:20:36.537      "hdgst": false,
00:20:36.537      "ddgst": false
00:20:36.537    },
00:20:36.537    "method": "bdev_nvme_attach_controller"
00:20:36.537  },{
00:20:36.537    "params": {
00:20:36.537      "name": "Nvme5",
00:20:36.537      "trtype": "tcp",
00:20:36.537      "traddr": "10.0.0.2",
00:20:36.537      "adrfam": "ipv4",
00:20:36.537      "trsvcid": "4420",
00:20:36.537      "subnqn": "nqn.2016-06.io.spdk:cnode5",
00:20:36.537      "hostnqn": "nqn.2016-06.io.spdk:host5",
00:20:36.537      "hdgst": false,
00:20:36.537      "ddgst": false
00:20:36.537    },
00:20:36.537    "method": "bdev_nvme_attach_controller"
00:20:36.537  },{
00:20:36.537    "params": {
00:20:36.537      "name": "Nvme6",
00:20:36.537      "trtype": "tcp",
00:20:36.537      "traddr": "10.0.0.2",
00:20:36.537      "adrfam": "ipv4",
00:20:36.537      "trsvcid": "4420",
00:20:36.537      "subnqn": "nqn.2016-06.io.spdk:cnode6",
00:20:36.537      "hostnqn": "nqn.2016-06.io.spdk:host6",
00:20:36.537      "hdgst": false,
00:20:36.537      "ddgst": false
00:20:36.537    },
00:20:36.537    "method": "bdev_nvme_attach_controller"
00:20:36.537  },{
00:20:36.537    "params": {
00:20:36.537      "name": "Nvme7",
00:20:36.537      "trtype": "tcp",
00:20:36.537      "traddr": "10.0.0.2",
00:20:36.537      "adrfam": "ipv4",
00:20:36.537      "trsvcid": "4420",
00:20:36.537      "subnqn": "nqn.2016-06.io.spdk:cnode7",
00:20:36.537      "hostnqn": "nqn.2016-06.io.spdk:host7",
00:20:36.537      "hdgst": false,
00:20:36.537      "ddgst": false
00:20:36.537    },
00:20:36.537    "method": "bdev_nvme_attach_controller"
00:20:36.537  },{
00:20:36.537    "params": {
00:20:36.537      "name": "Nvme8",
00:20:36.537      "trtype": "tcp",
00:20:36.537      "traddr": "10.0.0.2",
00:20:36.537      "adrfam": "ipv4",
00:20:36.537      "trsvcid": "4420",
00:20:36.537      "subnqn": "nqn.2016-06.io.spdk:cnode8",
00:20:36.537      "hostnqn": "nqn.2016-06.io.spdk:host8",
00:20:36.537      "hdgst": false,
00:20:36.537      "ddgst": false
00:20:36.537    },
00:20:36.537    "method": "bdev_nvme_attach_controller"
00:20:36.537  },{
00:20:36.537    "params": {
00:20:36.537      "name": "Nvme9",
00:20:36.537      "trtype": "tcp",
00:20:36.537      "traddr": "10.0.0.2",
00:20:36.537      "adrfam": "ipv4",
00:20:36.537      "trsvcid": "4420",
00:20:36.537      "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:20:36.537      "hostnqn": "nqn.2016-06.io.spdk:host9",
00:20:36.537      "hdgst": false,
00:20:36.537      "ddgst": false
00:20:36.537    },
00:20:36.537    "method": "bdev_nvme_attach_controller"
00:20:36.537  },{
00:20:36.537    "params": {
00:20:36.537      "name": "Nvme10",
00:20:36.537      "trtype": "tcp",
00:20:36.537      "traddr": "10.0.0.2",
00:20:36.537      "adrfam": "ipv4",
00:20:36.537      "trsvcid": "4420",
00:20:36.537      "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:20:36.537      "hostnqn": "nqn.2016-06.io.spdk:host10",
00:20:36.537      "hdgst": false,
00:20:36.537      "ddgst": false
00:20:36.537    },
00:20:36.537    "method": "bdev_nvme_attach_controller"
00:20:36.537  }'
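The xtrace above (nvmf/common.sh@562-586) builds one JSON object per subsystem with a here-doc appended to a `config` array, then sets `IFS=,` so `"${config[*]}"` expands comma-separated before being piped to `jq .`. A minimal sketch of that accumulate-then-join pattern, with simplified JSON fragments standing in for the full `bdev_nvme_attach_controller` params:

```shell
#!/usr/bin/env bash
# Build one JSON fragment per subsystem, then comma-join them for jq,
# mirroring the config+=("$(cat <<-EOF ...)") / IFS=, steps in the log.
config=()
for subsystem in 1 2; do
  config+=("$(cat <<EOF
{"params": {"name": "Nvme$subsystem"}, "method": "bdev_nvme_attach_controller"}
EOF
)")
done
# "${config[*]}" joins array elements with the first character of IFS;
# the command substitution's subshell keeps the IFS change scoped.
joined=$(IFS=,; printf '%s' "${config[*]}")
echo "$joined"
```

Piping `"$joined"` into `jq .` then pretty-prints the object stream, which is exactly the expanded output seen in the log.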
00:20:36.537  [2024-12-09 04:11:04.921731] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:20:36.537  [2024-12-09 04:11:04.921817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid278107 ]
00:20:36.537  [2024-12-09 04:11:04.992864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:36.537  [2024-12-09 04:11:05.052240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:38.431  Running I/O for 10 seconds...
00:20:38.688   04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:38.688   04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0
00:20:38.688   04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:20:38.688   04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:38.688   04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:20:38.688   04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:38.688   04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:20:38.688   04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:20:38.688   04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']'
00:20:38.688   04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1
00:20:38.689   04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i
00:20:38.689   04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 ))
00:20:38.689   04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:20:38.689    04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:20:38.689    04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:20:38.689    04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:38.689    04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:20:38.689    04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:38.689   04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3
00:20:38.689   04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']'
00:20:38.689   04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25
00:20:38.946   04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- ))
00:20:38.946   04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:20:38.946    04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:20:38.946    04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:20:38.946    04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:38.946    04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:20:38.946    04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:38.946   04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67
00:20:38.946   04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']'
00:20:38.946   04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25
00:20:39.203   04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- ))
00:20:39.203   04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:20:39.203    04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:20:39.203    04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:20:39.203    04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:39.203    04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:20:39.203    04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:39.203   04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131
00:20:39.203   04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']'
00:20:39.203   04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0
00:20:39.203   04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break
00:20:39.203   04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0
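The `waitforio` helper traced above (target/shutdown.sh@58-70) retries up to 10 times with a 0.25 s sleep, reading `bdev_get_iostat` through `jq -r '.bdevs[0].num_read_ops'` until the count reaches 100. A simplified reconstruction with a stubbed counter in place of the real `rpc_cmd` call (stub names `poll_reads`/`count` are hypothetical; the growth step of 64 is chosen only so the stub reproduces the 3, 67, 131 sequence from the log):

```shell
#!/usr/bin/env bash
# Simplified waitforio: poll until the bdev has done >= 100 reads,
# giving up after 10 attempts. poll_reads stubs the rpc_cmd+jq query.
reads=3
poll_reads() { count=$reads; reads=$((reads + 64)); }   # sets global $count

waitforio() {
    local ret=1 i
    for ((i = 10; i != 0; i--)); do
        poll_reads
        if [ "$count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

waitforio && echo "bdev saw enough read I/O"
```

As in the log, the loop passes on the third poll (3, then 67, then 131 >= 100) and returns 0 so the caller can proceed to shut bdevperf down.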
00:20:39.203   04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 278107
00:20:39.203   04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 278107 ']'
00:20:39.203   04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 278107
00:20:39.203    04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname
00:20:39.203   04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:39.203    04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 278107
00:20:39.203   04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:39.203   04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:39.203   04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 278107'
00:20:39.203  killing process with pid 278107
00:20:39.203   04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 278107
00:20:39.203   04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 278107
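The `killprocess` sequence above (autotest_common.sh@954-978) guards the signal: a non-empty pid check, `kill -0` as a liveness probe, a `ps -o comm=` lookup (so a `sudo`-wrapped process would be handled differently), then `kill` followed by `wait` to reap the child. A reduced sketch of that guard against a throwaway background process (the `sudo` branch is omitted):

```shell
#!/usr/bin/env bash
# Reduced killprocess: verify the pid is alive, log, signal, then reap.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                 # the '[' -z "$pid" ']' guard
    kill -0 "$pid" 2>/dev/null || return 1    # probe: is it still alive?
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true           # reap; wait reports the signal
}

sleep 60 &          # throwaway target process
bgpid=$!
killprocess "$bgpid"
```

`wait` on a SIGTERM-killed child exits with 128+15, which is why the sketch swallows that status; the log's version instead does a separate `wait "$pid"` (autotest_common.sh@978) after the kill.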
00:20:39.460  Received shutdown signal, test time was about 0.951401 seconds
00:20:39.460  
00:20:39.460                                                                                                  Latency(us)
00:20:39.460  
00:20:39.460  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:20:39.460  Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.460  	 Verification LBA range: start 0x0 length 0x400
00:20:39.460  	 Nvme1n1             :       0.95     269.31      16.83       0.00     0.00  234875.26   20874.43  254765.13
00:20:39.460  Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.460  	 Verification LBA range: start 0x0 length 0x400
00:20:39.460  	 Nvme2n1             :       0.94     271.69      16.98       0.00     0.00  227777.23   23592.96  240784.12
00:20:39.460  Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.460  	 Verification LBA range: start 0x0 length 0x400
00:20:39.460  	 Nvme3n1             :       0.95     270.36      16.90       0.00     0.00  224888.79   18252.99  256318.58
00:20:39.460  Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.460  	 Verification LBA range: start 0x0 length 0x400
00:20:39.460  	 Nvme4n1             :       0.93     278.29      17.39       0.00     0.00  212580.77    4927.34  250104.79
00:20:39.460  Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.460  	 Verification LBA range: start 0x0 length 0x400
00:20:39.460  	 Nvme5n1             :       0.93     206.08      12.88       0.00     0.00  282668.50   39418.69  267192.70
00:20:39.460  Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.460  	 Verification LBA range: start 0x0 length 0x400
00:20:39.460  	 Nvme6n1             :       0.92     208.27      13.02       0.00     0.00  273368.30   20680.25  245444.46
00:20:39.460  Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.460  	 Verification LBA range: start 0x0 length 0x400
00:20:39.460  	 Nvme7n1             :       0.91     210.09      13.13       0.00     0.00  262992.53   33787.45  229910.00
00:20:39.460  Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.460  	 Verification LBA range: start 0x0 length 0x400
00:20:39.460  	 Nvme8n1             :       0.91     211.07      13.19       0.00     0.00  257421.46   18252.99  250104.79
00:20:39.460  Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.460  	 Verification LBA range: start 0x0 length 0x400
00:20:39.460  	 Nvme9n1             :       0.94     204.81      12.80       0.00     0.00  260848.96   22913.33  281173.71
00:20:39.460  Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.460  	 Verification LBA range: start 0x0 length 0x400
00:20:39.460  	 Nvme10n1            :       0.91     215.91      13.49       0.00     0.00  236861.88    2633.58  243891.01
00:20:39.460  
00:20:39.460  ===================================================================================================================
00:20:39.460  
00:20:39.460  Total                       :               2345.89     146.62       0.00     0.00  244716.21    2633.58  281173.71
00:20:39.716   04:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:20:40.648   04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 277930
00:20:40.648   04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:20:40.648   04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:20:40.648   04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:20:40.648   04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:20:40.648   04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini
00:20:40.648   04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup
00:20:40.648   04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync
00:20:40.648   04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:20:40.648   04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e
00:20:40.648   04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:40.648   04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:20:40.648  rmmod nvme_tcp
00:20:40.648  rmmod nvme_fabrics
00:20:40.648  rmmod nvme_keyring
00:20:40.648   04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:20:40.648   04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e
00:20:40.648   04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0
00:20:40.648   04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 277930 ']'
00:20:40.648   04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 277930
00:20:40.648   04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 277930 ']'
00:20:40.648   04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 277930
00:20:40.648    04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname
00:20:40.648   04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:40.648    04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 277930
00:20:40.648   04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:20:40.648   04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:20:40.648   04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 277930'
00:20:40.648  killing process with pid 277930
00:20:40.648   04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 277930
00:20:40.648   04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 277930
00:20:41.214   04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:20:41.214   04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:20:41.214   04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:20:41.214   04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr
00:20:41.214   04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save
00:20:41.214   04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:20:41.214   04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore
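The `iptr` cleanup traced above (nvmf/common.sh@791) removes every SPDK_NVMF-tagged firewall rule by round-tripping the ruleset: `iptables-save | grep -v SPDK_NVMF | iptables-restore`. The same filter pattern on a plain-text stand-in ruleset (no root or iptables needed; the rule lines here are illustrative only):

```shell
#!/usr/bin/env bash
# The iptr pattern: dump rules, drop SPDK_NVMF-tagged ones, reload the rest.
# Plain text stands in for iptables-save / iptables-restore output here.
rules='-A INPUT -p tcp --dport 22 -j ACCEPT
-A INPUT -p tcp --dport 4420 -m comment --comment SPDK_NVMF -j ACCEPT
-A INPUT -j LOG'
kept=$(printf '%s\n' "$rules" | grep -v SPDK_NVMF)
printf '%s\n' "$kept"
```

Filtering the textual dump rather than deleting rules one by one keeps the operation atomic on restore, which is why the test harness prefers it for teardown.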
00:20:41.214   04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:20:41.214   04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:20:41.214   04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:41.214   04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:20:41.214    04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:43.122   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:20:43.122  
00:20:43.122  real	0m7.875s
00:20:43.122  user	0m24.585s
00:20:43.122  sys	0m1.427s
00:20:43.122   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:43.122   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:20:43.122  ************************************
00:20:43.122  END TEST nvmf_shutdown_tc2
00:20:43.122  ************************************
00:20:43.122   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3
00:20:43.122   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:20:43.122   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:43.122   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:20:43.383  ************************************
00:20:43.383  START TEST nvmf_shutdown_tc3
00:20:43.383  ************************************
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:20:43.383    04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=()
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=()
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=()
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=()
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=()
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=()
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=()
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:20:43.383  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:20:43.383  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:20:43.383   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:20:43.384  Found net devices under 0000:0a:00.0: cvl_0_0
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:20:43.384  Found net devices under 0000:0a:00.1: cvl_0_1
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
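The device discovery traced above relies on a bash glob over `/sys/bus/pci/devices/$pci/net/` followed by the `${arr[@]##*/}` prefix-strip expansion to reduce absolute paths to interface names. A minimal, self-contained sketch of that pattern (a temporary directory stands in for `/sys`, so it runs unprivileged):

```shell
set -euo pipefail

# Stand-in for /sys/bus/pci/devices: two PCI functions, one netdev each.
sysdir=$(mktemp -d)
mkdir -p "$sysdir/0000:0a:00.0/net/cvl_0_0" "$sysdir/0000:0a:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:0a:00.0 0000:0a:00.1; do
    pci_net_devs=("$sysdir/$pci/net/"*)      # glob -> absolute paths
    pci_net_devs=("${pci_net_devs[@]##*/}")  # strip longest */ prefix -> names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
echo "total=${#net_devs[@]}"
rm -rf "$sysdir"
```

This mirrors the `pci_net_devs` handling in `nvmf/common.sh` lines 410-429 of the trace; the temp-dir layout is a stand-in for the real sysfs tree.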
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:20:43.384  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:43.384  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms
00:20:43.384  
00:20:43.384  --- 10.0.0.2 ping statistics ---
00:20:43.384  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:43.384  rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:43.384  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:43.384  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms
00:20:43.384  
00:20:43.384  --- 10.0.0.1 ping statistics ---
00:20:43.384  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:43.384  rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms
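The `nvmf_tcp_init` sequence above moves the target-side NIC into its own network namespace so target and initiator can talk over real hardware on one host. Collected from the trace into a commented command sequence (root required; the `cvl_0_0`/`cvl_0_1` names and 10.0.0.x addresses are the values from this particular run):

```shell
ip -4 addr flush cvl_0_0                       # clear any stale addresses
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                   # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk      # move target NIC into it
ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator IP, default namespace
ip netns exec cvl_0_0_ns_spdk \
    ip addr add 10.0.0.2/24 dev cvl_0_0        # target IP inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port, then verify reachability in both directions:
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

Once both pings succeed, `NVMF_APP` is prefixed with `ip netns exec cvl_0_0_ns_spdk` so `nvmf_tgt` listens from inside the namespace.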
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=279025
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 279025
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 279025 ']'
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:43.384  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:43.384   04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:43.384  [2024-12-09 04:11:11.936745] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:20:43.384  [2024-12-09 04:11:11.936842] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:43.643  [2024-12-09 04:11:12.009323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:20:43.643  [2024-12-09 04:11:12.067963] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:43.643  [2024-12-09 04:11:12.068036] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:43.643  [2024-12-09 04:11:12.068049] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:20:43.643  [2024-12-09 04:11:12.068060] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:20:43.643  [2024-12-09 04:11:12.068068] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:20:43.643  [2024-12-09 04:11:12.069684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:20:43.643  [2024-12-09 04:11:12.069744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:20:43.643  [2024-12-09 04:11:12.069809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:20:43.643  [2024-12-09 04:11:12.069812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:20:43.643   04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:43.643   04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0
00:20:43.643   04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:20:43.643   04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:43.643   04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:43.643   04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:43.643   04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:20:43.643   04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:43.644   04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:43.902  [2024-12-09 04:11:12.220476] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:43.902   04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:43.902   04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:20:43.902   04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:20:43.902   04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:43.902   04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:43.902   04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:20:43.902   04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:43.902   04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat
00:20:43.902   04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd
00:20:43.902   04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:43.902   04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:43.902  Malloc1
00:20:43.902  [2024-12-09 04:11:12.322592] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:43.902  Malloc2
00:20:43.902  Malloc3
00:20:43.902  Malloc4
00:20:44.160  Malloc5
00:20:44.160  Malloc6
00:20:44.160  Malloc7
00:20:44.160  Malloc8
00:20:44.160  Malloc9
00:20:44.160  Malloc10
00:20:44.422   04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:44.422   04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:20:44.422   04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:44.422   04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:44.422   04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=279205
00:20:44.422   04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 279205 /var/tmp/bdevperf.sock
00:20:44.422   04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 279205 ']'
00:20:44.422    04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10
00:20:44.422   04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:20:44.422   04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:44.422   04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:44.422    04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=()
00:20:44.422   04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:20:44.422  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:20:44.422    04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config
00:20:44.422   04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:44.422    04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:20:44.422   04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:44.422    04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:20:44.422  {
00:20:44.422    "params": {
00:20:44.422      "name": "Nvme$subsystem",
00:20:44.422      "trtype": "$TEST_TRANSPORT",
00:20:44.422      "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:44.422      "adrfam": "ipv4",
00:20:44.422      "trsvcid": "$NVMF_PORT",
00:20:44.422      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:44.422      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:44.422      "hdgst": ${hdgst:-false},
00:20:44.422      "ddgst": ${ddgst:-false}
00:20:44.422    },
00:20:44.422    "method": "bdev_nvme_attach_controller"
00:20:44.422  }
00:20:44.422  EOF
00:20:44.422  )")
00:20:44.422     04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat
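`gen_nvmf_target_json` builds the bdevperf JSON by appending one heredoc-expanded block per subsystem and comma-joining them via `IFS`, as traced above. A standalone sketch of that pattern (3 subsystems instead of 10; the transport/address/port values are stand-ins for this run's environment variables):

```shell
# Stand-in values for the variables exported by the test harness.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2 3; do
  # Each heredoc expands $subsystem and friends into one JSON object.
  config+=("$(cat <<EOF
{"params": {"name": "Nvme$subsystem", "trtype": "$TEST_TRANSPORT",
 "traddr": "$NVMF_FIRST_TARGET_IP", "trsvcid": "$NVMF_PORT",
 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"},
 "method": "bdev_nvme_attach_controller"}
EOF
)")
done

IFS=,                       # "${config[*]}" joins elements with IFS's first char
json="[${config[*]}]"
unset IFS
echo "$json"
```

The real helper additionally pipes the result through `jq .` and feeds it to bdevperf as `--json /dev/fd/63`.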
00:20:44.423    04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq .
00:20:44.423     04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=,
00:20:44.423     04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:20:44.423    "params": {
00:20:44.423      "name": "Nvme1",
00:20:44.423      "trtype": "tcp",
00:20:44.423      "traddr": "10.0.0.2",
00:20:44.423      "adrfam": "ipv4",
00:20:44.423      "trsvcid": "4420",
00:20:44.423      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:20:44.423      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:20:44.423      "hdgst": false,
00:20:44.423      "ddgst": false
00:20:44.423    },
00:20:44.423    "method": "bdev_nvme_attach_controller"
00:20:44.423  },{
00:20:44.423    "params": {
00:20:44.423      "name": "Nvme2",
00:20:44.423      "trtype": "tcp",
00:20:44.423      "traddr": "10.0.0.2",
00:20:44.423      "adrfam": "ipv4",
00:20:44.423      "trsvcid": "4420",
00:20:44.423      "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:20:44.423      "hostnqn": "nqn.2016-06.io.spdk:host2",
00:20:44.423      "hdgst": false,
00:20:44.423      "ddgst": false
00:20:44.423    },
00:20:44.423    "method": "bdev_nvme_attach_controller"
00:20:44.423  },{
00:20:44.423    "params": {
00:20:44.423      "name": "Nvme3",
00:20:44.423      "trtype": "tcp",
00:20:44.423      "traddr": "10.0.0.2",
00:20:44.423      "adrfam": "ipv4",
00:20:44.423      "trsvcid": "4420",
00:20:44.423      "subnqn": "nqn.2016-06.io.spdk:cnode3",
00:20:44.423      "hostnqn": "nqn.2016-06.io.spdk:host3",
00:20:44.423      "hdgst": false,
00:20:44.423      "ddgst": false
00:20:44.423    },
00:20:44.423    "method": "bdev_nvme_attach_controller"
00:20:44.423  },{
00:20:44.423    "params": {
00:20:44.423      "name": "Nvme4",
00:20:44.423      "trtype": "tcp",
00:20:44.423      "traddr": "10.0.0.2",
00:20:44.423      "adrfam": "ipv4",
00:20:44.423      "trsvcid": "4420",
00:20:44.423      "subnqn": "nqn.2016-06.io.spdk:cnode4",
00:20:44.423      "hostnqn": "nqn.2016-06.io.spdk:host4",
00:20:44.423      "hdgst": false,
00:20:44.423      "ddgst": false
00:20:44.423    },
00:20:44.423    "method": "bdev_nvme_attach_controller"
00:20:44.423  },{
00:20:44.423    "params": {
00:20:44.423      "name": "Nvme5",
00:20:44.423      "trtype": "tcp",
00:20:44.423      "traddr": "10.0.0.2",
00:20:44.423      "adrfam": "ipv4",
00:20:44.423      "trsvcid": "4420",
00:20:44.423      "subnqn": "nqn.2016-06.io.spdk:cnode5",
00:20:44.423      "hostnqn": "nqn.2016-06.io.spdk:host5",
00:20:44.423      "hdgst": false,
00:20:44.423      "ddgst": false
00:20:44.423    },
00:20:44.423    "method": "bdev_nvme_attach_controller"
00:20:44.423  },{
00:20:44.423    "params": {
00:20:44.423      "name": "Nvme6",
00:20:44.423      "trtype": "tcp",
00:20:44.423      "traddr": "10.0.0.2",
00:20:44.423      "adrfam": "ipv4",
00:20:44.423      "trsvcid": "4420",
00:20:44.423      "subnqn": "nqn.2016-06.io.spdk:cnode6",
00:20:44.423      "hostnqn": "nqn.2016-06.io.spdk:host6",
00:20:44.423      "hdgst": false,
00:20:44.423      "ddgst": false
00:20:44.423    },
00:20:44.423    "method": "bdev_nvme_attach_controller"
00:20:44.423  },{
00:20:44.423    "params": {
00:20:44.423      "name": "Nvme7",
00:20:44.423      "trtype": "tcp",
00:20:44.423      "traddr": "10.0.0.2",
00:20:44.423      "adrfam": "ipv4",
00:20:44.423      "trsvcid": "4420",
00:20:44.423      "subnqn": "nqn.2016-06.io.spdk:cnode7",
00:20:44.423      "hostnqn": "nqn.2016-06.io.spdk:host7",
00:20:44.423      "hdgst": false,
00:20:44.423      "ddgst": false
00:20:44.423    },
00:20:44.423    "method": "bdev_nvme_attach_controller"
00:20:44.423  },{
00:20:44.423    "params": {
00:20:44.423      "name": "Nvme8",
00:20:44.423      "trtype": "tcp",
00:20:44.423      "traddr": "10.0.0.2",
00:20:44.423      "adrfam": "ipv4",
00:20:44.423      "trsvcid": "4420",
00:20:44.423      "subnqn": "nqn.2016-06.io.spdk:cnode8",
00:20:44.423      "hostnqn": "nqn.2016-06.io.spdk:host8",
00:20:44.423      "hdgst": false,
00:20:44.423      "ddgst": false
00:20:44.423    },
00:20:44.423    "method": "bdev_nvme_attach_controller"
00:20:44.423  },{
00:20:44.423    "params": {
00:20:44.423      "name": "Nvme9",
00:20:44.423      "trtype": "tcp",
00:20:44.423      "traddr": "10.0.0.2",
00:20:44.423      "adrfam": "ipv4",
00:20:44.423      "trsvcid": "4420",
00:20:44.423      "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:20:44.423      "hostnqn": "nqn.2016-06.io.spdk:host9",
00:20:44.423      "hdgst": false,
00:20:44.423      "ddgst": false
00:20:44.423    },
00:20:44.423    "method": "bdev_nvme_attach_controller"
00:20:44.423  },{
00:20:44.423    "params": {
00:20:44.423      "name": "Nvme10",
00:20:44.423      "trtype": "tcp",
00:20:44.423      "traddr": "10.0.0.2",
00:20:44.423      "adrfam": "ipv4",
00:20:44.423      "trsvcid": "4420",
00:20:44.423      "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:20:44.423      "hostnqn": "nqn.2016-06.io.spdk:host10",
00:20:44.423      "hdgst": false,
00:20:44.423      "ddgst": false
00:20:44.423    },
00:20:44.423    "method": "bdev_nvme_attach_controller"
00:20:44.423  }'
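The xtrace above (nvmf/common.sh@562-586) shows how the bdevperf config is assembled: one here-doc JSON fragment per subsystem appended to a `config` array, the fragments joined with `IFS=','`, and the result pretty-printed with `jq .`. A minimal sketch of that pattern, assuming example values taken from the expanded output in the log (the real script derives them from the test environment, and wraps the joined fragments in more surrounding JSON than the brackets used here):

```shell
#!/usr/bin/env bash
# Sketch of the config-assembly pattern from nvmf/common.sh as traced above.
# Values mirror the expanded log output; they are examples, not the real
# environment setup.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done

# Join with commas, as the IFS=, / printf '%s\n' step in the log does. The
# array brackets are an assumption standing in for the larger wrapper that
# common.sh emits before piping the result through `jq .`.
IFS=','
json="[${config[*]}]"
unset IFS
printf '%s\n' "$json"
```

Unset digest variables fall back to `false` via `${hdgst:-false}` / `${ddgst:-false}`, which is why every expanded entry in the log shows `"hdgst": false`.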
00:20:44.423  [2024-12-09 04:11:12.819305] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:20:44.423  [2024-12-09 04:11:12.819384] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid279205 ]
00:20:44.423  [2024-12-09 04:11:12.892335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:44.423  [2024-12-09 04:11:12.951537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:45.795  Running I/O for 10 seconds...
00:20:46.361   04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:46.361   04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0
00:20:46.361   04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:20:46.361   04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:46.361   04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:46.361   04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:46.361   04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:20:46.361   04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:20:46.361   04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:20:46.361   04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']'
00:20:46.361   04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1
00:20:46.361   04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i
00:20:46.361   04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 ))
00:20:46.361   04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:20:46.361    04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:20:46.361    04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:20:46.361    04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:46.361    04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:46.361    04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:46.361   04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67
00:20:46.361   04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']'
00:20:46.361   04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25
00:20:46.618   04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- ))
00:20:46.618   04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:20:46.618    04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:20:46.618    04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:20:46.618    04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:46.618    04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:46.618    04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:46.618   04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131
00:20:46.618   04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']'
00:20:46.618   04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0
00:20:46.618   04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break
00:20:46.618   04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0
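The waitforio trace above (target/shutdown.sh@58-70) is a bounded polling loop: up to 10 attempts, 0.25 s apart, succeeding once the bdev reports at least 100 completed reads (67 on the first poll, 131 on the second in this run). A sketch of that loop, where `get_read_ops` is a stand-in for the real `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops'` call and simply replays the two samples seen in the log:

```shell
#!/usr/bin/env bash
# Sketch of the waitforio retry loop traced above. get_read_ops is a stub
# replacing the bdev_get_iostat RPC; it replays the log's samples (67, 131).
get_read_ops() {
    local samples=(67 131)   # log: first poll saw 67 reads, second saw 131
    echo "${samples[$1]:-131}"
}

waitforio() {
    local ret=1 i
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(get_read_ops $((10 - i)))
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

waitforio && echo "bdev is serving I/O ($read_io_count reads)"
```

The loop returns nonzero if all 10 polls stay below the threshold, which is what lets the caller distinguish "I/O is flowing, safe to shut down" from a stalled target.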
00:20:46.618   04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 279025
00:20:46.618   04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 279025 ']'
00:20:46.618   04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 279025
00:20:46.618    04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname
00:20:46.619   04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:46.619    04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 279025
00:20:46.887   04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:20:46.887   04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:20:46.887   04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 279025'
00:20:46.887  killing process with pid 279025
00:20:46.887   04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 279025
00:20:46.887   04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 279025
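The killprocess trace above (autotest_common.sh@954-978) guards the kill in several steps: reject an empty pid, confirm the process is alive with `kill -0`, look up its comm name on Linux so a `sudo` wrapper is never signalled directly, then kill and reap it. A sketch of that sequence, assuming a background `sleep` as a stand-in for the nvmf target process (pid 279025 in the log):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess guard sequence traced above. The sleep child
# stands in for the real target process.
sleep 60 &
pid=$!

killprocess() {
    local p=$1
    [ -n "$p" ] || return 1                   # '[' -z "$pid" ']' guard
    kill -0 "$p" 2>/dev/null || return 1      # must still be running
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$p")
    else
        process_name=$p
    fi
    [ "$process_name" = sudo ] && return 1    # never signal a sudo wrapper
    echo "killing process with pid $p"
    kill "$p"
    wait "$p" 2>/dev/null || true             # reap; TERM exit is expected
}

killprocess "$pid"
```

The comm-name check matters because killing a `sudo` front-end would leave the privileged child running; the trace shows the check resolving to `reactor_1` here, so the kill proceeds.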
00:20:46.887  [2024-12-09 04:11:15.229813] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bad30 is same with the state(6) to be set
00:20:46.887  [2024-12-09 04:11:15.229984] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bad30 is same with the state(6) to be set
00:20:46.887  [2024-12-09 04:11:15.230002] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bad30 is same with the state(6) to be set
00:20:46.887  [2024-12-09 04:11:15.234418] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.887  [2024-12-09 04:11:15.234455] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.887  [2024-12-09 04:11:15.234473] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.887  [2024-12-09 04:11:15.234487] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.887  [2024-12-09 04:11:15.234499] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.887  [2024-12-09 04:11:15.234513] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.887  [2024-12-09 04:11:15.234526] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.887  [2024-12-09 04:11:15.234538] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.887  [2024-12-09 04:11:15.234551] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.887  [2024-12-09 04:11:15.234571] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.887  [2024-12-09 04:11:15.234584] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.887  [2024-12-09 04:11:15.234597] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.887  [2024-12-09 04:11:15.234609] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.887  [2024-12-09 04:11:15.234623] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.887  [2024-12-09 04:11:15.234635] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.887  [2024-12-09 04:11:15.234661] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.887  [2024-12-09 04:11:15.234674] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.887  [2024-12-09 04:11:15.234686] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.887  [2024-12-09 04:11:15.234698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.887  [2024-12-09 04:11:15.234711] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.887  [2024-12-09 04:11:15.234724] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.887  [2024-12-09 04:11:15.234736] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.887  [2024-12-09 04:11:15.234749] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.887  [2024-12-09 04:11:15.234761] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.887  [2024-12-09 04:11:15.234774] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.887  [2024-12-09 04:11:15.234801] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.887  [2024-12-09 04:11:15.234814] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.887  [2024-12-09 04:11:15.234826] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.887  [2024-12-09 04:11:15.234838] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.887  [2024-12-09 04:11:15.234849] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.887  [2024-12-09 04:11:15.234861] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.887  [2024-12-09 04:11:15.234873] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.887  [2024-12-09 04:11:15.234885] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.887  [2024-12-09 04:11:15.234897] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.887  [2024-12-09 04:11:15.234911] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.234925] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.234938] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.234951] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.234963] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.234976] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.234989] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.235002] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.235017] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.235030] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.235042] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.235054] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.235065] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.235077] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.235089] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.235102] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.235113] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.235125] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.235137] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.235149] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.235161] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.235173] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.235185] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.235196] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.235208] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.235220] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.235232] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.235244] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.235255] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.236770] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.236794] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.236807] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.236836] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.236848] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.236860] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.236876] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.236889] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.236901] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.236915] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.236927] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.236946] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.236958] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.236971] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.236983] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.236995] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.237007] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.237018] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.237030] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.237042] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.237054] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.237065] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.237077] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.237088] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.237100] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.237113] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.237124] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.237136] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.237147] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.237159] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.237171] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.237183] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.237195] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.237213] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.237226] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.237238] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.237250] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.237261] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.237296] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.237322] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.237334] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.237347] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.237358] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.237376] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.237388] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.237401] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.237413] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.237424] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.237436] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.237448] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.237460] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.237472] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.237484] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.237496] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.237507] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.888  [2024-12-09 04:11:15.237519] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.237531] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.237553] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.237564] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.237576] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.237606] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.237619] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.237631] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239246] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239287] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239304] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239317] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239330] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239343] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239356] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239369] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239383] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239395] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239408] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239420] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239431] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239443] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239455] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239468] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239480] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239492] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239504] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239517] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239536] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239549] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239561] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239573] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239622] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239635] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239647] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239659] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239670] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239682] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239694] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239705] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239717] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239729] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239741] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239752] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239764] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239776] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239788] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239799] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239811] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239823] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239834] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239845] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239857] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239868] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239880] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239904] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239916] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239927] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239942] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239955] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239967] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239978] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.239989] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.240000] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.240014] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.240025] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.240037] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.240048] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.240059] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.240070] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.241325] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.241361] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.241376] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.241389] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.241401] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.241414] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.241427] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.241439] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.241451] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.241462] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.241474] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.241486] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.241498] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.889  [2024-12-09 04:11:15.241510] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.241521] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.241534] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.241552] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.241565] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.241584] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.241595] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.241608] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.241620] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.241632] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.241645] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.241657] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.241669] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.241681] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.241693] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.241705] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.241717] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.241729] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.241741] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.241754] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.241781] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.241793] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.241805] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.241817] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.241829] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.241840] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.241851] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.241863] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.241874] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.241886] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.241901] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.241913] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.241925] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.241936] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.241948] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.241960] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.241972] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.241983] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.241995] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.242007] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.242019] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.242031] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.242042] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.242054] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.242066] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.242078] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.242089] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.242101] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.242112] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.242124] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.242942] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.242969] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.242983] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.242996] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.243009] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.243021] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.243033] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.243051] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.243064] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.243076] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.243088] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.243099] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.243111] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.243124] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.243136] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.243148] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.243160] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.243171] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.243184] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.243197] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.243212] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.243241] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.243254] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.243267] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.243289] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.243302] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.243315] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.243328] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.243341] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.243353] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.243366] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.243378] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.243397] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.243411] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.243427] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.890  [2024-12-09 04:11:15.243440] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.243452] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.243464] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.243477] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.243489] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.243501] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.243513] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.243525] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.243537] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.243549] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.243561] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.243573] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.243585] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.243604] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.243616] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.243628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.243640] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.243653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.243666] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.243678] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.243705] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.243716] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.243728] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.243740] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.243752] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.243764] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.243778] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.243791] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.245181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:46.891  [2024-12-09 04:11:15.245228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.891  [2024-12-09 04:11:15.245247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:46.891  [2024-12-09 04:11:15.245249] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.245262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.891  [2024-12-09 04:11:15.245282] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.245286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:46.891  [2024-12-09 04:11:15.245310] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.245313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.891  [2024-12-09 04:11:15.245323] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.245328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:46.891  [2024-12-09 04:11:15.245335] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.245342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.891  [2024-12-09 04:11:15.245348] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.245355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812950 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.245362] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.245374] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.245386] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.245398] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.245410] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.245423] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.245435] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.245432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:46.891  [2024-12-09 04:11:15.245448] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.245456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.891  [2024-12-09 04:11:15.245461] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.245480] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.245482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:46.891  [2024-12-09 04:11:15.245493] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.245498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.891  [2024-12-09 04:11:15.245507] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.245514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:46.891  [2024-12-09 04:11:15.245519] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.245537] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.245537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.891  [2024-12-09 04:11:15.245551] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.245553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:46.891  [2024-12-09 04:11:15.245566] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.245568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.891  [2024-12-09 04:11:15.245580] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.245582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4310 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.245592] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.245606] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.245618] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.891  [2024-12-09 04:11:15.245631] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.245630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:46.892  [2024-12-09 04:11:15.245644] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.245652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.892  [2024-12-09 04:11:15.245658] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.245673] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.245675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:46.892  [2024-12-09 04:11:15.245685] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.245695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.892  [2024-12-09 04:11:15.245698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.245711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:46.892  [2024-12-09 04:11:15.245712] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.245726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.892  [2024-12-09 04:11:15.245729] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.245741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:46.892  [2024-12-09 04:11:15.245743] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.245756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.892  [2024-12-09 04:11:15.245757] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.245772] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.245772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c1c80 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.245787] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.245801] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.245813] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.245825] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.245825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:46.892  [2024-12-09 04:11:15.245838] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.245846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.892  [2024-12-09 04:11:15.245852] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.245861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:46.892  [2024-12-09 04:11:15.245865] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.245874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.892  [2024-12-09 04:11:15.245879] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.245889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:46.892  [2024-12-09 04:11:15.245892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.245909] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.245909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.892  [2024-12-09 04:11:15.245924] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.245927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:46.892  [2024-12-09 04:11:15.245937] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.245941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.892  [2024-12-09 04:11:15.245950] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.245954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f4c90 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.245963] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.245976] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.245988] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.246001] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.246013] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.246021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:46.892  [2024-12-09 04:11:15.246025] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.246041] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.246044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.892  [2024-12-09 04:11:15.246054] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.246060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:46.892  [2024-12-09 04:11:15.246067] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.246075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.892  [2024-12-09 04:11:15.246080] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.246089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:46.892  [2024-12-09 04:11:15.246092] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.246104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.892  [2024-12-09 04:11:15.246106] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.246124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:46.892  [2024-12-09 04:11:15.246125] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.246140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.892  [2024-12-09 04:11:15.246154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b8130 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.246199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:46.892  [2024-12-09 04:11:15.246220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.892  [2024-12-09 04:11:15.246235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:46.892  [2024-12-09 04:11:15.246248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.892  [2024-12-09 04:11:15.246262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:46.892  [2024-12-09 04:11:15.246285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.892  [2024-12-09 04:11:15.246312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:46.892  [2024-12-09 04:11:15.246326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.892  [2024-12-09 04:11:15.246338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3e80 is same with the state(6) to be set
00:20:46.892  [2024-12-09 04:11:15.246613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.892  [2024-12-09 04:11:15.246639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.892  [2024-12-09 04:11:15.246667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.892  [2024-12-09 04:11:15.246683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.892  [2024-12-09 04:11:15.246701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.892  [2024-12-09 04:11:15.246715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.892  [2024-12-09 04:11:15.246731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.892  [2024-12-09 04:11:15.246745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.893  [2024-12-09 04:11:15.246762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.893  [2024-12-09 04:11:15.246775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.893  [2024-12-09 04:11:15.246792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.893  [2024-12-09 04:11:15.246806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.893  [2024-12-09 04:11:15.246827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.893  [2024-12-09 04:11:15.246842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.893  [2024-12-09 04:11:15.246858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.893  [2024-12-09 04:11:15.246872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.893  [2024-12-09 04:11:15.246888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.893  [2024-12-09 04:11:15.246902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.893  [2024-12-09 04:11:15.246918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.893  [2024-12-09 04:11:15.246932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.893  [2024-12-09 04:11:15.246947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.893  [2024-12-09 04:11:15.246961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.893  [2024-12-09 04:11:15.246977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.893  [2024-12-09 04:11:15.246992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.893  [2024-12-09 04:11:15.247008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.893  [2024-12-09 04:11:15.247022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.893  [2024-12-09 04:11:15.247038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.893  [2024-12-09 04:11:15.247052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.893  [2024-12-09 04:11:15.247068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.893  [2024-12-09 04:11:15.247081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.893  [2024-12-09 04:11:15.247097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.893  [2024-12-09 04:11:15.247111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.893  [2024-12-09 04:11:15.247127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.893  [2024-12-09 04:11:15.247141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.893  [2024-12-09 04:11:15.247157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.893  [2024-12-09 04:11:15.247171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.893  [2024-12-09 04:11:15.247187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.893  [2024-12-09 04:11:15.247205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.893  [2024-12-09 04:11:15.247221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.893  [2024-12-09 04:11:15.247236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.893  [2024-12-09 04:11:15.247251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.893  [2024-12-09 04:11:15.247266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.893  [2024-12-09 04:11:15.247298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.893  [2024-12-09 04:11:15.247316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.893  [2024-12-09 04:11:15.247331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.893  [2024-12-09 04:11:15.247346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.893  [2024-12-09 04:11:15.247361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.893  [2024-12-09 04:11:15.247375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.893  [2024-12-09 04:11:15.247383] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc790 is same with the state(6) to be set
00:20:46.893  [2024-12-09 04:11:15.247396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.893  [2024-12-09 04:11:15.247411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.893  [2024-12-09 04:11:15.247427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.893  [2024-12-09 04:11:15.247441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.893  [2024-12-09 04:11:15.247457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.893  [2024-12-09 04:11:15.247471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.893  [2024-12-09 04:11:15.247487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.893  [2024-12-09 04:11:15.247501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.893  [2024-12-09 04:11:15.247517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.893  [2024-12-09 04:11:15.247532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.893  [2024-12-09 04:11:15.247548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.893  [2024-12-09 04:11:15.247562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.893  [2024-12-09 04:11:15.247578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.893  [2024-12-09 04:11:15.247596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.893  [2024-12-09 04:11:15.247614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.893  [2024-12-09 04:11:15.247628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.893  [2024-12-09 04:11:15.247644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.893  [2024-12-09 04:11:15.247658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.893  [2024-12-09 04:11:15.247674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.893  [2024-12-09 04:11:15.247688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.893  [2024-12-09 04:11:15.247704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.893  [2024-12-09 04:11:15.247718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.893  [2024-12-09 04:11:15.247734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.893  [2024-12-09 04:11:15.247748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.893  [2024-12-09 04:11:15.247764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.893  [2024-12-09 04:11:15.247778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.893  [2024-12-09 04:11:15.247799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.893  [2024-12-09 04:11:15.247814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.893  [2024-12-09 04:11:15.247830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.893  [2024-12-09 04:11:15.247844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.893  [2024-12-09 04:11:15.247860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.893  [2024-12-09 04:11:15.247875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.893  [2024-12-09 04:11:15.247896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.893  [2024-12-09 04:11:15.247910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.893  [2024-12-09 04:11:15.247926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.893  [2024-12-09 04:11:15.247941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.893  [2024-12-09 04:11:15.247957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.893  [2024-12-09 04:11:15.247971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.894  [2024-12-09 04:11:15.247991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.894  [2024-12-09 04:11:15.248005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.894  [2024-12-09 04:11:15.248004] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bcc60 is same with the state(6) to be set
00:20:46.894  [2024-12-09 04:11:15.248022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.894  [2024-12-09 04:11:15.248033] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bcc60 is same with the state(6) to be set
00:20:46.894  [2024-12-09 04:11:15.248037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.894  [2024-12-09 04:11:15.248049] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bcc60 is same with the state(6) to be set
00:20:46.894  [2024-12-09 04:11:15.248054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.894  [2024-12-09 04:11:15.248064] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bcc60 is same with the state(6) to be set
00:20:46.894  [2024-12-09 04:11:15.248068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.894  [2024-12-09 04:11:15.248077] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bcc60 is same with the state(6) to be set
00:20:46.894  [2024-12-09 04:11:15.248085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.894  [2024-12-09 04:11:15.248090] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bcc60 is same with the state(6) to be set
00:20:46.894  [2024-12-09 04:11:15.248099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.894  [2024-12-09 04:11:15.248103] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bcc60 is same with the state(6) to be set
00:20:46.894  [2024-12-09 04:11:15.248115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.894  [2024-12-09 04:11:15.248130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.894  [2024-12-09 04:11:15.248146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.894  [2024-12-09 04:11:15.248160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.894  [2024-12-09 04:11:15.248176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.894  [2024-12-09 04:11:15.248191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.894  [2024-12-09 04:11:15.248208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.894  [2024-12-09 04:11:15.248222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.894  [2024-12-09 04:11:15.248237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.894  [2024-12-09 04:11:15.248251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.894  [2024-12-09 04:11:15.248268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.894  [2024-12-09 04:11:15.248295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.894  [2024-12-09 04:11:15.248329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.894  [2024-12-09 04:11:15.248343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.894  [2024-12-09 04:11:15.248359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.894  [2024-12-09 04:11:15.248373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.894  [2024-12-09 04:11:15.248389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.894  [2024-12-09 04:11:15.248403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.894  [2024-12-09 04:11:15.248424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.894  [2024-12-09 04:11:15.248439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.894  [2024-12-09 04:11:15.248454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.894  [2024-12-09 04:11:15.248469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.894  [2024-12-09 04:11:15.248485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.894  [2024-12-09 04:11:15.248499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.894  [2024-12-09 04:11:15.248515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.894  [2024-12-09 04:11:15.248539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.894  [2024-12-09 04:11:15.248572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.894  [2024-12-09 04:11:15.248585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.894  [2024-12-09 04:11:15.248601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.894  [2024-12-09 04:11:15.248614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.894  [2024-12-09 04:11:15.248629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.894  [2024-12-09 04:11:15.248642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.894  [2024-12-09 04:11:15.248658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.894  [2024-12-09 04:11:15.248671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.894  [2024-12-09 04:11:15.248714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:46.894  [2024-12-09 04:11:15.248816] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.894  [2024-12-09 04:11:15.248852] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.894  [2024-12-09 04:11:15.248883] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.894  [2024-12-09 04:11:15.248895] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.894  [2024-12-09 04:11:15.248908] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.894  [2024-12-09 04:11:15.248920] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.894  [2024-12-09 04:11:15.248932] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.894  [2024-12-09 04:11:15.248945] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.894  [2024-12-09 04:11:15.248958] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.894  [2024-12-09 04:11:15.248970] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.894  [2024-12-09 04:11:15.248982] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.894  [2024-12-09 04:11:15.248994] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.894  [2024-12-09 04:11:15.249006] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.894  [2024-12-09 04:11:15.249018] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.894  [2024-12-09 04:11:15.249031] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.894  [2024-12-09 04:11:15.249043] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.894  [2024-12-09 04:11:15.249062] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.894  [2024-12-09 04:11:15.249076] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.894  [2024-12-09 04:11:15.249088] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.894  [2024-12-09 04:11:15.249100] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.894  [2024-12-09 04:11:15.249112] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.894  [2024-12-09 04:11:15.249124] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.894  [2024-12-09 04:11:15.249135] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.894  [2024-12-09 04:11:15.249147] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.894  [2024-12-09 04:11:15.249160] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.894  [2024-12-09 04:11:15.249171] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.894  [2024-12-09 04:11:15.249183] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.894  [2024-12-09 04:11:15.249195] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.895  [2024-12-09 04:11:15.249212] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.895  [2024-12-09 04:11:15.249226] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.895  [2024-12-09 04:11:15.249238] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.895  [2024-12-09 04:11:15.249250] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.895  [2024-12-09 04:11:15.249262] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.895  [2024-12-09 04:11:15.249290] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.895  [2024-12-09 04:11:15.249305] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.895  [2024-12-09 04:11:15.249320] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.895  [2024-12-09 04:11:15.249333] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.895  [2024-12-09 04:11:15.249345] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.895  [2024-12-09 04:11:15.249357] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.895  [2024-12-09 04:11:15.249369] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.895  [2024-12-09 04:11:15.249381] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.895  [2024-12-09 04:11:15.249393] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.895  [2024-12-09 04:11:15.249405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.895  [2024-12-09 04:11:15.249417] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.895  [2024-12-09 04:11:15.249428] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.895  [2024-12-09 04:11:15.249440] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.895  [2024-12-09 04:11:15.249453] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.895  [2024-12-09 04:11:15.249465] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.895  [2024-12-09 04:11:15.249477] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.895  [2024-12-09 04:11:15.249489] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.895  [2024-12-09 04:11:15.249501] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.895  [2024-12-09 04:11:15.249513] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.895  [2024-12-09 04:11:15.249525] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.895  [2024-12-09 04:11:15.249537] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.895  [2024-12-09 04:11:15.249549] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.895  [2024-12-09 04:11:15.249576] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.895  [2024-12-09 04:11:15.249588] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.895  [2024-12-09 04:11:15.249594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.895  [2024-12-09 04:11:15.249601] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.895  [2024-12-09 04:11:15.249617] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.895  [2024-12-09 04:11:15.249621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.895  [2024-12-09 04:11:15.249629] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.895  [2024-12-09 04:11:15.249642] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.895  [2024-12-09 04:11:15.249643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.895  [2024-12-09 04:11:15.249654] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.895  [2024-12-09 04:11:15.249660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.895  [2024-12-09 04:11:15.249666] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set
00:20:46.895  [2024-12-09 04:11:15.249676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.895  [2024-12-09 04:11:15.249691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.895  [2024-12-09 04:11:15.249708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.895  [2024-12-09 04:11:15.249731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.895  [2024-12-09 04:11:15.249757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.895  [2024-12-09 04:11:15.249773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.895  [2024-12-09 04:11:15.249795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.895  [2024-12-09 04:11:15.249811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.895  [2024-12-09 04:11:15.249826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.895  [2024-12-09 04:11:15.249841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.895  [2024-12-09 04:11:15.249856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.895  [2024-12-09 04:11:15.249870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.895  [2024-12-09 04:11:15.249886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.895  [2024-12-09 04:11:15.249900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.895  [2024-12-09 04:11:15.249920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.895  [2024-12-09 04:11:15.249935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.895  [2024-12-09 04:11:15.249951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.895  [2024-12-09 04:11:15.249966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.895  [2024-12-09 04:11:15.249982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.895  [2024-12-09 04:11:15.249996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.895  [2024-12-09 04:11:15.250012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.895  [2024-12-09 04:11:15.250026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.895  [2024-12-09 04:11:15.250043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.895  [2024-12-09 04:11:15.250057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.895  [2024-12-09 04:11:15.250072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.895  [2024-12-09 04:11:15.250086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.895  [2024-12-09 04:11:15.250102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.895  [2024-12-09 04:11:15.250116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.895  [2024-12-09 04:11:15.250132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.895  [2024-12-09 04:11:15.250146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.895  [2024-12-09 04:11:15.250162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.895  [2024-12-09 04:11:15.250176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.895  [2024-12-09 04:11:15.250191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.895  [2024-12-09 04:11:15.250205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.895  [2024-12-09 04:11:15.250221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.895  [2024-12-09 04:11:15.250235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.895  [2024-12-09 04:11:15.250256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.895  [2024-12-09 04:11:15.250280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.895  [2024-12-09 04:11:15.250305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.895  [2024-12-09 04:11:15.250325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.895  [2024-12-09 04:11:15.250342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.895  [2024-12-09 04:11:15.250357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.896  [2024-12-09 04:11:15.250372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.896  [2024-12-09 04:11:15.250387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.896  [2024-12-09 04:11:15.250403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.896  [2024-12-09 04:11:15.250417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.896  [2024-12-09 04:11:15.250432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.896  [2024-12-09 04:11:15.250446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.896  [2024-12-09 04:11:15.250461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.896  [2024-12-09 04:11:15.250476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.896  [2024-12-09 04:11:15.250491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.896  [2024-12-09 04:11:15.250505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.896  [2024-12-09 04:11:15.250520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.896  [2024-12-09 04:11:15.250537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.896  [2024-12-09 04:11:15.250553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.896  [2024-12-09 04:11:15.250567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.896  [2024-12-09 04:11:15.250582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.896  [2024-12-09 04:11:15.250596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.896  [2024-12-09 04:11:15.250612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.896  [2024-12-09 04:11:15.250626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.896  [2024-12-09 04:11:15.250642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.896  [2024-12-09 04:11:15.250656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.896  [2024-12-09 04:11:15.250672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.896  [2024-12-09 04:11:15.250686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.896  [2024-12-09 04:11:15.250705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.896  [2024-12-09 04:11:15.250720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.896  [2024-12-09 04:11:15.250736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.896  [2024-12-09 04:11:15.250750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.896  [2024-12-09 04:11:15.250771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.896  [2024-12-09 04:11:15.250786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.896  [2024-12-09 04:11:15.250807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.896  [2024-12-09 04:11:15.250837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.896  [2024-12-09 04:11:15.250853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.896  [2024-12-09 04:11:15.250867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.896  [2024-12-09 04:11:15.250882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.896  [2024-12-09 04:11:15.250896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.896  [2024-12-09 04:11:15.250911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.896  [2024-12-09 04:11:15.250925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.896  [2024-12-09 04:11:15.250940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.896  [2024-12-09 04:11:15.250953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.896  [2024-12-09 04:11:15.250968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.896  [2024-12-09 04:11:15.250982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.896  [2024-12-09 04:11:15.250997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.896  [2024-12-09 04:11:15.251010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.896  [2024-12-09 04:11:15.251025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.896  [2024-12-09 04:11:15.251039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.896  [2024-12-09 04:11:15.251054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.896  [2024-12-09 04:11:15.251067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.896  [2024-12-09 04:11:15.251082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.896  [2024-12-09 04:11:15.251099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.896  [2024-12-09 04:11:15.251115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.896  [2024-12-09 04:11:15.277154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.896  [2024-12-09 04:11:15.277263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.896  [2024-12-09 04:11:15.277291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.896  [2024-12-09 04:11:15.277310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.896  [2024-12-09 04:11:15.277325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.896  [2024-12-09 04:11:15.277342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.896  [2024-12-09 04:11:15.277357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.896  [2024-12-09 04:11:15.277373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.896  [2024-12-09 04:11:15.277387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.896  [2024-12-09 04:11:15.277405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.896  [2024-12-09 04:11:15.277420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.896  [2024-12-09 04:11:15.277439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.896  [2024-12-09 04:11:15.277453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.896  [2024-12-09 04:11:15.277470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.896  [2024-12-09 04:11:15.277485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.896  [2024-12-09 04:11:15.277501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.896  [2024-12-09 04:11:15.277516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.896  [2024-12-09 04:11:15.277532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.896  [2024-12-09 04:11:15.277547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.896  [2024-12-09 04:11:15.277564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.897  [2024-12-09 04:11:15.277578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.897  [2024-12-09 04:11:15.277596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.897  [2024-12-09 04:11:15.277611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.897  [2024-12-09 04:11:15.277644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.897  [2024-12-09 04:11:15.277660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.897  [2024-12-09 04:11:15.277677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.897  [2024-12-09 04:11:15.277691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.897  [2024-12-09 04:11:15.277707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.897  [2024-12-09 04:11:15.277721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.897  [2024-12-09 04:11:15.277738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.897  [2024-12-09 04:11:15.277753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.897  [2024-12-09 04:11:15.277768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.897  [2024-12-09 04:11:15.277783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.897  [2024-12-09 04:11:15.277861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:46.897  [2024-12-09 04:11:15.278348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:46.897  [2024-12-09 04:11:15.278373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.897  [2024-12-09 04:11:15.278390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:46.897  [2024-12-09 04:11:15.278405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.897  [2024-12-09 04:11:15.278421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:46.897  [2024-12-09 04:11:15.278434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.897  [2024-12-09 04:11:15.278449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:46.897  [2024-12-09 04:11:15.278463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.897  [2024-12-09 04:11:15.278477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181bda0 is same with the state(6) to be set
00:20:46.897  [2024-12-09 04:11:15.278508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1812950 (9): Bad file descriptor
00:20:46.897  [2024-12-09 04:11:15.278565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:46.897  [2024-12-09 04:11:15.278586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.897  [2024-12-09 04:11:15.278609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:46.897  [2024-12-09 04:11:15.278634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.897  [2024-12-09 04:11:15.278658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:46.897  [2024-12-09 04:11:15.278673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.897  [2024-12-09 04:11:15.278688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:46.897  [2024-12-09 04:11:15.278702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.897  [2024-12-09 04:11:15.278716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810520 is same with the state(6) to be set
00:20:46.897  [2024-12-09 04:11:15.278749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c4310 (9): Bad file descriptor
00:20:46.897  [2024-12-09 04:11:15.278778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c1c80 (9): Bad file descriptor
00:20:46.897  [2024-12-09 04:11:15.278807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f4c90 (9): Bad file descriptor
00:20:46.897  [2024-12-09 04:11:15.278859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:46.897  [2024-12-09 04:11:15.278880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.897  [2024-12-09 04:11:15.278896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:46.897  [2024-12-09 04:11:15.278910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.897  [2024-12-09 04:11:15.278925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:46.897  [2024-12-09 04:11:15.278940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.897  [2024-12-09 04:11:15.278955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:46.897  [2024-12-09 04:11:15.278969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.897  [2024-12-09 04:11:15.278983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0e60 is same with the state(6) to be set
00:20:46.897  [2024-12-09 04:11:15.279035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:46.897  [2024-12-09 04:11:15.279058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.897  [2024-12-09 04:11:15.279073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:46.897  [2024-12-09 04:11:15.279088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.897  [2024-12-09 04:11:15.279103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:46.897  [2024-12-09 04:11:15.279117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.897  [2024-12-09 04:11:15.279132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:46.897  [2024-12-09 04:11:15.279146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.897  [2024-12-09 04:11:15.279159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132c110 is same with the state(6) to be set
00:20:46.897  [2024-12-09 04:11:15.279199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b8130 (9): Bad file descriptor
00:20:46.897  [2024-12-09 04:11:15.279233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c3e80 (9): Bad file descriptor
00:20:46.897  [2024-12-09 04:11:15.297798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:20:46.897  [2024-12-09 04:11:15.298009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x181bda0 (9): Bad file descriptor
00:20:46.897  [2024-12-09 04:11:15.298057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1810520 (9): Bad file descriptor
00:20:46.897  [2024-12-09 04:11:15.298099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c0e60 (9): Bad file descriptor
00:20:46.897  [2024-12-09 04:11:15.298139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x132c110 (9): Bad file descriptor
00:20:46.897  [2024-12-09 04:11:15.299513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:20:46.897  task offset: 27264 on job bdev=Nvme1n1 fails
00:20:46.897       1741.00 IOPS,   108.81 MiB/s
00:20:46.897  [2024-12-09 04:11:15.299770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:46.897  [2024-12-09 04:11:15.299808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c4310 with addr=10.0.0.2, port=4420
00:20:46.897  [2024-12-09 04:11:15.299828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4310 is same with the state(6) to be set
00:20:46.897  [2024-12-09 04:11:15.300261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.897  [2024-12-09 04:11:15.300301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.897  [2024-12-09 04:11:15.300328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.897  [2024-12-09 04:11:15.300345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.897  [2024-12-09 04:11:15.300362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.897  [2024-12-09 04:11:15.300387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.897  [2024-12-09 04:11:15.300403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.897  [2024-12-09 04:11:15.300418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.897  [2024-12-09 04:11:15.300445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.897  [2024-12-09 04:11:15.300460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.897  [2024-12-09 04:11:15.300477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.897  [2024-12-09 04:11:15.300492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.897  [2024-12-09 04:11:15.300509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.897  [2024-12-09 04:11:15.300523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.897  [2024-12-09 04:11:15.300540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.897  [2024-12-09 04:11:15.300571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.898  [2024-12-09 04:11:15.300589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.898  [2024-12-09 04:11:15.300604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.898  [2024-12-09 04:11:15.300620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.898  [2024-12-09 04:11:15.300635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.898  [2024-12-09 04:11:15.300652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.898  [2024-12-09 04:11:15.300666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.898  [2024-12-09 04:11:15.300682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.898  [2024-12-09 04:11:15.300697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.898  [2024-12-09 04:11:15.300714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.898  [2024-12-09 04:11:15.300728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.898  [2024-12-09 04:11:15.300745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.898  [2024-12-09 04:11:15.300760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.898  [2024-12-09 04:11:15.300777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.898  [2024-12-09 04:11:15.300791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.898  [2024-12-09 04:11:15.300808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.898  [2024-12-09 04:11:15.300823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.898  [2024-12-09 04:11:15.300840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.898  [2024-12-09 04:11:15.300854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.898  [2024-12-09 04:11:15.300871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.898  [2024-12-09 04:11:15.300886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.898  [2024-12-09 04:11:15.300904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.898  [2024-12-09 04:11:15.300919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.898  [2024-12-09 04:11:15.300936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.898  [2024-12-09 04:11:15.300951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.898  [2024-12-09 04:11:15.300972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.898  [2024-12-09 04:11:15.300987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.898  [2024-12-09 04:11:15.301004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.898  [2024-12-09 04:11:15.301020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.898  [2024-12-09 04:11:15.301036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.898  [2024-12-09 04:11:15.301051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.898  [2024-12-09 04:11:15.301068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.898  [2024-12-09 04:11:15.301082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.898  [2024-12-09 04:11:15.301099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.898  [2024-12-09 04:11:15.301113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.898  [2024-12-09 04:11:15.301130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.898  [2024-12-09 04:11:15.301144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.898  [2024-12-09 04:11:15.301162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.898  [2024-12-09 04:11:15.301176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.898  [2024-12-09 04:11:15.301192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.898  [2024-12-09 04:11:15.301207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.898  [2024-12-09 04:11:15.301224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.898  [2024-12-09 04:11:15.301239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.898  [2024-12-09 04:11:15.301256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.898  [2024-12-09 04:11:15.301278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.898  [2024-12-09 04:11:15.301297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.898  [2024-12-09 04:11:15.301312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.898  [2024-12-09 04:11:15.301330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.898  [2024-12-09 04:11:15.301345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.898  [2024-12-09 04:11:15.301361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.898  [2024-12-09 04:11:15.301380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.898  [2024-12-09 04:11:15.301397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.898  [2024-12-09 04:11:15.301412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.898  [2024-12-09 04:11:15.301429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.898  [2024-12-09 04:11:15.301444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.898  [2024-12-09 04:11:15.301460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.898  [2024-12-09 04:11:15.301475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.898  [2024-12-09 04:11:15.301493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.898  [2024-12-09 04:11:15.301508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.898  [2024-12-09 04:11:15.301525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.898  [2024-12-09 04:11:15.301540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.898  [2024-12-09 04:11:15.301556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.898  [2024-12-09 04:11:15.301571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.898  [2024-12-09 04:11:15.301588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.898  [2024-12-09 04:11:15.301602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.898  [2024-12-09 04:11:15.301619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.898  [2024-12-09 04:11:15.301633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.898  [2024-12-09 04:11:15.301649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.898  [2024-12-09 04:11:15.301663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.898  [2024-12-09 04:11:15.301681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.898  [2024-12-09 04:11:15.301695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.898  [2024-12-09 04:11:15.301711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.898  [2024-12-09 04:11:15.301726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.898  [2024-12-09 04:11:15.301743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.898  [2024-12-09 04:11:15.301758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.898  [2024-12-09 04:11:15.301777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.898  [2024-12-09 04:11:15.301793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.899  [2024-12-09 04:11:15.301809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.899  [2024-12-09 04:11:15.301823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.899  [2024-12-09 04:11:15.301840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.899  [2024-12-09 04:11:15.301855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.899  [2024-12-09 04:11:15.301872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.899  [2024-12-09 04:11:15.301886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.899  [2024-12-09 04:11:15.301902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.899  [2024-12-09 04:11:15.301917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.899  [2024-12-09 04:11:15.301934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.899  [2024-12-09 04:11:15.301948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.899  [2024-12-09 04:11:15.301964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.899  [2024-12-09 04:11:15.301979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.899  [2024-12-09 04:11:15.301996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.899  [2024-12-09 04:11:15.302010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.899  [2024-12-09 04:11:15.302026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.899  [2024-12-09 04:11:15.302041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.899  [2024-12-09 04:11:15.302057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.899  [2024-12-09 04:11:15.302071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.899  [2024-12-09 04:11:15.302087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.899  [2024-12-09 04:11:15.302103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.899  [2024-12-09 04:11:15.302119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.899  [2024-12-09 04:11:15.302133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.899  [2024-12-09 04:11:15.302150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.899  [2024-12-09 04:11:15.302169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.899  [2024-12-09 04:11:15.302187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.899  [2024-12-09 04:11:15.302202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.899  [2024-12-09 04:11:15.302218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.899  [2024-12-09 04:11:15.302234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.899  [2024-12-09 04:11:15.302250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.899  [2024-12-09 04:11:15.302265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.899  [2024-12-09 04:11:15.302290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.899  [2024-12-09 04:11:15.302305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.899  [2024-12-09 04:11:15.302322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.899  [2024-12-09 04:11:15.302336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.899  [2024-12-09 04:11:15.302363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.899  [2024-12-09 04:11:15.302378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.899  [2024-12-09 04:11:15.303718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.899  [2024-12-09 04:11:15.303750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.899  [2024-12-09 04:11:15.303775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.899  [2024-12-09 04:11:15.303792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.899  [2024-12-09 04:11:15.303810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.899  [2024-12-09 04:11:15.303825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.899  [2024-12-09 04:11:15.303842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.899  [2024-12-09 04:11:15.303856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.899  [2024-12-09 04:11:15.303874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.899  [2024-12-09 04:11:15.303889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.899  [2024-12-09 04:11:15.303905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.899  [2024-12-09 04:11:15.303920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.899  [2024-12-09 04:11:15.303942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.899  [2024-12-09 04:11:15.303957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.899  [2024-12-09 04:11:15.303974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.899  [2024-12-09 04:11:15.303989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.899  [2024-12-09 04:11:15.304005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.899  [2024-12-09 04:11:15.304020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.899  [2024-12-09 04:11:15.304036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.899  [2024-12-09 04:11:15.304051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.899  [2024-12-09 04:11:15.304068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.899  [2024-12-09 04:11:15.304082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.899  [2024-12-09 04:11:15.304098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.899  [2024-12-09 04:11:15.304113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.899  [2024-12-09 04:11:15.304130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.899  [2024-12-09 04:11:15.304145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.899  [2024-12-09 04:11:15.304161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.899  [2024-12-09 04:11:15.304177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.899  [2024-12-09 04:11:15.304194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.899  [2024-12-09 04:11:15.304208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.899  [2024-12-09 04:11:15.304225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.899  [2024-12-09 04:11:15.304240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.899  [2024-12-09 04:11:15.304256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.899  [2024-12-09 04:11:15.304278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.899  [2024-12-09 04:11:15.304298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.899  [2024-12-09 04:11:15.304313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.899  [2024-12-09 04:11:15.304330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.899  [2024-12-09 04:11:15.304349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.899  [2024-12-09 04:11:15.304366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.899  [2024-12-09 04:11:15.304381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.899  [2024-12-09 04:11:15.304398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.899  [2024-12-09 04:11:15.304412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.899  [2024-12-09 04:11:15.304429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.900  [2024-12-09 04:11:15.304444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.900  [2024-12-09 04:11:15.304460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.900  [2024-12-09 04:11:15.304475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.900  [2024-12-09 04:11:15.304492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.900  [2024-12-09 04:11:15.304506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.900  [2024-12-09 04:11:15.304522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.900  [2024-12-09 04:11:15.304537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.900  [2024-12-09 04:11:15.304553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.900  [2024-12-09 04:11:15.304568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.900  [2024-12-09 04:11:15.304585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.900  [2024-12-09 04:11:15.304599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.900  [2024-12-09 04:11:15.304616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.900  [2024-12-09 04:11:15.304631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.900  [2024-12-09 04:11:15.304649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.900  [2024-12-09 04:11:15.304663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.900  [2024-12-09 04:11:15.304680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.900  [2024-12-09 04:11:15.304695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.900  [2024-12-09 04:11:15.304711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.900  [2024-12-09 04:11:15.304726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.900  [2024-12-09 04:11:15.304746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.900  [2024-12-09 04:11:15.304762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.900  [2024-12-09 04:11:15.304779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.900  [2024-12-09 04:11:15.304793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.900  [2024-12-09 04:11:15.304810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.900  [2024-12-09 04:11:15.304825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.900  [2024-12-09 04:11:15.304842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.900  [2024-12-09 04:11:15.304856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.900  [2024-12-09 04:11:15.304873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.900  [2024-12-09 04:11:15.304887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.900  [2024-12-09 04:11:15.304906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.900  [2024-12-09 04:11:15.304921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.900  [2024-12-09 04:11:15.304938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.900  [2024-12-09 04:11:15.304952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.900  [2024-12-09 04:11:15.304969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.900  [2024-12-09 04:11:15.304984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.900  [2024-12-09 04:11:15.305000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.900  [2024-12-09 04:11:15.305015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.900  [2024-12-09 04:11:15.305032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.900  [2024-12-09 04:11:15.305046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.900  [2024-12-09 04:11:15.305063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.900  [2024-12-09 04:11:15.305079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.900  [2024-12-09 04:11:15.305095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.900  [2024-12-09 04:11:15.305110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.900  [2024-12-09 04:11:15.305126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.900  [2024-12-09 04:11:15.305145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.900  [2024-12-09 04:11:15.305162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.900  [2024-12-09 04:11:15.305177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.900  [2024-12-09 04:11:15.305193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.900  [2024-12-09 04:11:15.305207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.900  [2024-12-09 04:11:15.305224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.900  [2024-12-09 04:11:15.305239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.900  [2024-12-09 04:11:15.305255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.900  [2024-12-09 04:11:15.305277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.900  [2024-12-09 04:11:15.305296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.900  [2024-12-09 04:11:15.305311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.900  [2024-12-09 04:11:15.305328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.900  [2024-12-09 04:11:15.305343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.900  [2024-12-09 04:11:15.305359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.900  [2024-12-09 04:11:15.305373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.900  [2024-12-09 04:11:15.305390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.900  [2024-12-09 04:11:15.305404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.900  [2024-12-09 04:11:15.305422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.900  [2024-12-09 04:11:15.305436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.900  [2024-12-09 04:11:15.305453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.900  [2024-12-09 04:11:15.305468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.900  [2024-12-09 04:11:15.305485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.900  [2024-12-09 04:11:15.305499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.900  [2024-12-09 04:11:15.305515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.900  [2024-12-09 04:11:15.305530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.900  [2024-12-09 04:11:15.305551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.900  [2024-12-09 04:11:15.305567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.900  [2024-12-09 04:11:15.305584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.900  [2024-12-09 04:11:15.305598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.901  [2024-12-09 04:11:15.305616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.901  [2024-12-09 04:11:15.305630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.901  [2024-12-09 04:11:15.305647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.901  [2024-12-09 04:11:15.305662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.901  [2024-12-09 04:11:15.305679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.901  [2024-12-09 04:11:15.305693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.901  [2024-12-09 04:11:15.305710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.901  [2024-12-09 04:11:15.305724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.901  [2024-12-09 04:11:15.305741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.901  [2024-12-09 04:11:15.305755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.901  [2024-12-09 04:11:15.305772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.901  [2024-12-09 04:11:15.305786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.901  [2024-12-09 04:11:15.307153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.901  [2024-12-09 04:11:15.307179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.901  [2024-12-09 04:11:15.307203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.901  [2024-12-09 04:11:15.307219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.901  [2024-12-09 04:11:15.307236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.901  [2024-12-09 04:11:15.307252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.901  [2024-12-09 04:11:15.307268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.901  [2024-12-09 04:11:15.307292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.901  [2024-12-09 04:11:15.307311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.901  [2024-12-09 04:11:15.307331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.901  [2024-12-09 04:11:15.307348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c7f00 is same with the state(6) to be set
00:20:46.901  [2024-12-09 04:11:15.307836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.901  [2024-12-09 04:11:15.307861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.901  [2024-12-09 04:11:15.307883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.901  [2024-12-09 04:11:15.307907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.901  [2024-12-09 04:11:15.307925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.901  [2024-12-09 04:11:15.307939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.901  [2024-12-09 04:11:15.307955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.901  [2024-12-09 04:11:15.307970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.901  [2024-12-09 04:11:15.307987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.901  [2024-12-09 04:11:15.308001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.901  [2024-12-09 04:11:15.308017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.901  [2024-12-09 04:11:15.308032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.901  [2024-12-09 04:11:15.308048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.901  [2024-12-09 04:11:15.308063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.901  [2024-12-09 04:11:15.308080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.901  [2024-12-09 04:11:15.308094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.901  [2024-12-09 04:11:15.308111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.901  [2024-12-09 04:11:15.308126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.901  [2024-12-09 04:11:15.308142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.901  [2024-12-09 04:11:15.308157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.901  [2024-12-09 04:11:15.308174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.901  [2024-12-09 04:11:15.308187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.901  [2024-12-09 04:11:15.308204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.901  [2024-12-09 04:11:15.308224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.901  [2024-12-09 04:11:15.308242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.901  [2024-12-09 04:11:15.308257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.901  [2024-12-09 04:11:15.308282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.901  [2024-12-09 04:11:15.308300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.901  [2024-12-09 04:11:15.308318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.901  [2024-12-09 04:11:15.308332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.901  [2024-12-09 04:11:15.308349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.901  [2024-12-09 04:11:15.308363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.901  [2024-12-09 04:11:15.308379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.901  [2024-12-09 04:11:15.308394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.901  [2024-12-09 04:11:15.308410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.901  [2024-12-09 04:11:15.308426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.901  [2024-12-09 04:11:15.308442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.901  [2024-12-09 04:11:15.308457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.901  [2024-12-09 04:11:15.308473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.901  [2024-12-09 04:11:15.308488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.901  [2024-12-09 04:11:15.308504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.901  [2024-12-09 04:11:15.308518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.901  [2024-12-09 04:11:15.308535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.901  [2024-12-09 04:11:15.308550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.901  [2024-12-09 04:11:15.308566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.901  [2024-12-09 04:11:15.308581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.901  [2024-12-09 04:11:15.308598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.901  [2024-12-09 04:11:15.308612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.901  [2024-12-09 04:11:15.308628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.901  [2024-12-09 04:11:15.308646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.901  [2024-12-09 04:11:15.308664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.901  [2024-12-09 04:11:15.308679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.901  [2024-12-09 04:11:15.308695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.901  [2024-12-09 04:11:15.308709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.901  [2024-12-09 04:11:15.308726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.902  [2024-12-09 04:11:15.308740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.902  [2024-12-09 04:11:15.308756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.902  [2024-12-09 04:11:15.308771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.902  [2024-12-09 04:11:15.308788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.902  [2024-12-09 04:11:15.308802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.902  [2024-12-09 04:11:15.308819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.902  [2024-12-09 04:11:15.308833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.902  [2024-12-09 04:11:15.308850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.902  [2024-12-09 04:11:15.308864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.902  [2024-12-09 04:11:15.308880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.902  [2024-12-09 04:11:15.308896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.902  [2024-12-09 04:11:15.308912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.902  [2024-12-09 04:11:15.308926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.902  [2024-12-09 04:11:15.308943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.902  [2024-12-09 04:11:15.308957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.902  [2024-12-09 04:11:15.308974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.902  [2024-12-09 04:11:15.308988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.902  [2024-12-09 04:11:15.309005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.902  [2024-12-09 04:11:15.309019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.902  [2024-12-09 04:11:15.309043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.902  [2024-12-09 04:11:15.309060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.902  [2024-12-09 04:11:15.309076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.902  [2024-12-09 04:11:15.309091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.902  [2024-12-09 04:11:15.309107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.902  [2024-12-09 04:11:15.309122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.902  [2024-12-09 04:11:15.309138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.902  [2024-12-09 04:11:15.309153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.902  [2024-12-09 04:11:15.309170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.902  [2024-12-09 04:11:15.309184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.902  [2024-12-09 04:11:15.309201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.902  [2024-12-09 04:11:15.309216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.902  [2024-12-09 04:11:15.309233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.902  [2024-12-09 04:11:15.309247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.902  [2024-12-09 04:11:15.309264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.902  [2024-12-09 04:11:15.309286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.902  [2024-12-09 04:11:15.309309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.902  [2024-12-09 04:11:15.309325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.902  [2024-12-09 04:11:15.309341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.902  [2024-12-09 04:11:15.309355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.902  [2024-12-09 04:11:15.309371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.902  [2024-12-09 04:11:15.309385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.902  [2024-12-09 04:11:15.309402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.902  [2024-12-09 04:11:15.309417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.902  [2024-12-09 04:11:15.309432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.902  [2024-12-09 04:11:15.309451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.902  [2024-12-09 04:11:15.309468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.902  [2024-12-09 04:11:15.309483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.902  [2024-12-09 04:11:15.309499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.902  [2024-12-09 04:11:15.309513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.902  [2024-12-09 04:11:15.309529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.902  [2024-12-09 04:11:15.309543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.902  [2024-12-09 04:11:15.309559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.902  [2024-12-09 04:11:15.309574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.902  [2024-12-09 04:11:15.309590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.902  [2024-12-09 04:11:15.309604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.902  [2024-12-09 04:11:15.309620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.902  [2024-12-09 04:11:15.309634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.902  [2024-12-09 04:11:15.309650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.902  [2024-12-09 04:11:15.309664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.902  [2024-12-09 04:11:15.309680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.902  [2024-12-09 04:11:15.309695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.902  [2024-12-09 04:11:15.309711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.902  [2024-12-09 04:11:15.309725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.902  [2024-12-09 04:11:15.309742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.902  [2024-12-09 04:11:15.309757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.902  [2024-12-09 04:11:15.309772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.902  [2024-12-09 04:11:15.309787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.902  [2024-12-09 04:11:15.309808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.903  [2024-12-09 04:11:15.309823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.903  [2024-12-09 04:11:15.309843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.903  [2024-12-09 04:11:15.309863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.903  [2024-12-09 04:11:15.309879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.903  [2024-12-09 04:11:15.309893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.903  [2024-12-09 04:11:15.311137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:20:46.903  [2024-12-09 04:11:15.311171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:20:46.903  [2024-12-09 04:11:15.311344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:46.903  [2024-12-09 04:11:15.311374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x181bda0 with addr=10.0.0.2, port=4420
00:20:46.903  [2024-12-09 04:11:15.311391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181bda0 is same with the state(6) to be set
00:20:46.903  [2024-12-09 04:11:15.311418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c4310 (9): Bad file descriptor
00:20:46.903  [2024-12-09 04:11:15.311468] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:20:46.903  [2024-12-09 04:11:15.311504] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:20:46.903  [2024-12-09 04:11:15.311535] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:20:46.903  [2024-12-09 04:11:15.311556] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:20:46.903  [2024-12-09 04:11:15.311576] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:20:46.903  [2024-12-09 04:11:15.311595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x181bda0 (9): Bad file descriptor
00:20:46.903  [2024-12-09 04:11:15.311666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.903  [2024-12-09 04:11:15.311689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.903  [2024-12-09 04:11:15.311710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.903  [2024-12-09 04:11:15.311726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.903  [2024-12-09 04:11:15.311742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.903  [2024-12-09 04:11:15.311757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.903  [2024-12-09 04:11:15.311773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.903  [2024-12-09 04:11:15.311788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.903  [2024-12-09 04:11:15.311805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.903  [2024-12-09 04:11:15.311818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.903  [2024-12-09 04:11:15.311841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.903  [2024-12-09 04:11:15.311857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.903  [2024-12-09 04:11:15.311873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.903  [2024-12-09 04:11:15.311888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.903  [2024-12-09 04:11:15.311906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.903  [2024-12-09 04:11:15.311921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.903  [2024-12-09 04:11:15.311938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.903  [2024-12-09 04:11:15.311959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.903  [2024-12-09 04:11:15.311976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.903  [2024-12-09 04:11:15.311991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.903  [2024-12-09 04:11:15.312008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.903  [2024-12-09 04:11:15.312023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.903  [2024-12-09 04:11:15.312039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.903  [2024-12-09 04:11:15.312054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.903  [2024-12-09 04:11:15.312070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.903  [2024-12-09 04:11:15.312085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.903  [2024-12-09 04:11:15.312102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.903  [2024-12-09 04:11:15.312116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.903  [2024-12-09 04:11:15.312132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.903  [2024-12-09 04:11:15.312147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.903  [2024-12-09 04:11:15.312163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.903  [2024-12-09 04:11:15.312177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.903  [2024-12-09 04:11:15.312194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.903  [2024-12-09 04:11:15.312208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.903  [2024-12-09 04:11:15.312225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.903  [2024-12-09 04:11:15.312242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.903  [2024-12-09 04:11:15.312260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.903  [2024-12-09 04:11:15.312291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.903  [2024-12-09 04:11:15.312310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.903  [2024-12-09 04:11:15.312325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.903  [2024-12-09 04:11:15.312342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.903  [2024-12-09 04:11:15.312357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.903  [2024-12-09 04:11:15.312373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.903  [2024-12-09 04:11:15.312387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.903  [2024-12-09 04:11:15.312404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.903  [2024-12-09 04:11:15.312419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.903  [2024-12-09 04:11:15.312436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.903  [2024-12-09 04:11:15.312451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.903  [2024-12-09 04:11:15.312468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.903  [2024-12-09 04:11:15.312487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.903  [2024-12-09 04:11:15.312504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.903  [2024-12-09 04:11:15.312520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.903  [2024-12-09 04:11:15.312537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.903  [2024-12-09 04:11:15.312551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.903  [2024-12-09 04:11:15.312567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.903  [2024-12-09 04:11:15.312582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.903  [2024-12-09 04:11:15.312599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.903  [2024-12-09 04:11:15.312613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.903  [2024-12-09 04:11:15.312630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.903  [2024-12-09 04:11:15.312644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.903  [2024-12-09 04:11:15.312665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.903  [2024-12-09 04:11:15.312681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.903  [2024-12-09 04:11:15.312698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.904  [2024-12-09 04:11:15.312712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.904  [2024-12-09 04:11:15.312728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.904  [2024-12-09 04:11:15.312743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.904  [2024-12-09 04:11:15.312760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.904  [2024-12-09 04:11:15.312774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.904  [2024-12-09 04:11:15.312790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.904  [2024-12-09 04:11:15.312804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.904  [2024-12-09 04:11:15.312820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.904  [2024-12-09 04:11:15.312835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.904  [2024-12-09 04:11:15.312851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.904  [2024-12-09 04:11:15.312866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.904  [2024-12-09 04:11:15.312882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.904  [2024-12-09 04:11:15.312896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.904  [2024-12-09 04:11:15.325382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.904  [2024-12-09 04:11:15.325452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.904  [2024-12-09 04:11:15.325471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.904  [2024-12-09 04:11:15.325486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.904  [2024-12-09 04:11:15.325503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.904  [2024-12-09 04:11:15.325520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.904  [2024-12-09 04:11:15.325537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.904  [2024-12-09 04:11:15.325552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.904  [2024-12-09 04:11:15.325568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.904  [2024-12-09 04:11:15.325598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.904  [2024-12-09 04:11:15.325615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.904  [2024-12-09 04:11:15.325630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.904  [2024-12-09 04:11:15.325646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.904  [2024-12-09 04:11:15.325662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.904  [2024-12-09 04:11:15.325678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.904  [2024-12-09 04:11:15.325693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.904  [2024-12-09 04:11:15.325708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.904  [2024-12-09 04:11:15.325723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.904  [2024-12-09 04:11:15.325740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.904  [2024-12-09 04:11:15.325754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.904  [2024-12-09 04:11:15.325771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.904  [2024-12-09 04:11:15.325785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.904  [2024-12-09 04:11:15.325801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.904  [2024-12-09 04:11:15.325816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.904  [2024-12-09 04:11:15.325832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.904  [2024-12-09 04:11:15.325848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.904  [2024-12-09 04:11:15.325864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.904  [2024-12-09 04:11:15.325878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.904  [2024-12-09 04:11:15.325896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.904  [2024-12-09 04:11:15.325911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.904  [2024-12-09 04:11:15.325928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.904  [2024-12-09 04:11:15.325942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.904  [2024-12-09 04:11:15.325959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.904  [2024-12-09 04:11:15.325973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.904  [2024-12-09 04:11:15.325994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.904  [2024-12-09 04:11:15.326010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.904  [2024-12-09 04:11:15.326026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.904  [2024-12-09 04:11:15.326042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.904  [2024-12-09 04:11:15.326059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.904  [2024-12-09 04:11:15.326073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.904  [2024-12-09 04:11:15.326090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.904  [2024-12-09 04:11:15.326104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.904  [2024-12-09 04:11:15.326120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.904  [2024-12-09 04:11:15.326134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.904  [2024-12-09 04:11:15.326151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.904  [2024-12-09 04:11:15.326166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.904  [2024-12-09 04:11:15.326182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.904  [2024-12-09 04:11:15.326197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.904  [2024-12-09 04:11:15.326213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.904  [2024-12-09 04:11:15.326227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.904  [2024-12-09 04:11:15.326244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.904  [2024-12-09 04:11:15.326258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.904  [2024-12-09 04:11:15.327680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.904  [2024-12-09 04:11:15.327705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.904  [2024-12-09 04:11:15.327730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.904  [2024-12-09 04:11:15.327746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.904  [2024-12-09 04:11:15.327764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.904  [2024-12-09 04:11:15.327778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.904  [2024-12-09 04:11:15.327795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.904  [2024-12-09 04:11:15.327815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.905  [2024-12-09 04:11:15.327832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.905  [2024-12-09 04:11:15.327847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.905  [2024-12-09 04:11:15.327863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.905  [2024-12-09 04:11:15.327878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.905  [2024-12-09 04:11:15.327894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.905  [2024-12-09 04:11:15.327909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.905  [2024-12-09 04:11:15.327926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.905  [2024-12-09 04:11:15.327941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.905  [2024-12-09 04:11:15.327957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.905  [2024-12-09 04:11:15.327971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.905  [2024-12-09 04:11:15.327988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.905  [2024-12-09 04:11:15.328002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.905  [2024-12-09 04:11:15.328019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.905  [2024-12-09 04:11:15.328033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.905  [2024-12-09 04:11:15.328049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.905  [2024-12-09 04:11:15.328064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.905  [2024-12-09 04:11:15.328081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.905  [2024-12-09 04:11:15.328095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.905  [2024-12-09 04:11:15.328111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.905  [2024-12-09 04:11:15.328125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.905  [2024-12-09 04:11:15.328141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.905  [2024-12-09 04:11:15.328156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.905  [2024-12-09 04:11:15.328172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.905  [2024-12-09 04:11:15.328187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.905  [2024-12-09 04:11:15.328208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.905  [2024-12-09 04:11:15.328223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.905  [2024-12-09 04:11:15.328240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.905  [2024-12-09 04:11:15.328255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.905  [2024-12-09 04:11:15.328278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.905  [2024-12-09 04:11:15.328295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.905  [2024-12-09 04:11:15.328311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.905  [2024-12-09 04:11:15.328326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.905  [2024-12-09 04:11:15.328342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.905  [2024-12-09 04:11:15.328357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.905  [2024-12-09 04:11:15.328373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.905  [2024-12-09 04:11:15.328388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.905  [2024-12-09 04:11:15.328404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.905  [2024-12-09 04:11:15.328419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.905  [2024-12-09 04:11:15.328435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.905  [2024-12-09 04:11:15.328449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.905  [2024-12-09 04:11:15.328466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.905  [2024-12-09 04:11:15.328481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.905  [2024-12-09 04:11:15.328497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.905  [2024-12-09 04:11:15.328512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.905  [2024-12-09 04:11:15.328528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.905  [2024-12-09 04:11:15.328542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.905  [2024-12-09 04:11:15.328558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.905  [2024-12-09 04:11:15.328573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.905  [2024-12-09 04:11:15.328589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.905  [2024-12-09 04:11:15.328607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.905  [2024-12-09 04:11:15.328624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.905  [2024-12-09 04:11:15.328639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.905  [2024-12-09 04:11:15.328656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.905  [2024-12-09 04:11:15.328672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.905  [2024-12-09 04:11:15.328688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.905  [2024-12-09 04:11:15.328703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.905  [2024-12-09 04:11:15.328719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.905  [2024-12-09 04:11:15.328733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.905  [2024-12-09 04:11:15.328750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.905  [2024-12-09 04:11:15.328765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.905  [2024-12-09 04:11:15.328781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.905  [2024-12-09 04:11:15.328795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.905  [2024-12-09 04:11:15.328812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.905  [2024-12-09 04:11:15.328827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.905  [2024-12-09 04:11:15.328843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.906  [2024-12-09 04:11:15.328857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.906  [2024-12-09 04:11:15.328873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.906  [2024-12-09 04:11:15.328887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.906  [2024-12-09 04:11:15.328904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.906  [2024-12-09 04:11:15.328918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.906  [2024-12-09 04:11:15.328934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.906  [2024-12-09 04:11:15.328949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.906  [2024-12-09 04:11:15.328965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.906  [2024-12-09 04:11:15.328981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.906  [2024-12-09 04:11:15.329002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.906  [2024-12-09 04:11:15.329017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.906  [2024-12-09 04:11:15.329033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.906  [2024-12-09 04:11:15.329047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.906  [2024-12-09 04:11:15.329063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.906  [2024-12-09 04:11:15.329078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.906  [2024-12-09 04:11:15.329094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.906  [2024-12-09 04:11:15.329108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.906  [2024-12-09 04:11:15.329124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.906  [2024-12-09 04:11:15.329139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.906  [2024-12-09 04:11:15.329155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.906  [2024-12-09 04:11:15.329170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.906  [2024-12-09 04:11:15.329186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.906  [2024-12-09 04:11:15.329201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.906  [2024-12-09 04:11:15.329217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.906  [2024-12-09 04:11:15.329232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.906  [2024-12-09 04:11:15.329249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.906  [2024-12-09 04:11:15.329263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.906  [2024-12-09 04:11:15.329288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.906  [2024-12-09 04:11:15.329305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.906  [2024-12-09 04:11:15.329323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.906  [2024-12-09 04:11:15.329337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.906  [2024-12-09 04:11:15.329353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.906  [2024-12-09 04:11:15.329368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.906  [2024-12-09 04:11:15.329384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.906  [2024-12-09 04:11:15.329402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.906  [2024-12-09 04:11:15.329419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.906  [2024-12-09 04:11:15.329434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.906  [2024-12-09 04:11:15.329451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.906  [2024-12-09 04:11:15.329465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.906  [2024-12-09 04:11:15.329481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.906  [2024-12-09 04:11:15.329496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.906  [2024-12-09 04:11:15.329512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.906  [2024-12-09 04:11:15.329526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.906  [2024-12-09 04:11:15.329543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.906  [2024-12-09 04:11:15.329558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.906  [2024-12-09 04:11:15.329574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.906  [2024-12-09 04:11:15.329589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.906  [2024-12-09 04:11:15.329605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.906  [2024-12-09 04:11:15.329619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.906  [2024-12-09 04:11:15.329636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.906  [2024-12-09 04:11:15.329650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.906  [2024-12-09 04:11:15.329667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.906  [2024-12-09 04:11:15.329681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.906  [2024-12-09 04:11:15.329697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.906  [2024-12-09 04:11:15.329711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.906  [2024-12-09 04:11:15.331010] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:46.906  [2024-12-09 04:11:15.331960] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:46.906  [2024-12-09 04:11:15.332034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:20:46.906  [2024-12-09 04:11:15.332082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:20:46.906  [2024-12-09 04:11:15.332283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:46.906  [2024-12-09 04:11:15.332324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c1c80 with addr=10.0.0.2, port=4420
00:20:46.906  [2024-12-09 04:11:15.332347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c1c80 is same with the state(6) to be set
00:20:46.906  [2024-12-09 04:11:15.332434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:46.906  [2024-12-09 04:11:15.332459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f4c90 with addr=10.0.0.2, port=4420
00:20:46.906  [2024-12-09 04:11:15.332476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f4c90 is same with the state(6) to be set
00:20:46.906  [2024-12-09 04:11:15.332495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:20:46.906  [2024-12-09 04:11:15.332509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:20:46.906  [2024-12-09 04:11:15.332526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:20:46.906  [2024-12-09 04:11:15.332544] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:20:46.906  [2024-12-09 04:11:15.332601] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:20:46.906  [2024-12-09 04:11:15.332626] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:20:46.906  [2024-12-09 04:11:15.332646] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:20:46.907  [2024-12-09 04:11:15.332665] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:20:46.907  [2024-12-09 04:11:15.332696] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:20:46.907  [2024-12-09 04:11:15.332723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f4c90 (9): Bad file descriptor
00:20:46.907  [2024-12-09 04:11:15.332749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c1c80 (9): Bad file descriptor
00:20:46.907  [2024-12-09 04:11:15.333424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.907  [2024-12-09 04:11:15.333450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.907  [2024-12-09 04:11:15.333475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.907  [2024-12-09 04:11:15.333490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.907  [2024-12-09 04:11:15.333508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.907  [2024-12-09 04:11:15.333524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.907  [2024-12-09 04:11:15.333540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.907  [2024-12-09 04:11:15.333555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.907  [2024-12-09 04:11:15.333571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.907  [2024-12-09 04:11:15.333585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.907  [2024-12-09 04:11:15.333602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.907  [2024-12-09 04:11:15.333623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.907  [2024-12-09 04:11:15.333640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.907  [2024-12-09 04:11:15.333655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.907  [2024-12-09 04:11:15.333671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.907  [2024-12-09 04:11:15.333688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.907  [2024-12-09 04:11:15.333704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.907  [2024-12-09 04:11:15.333719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.907  [2024-12-09 04:11:15.333735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.907  [2024-12-09 04:11:15.333750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.907  [2024-12-09 04:11:15.333767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.907  [2024-12-09 04:11:15.333782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.907  [2024-12-09 04:11:15.333798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.907  [2024-12-09 04:11:15.333812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.907  [2024-12-09 04:11:15.333829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.907  [2024-12-09 04:11:15.333843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.907  [2024-12-09 04:11:15.333860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.907  [2024-12-09 04:11:15.333875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.907  [2024-12-09 04:11:15.333891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.907  [2024-12-09 04:11:15.333905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.907  [2024-12-09 04:11:15.333921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.907  [2024-12-09 04:11:15.333935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.907  [2024-12-09 04:11:15.333952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.907  [2024-12-09 04:11:15.333966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.907  [2024-12-09 04:11:15.333982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.907  [2024-12-09 04:11:15.333997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.907  [2024-12-09 04:11:15.334023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.907  [2024-12-09 04:11:15.334039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.907  [2024-12-09 04:11:15.334055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.907  [2024-12-09 04:11:15.334069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.907  [2024-12-09 04:11:15.334086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.907  [2024-12-09 04:11:15.334101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.907  [2024-12-09 04:11:15.334117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.907  [2024-12-09 04:11:15.334131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.907  [2024-12-09 04:11:15.334148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.907  [2024-12-09 04:11:15.334162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.907  [2024-12-09 04:11:15.334179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.907  [2024-12-09 04:11:15.334194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.907  [2024-12-09 04:11:15.334210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.907  [2024-12-09 04:11:15.334224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.907  [2024-12-09 04:11:15.334240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.907  [2024-12-09 04:11:15.334254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.907  [2024-12-09 04:11:15.334279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.907  [2024-12-09 04:11:15.334296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.907  [2024-12-09 04:11:15.334313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.907  [2024-12-09 04:11:15.334328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.907  [2024-12-09 04:11:15.334345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.907  [2024-12-09 04:11:15.334359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.907  [2024-12-09 04:11:15.334375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.907  [2024-12-09 04:11:15.334390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.907  [2024-12-09 04:11:15.334406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.907  [2024-12-09 04:11:15.334425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.907  [2024-12-09 04:11:15.334442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.907  [2024-12-09 04:11:15.334456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.907  [2024-12-09 04:11:15.334473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.907  [2024-12-09 04:11:15.334487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.907  [2024-12-09 04:11:15.334504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.907  [2024-12-09 04:11:15.334519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.907  [2024-12-09 04:11:15.334536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.907  [2024-12-09 04:11:15.334550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.907  [2024-12-09 04:11:15.334566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.907  [2024-12-09 04:11:15.334580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.908  [2024-12-09 04:11:15.334597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.908  [2024-12-09 04:11:15.334611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.908  [2024-12-09 04:11:15.334627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.908  [2024-12-09 04:11:15.334641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.908  [2024-12-09 04:11:15.334657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.908  [2024-12-09 04:11:15.334672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.908  [2024-12-09 04:11:15.334688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.908  [2024-12-09 04:11:15.334703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.908  [2024-12-09 04:11:15.334718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.908  [2024-12-09 04:11:15.334732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.908  [2024-12-09 04:11:15.334749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.908  [2024-12-09 04:11:15.334764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.908  [2024-12-09 04:11:15.334780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.908  [2024-12-09 04:11:15.334794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.908  [2024-12-09 04:11:15.334814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.908  [2024-12-09 04:11:15.334831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.908  [2024-12-09 04:11:15.334847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.908  [2024-12-09 04:11:15.334861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.908  [2024-12-09 04:11:15.334878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.908  [2024-12-09 04:11:15.334892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.908  [2024-12-09 04:11:15.334908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.908  [2024-12-09 04:11:15.334923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.908  [2024-12-09 04:11:15.334940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.908  [2024-12-09 04:11:15.334954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.908  [2024-12-09 04:11:15.334971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.908  [2024-12-09 04:11:15.334985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.908  [2024-12-09 04:11:15.335002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.908  [2024-12-09 04:11:15.335016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.908  [2024-12-09 04:11:15.335032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.908  [2024-12-09 04:11:15.335047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.908  [2024-12-09 04:11:15.335063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.908  [2024-12-09 04:11:15.335077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.908  [2024-12-09 04:11:15.335094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.908  [2024-12-09 04:11:15.335108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.908  [2024-12-09 04:11:15.335124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.908  [2024-12-09 04:11:15.335138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.908  [2024-12-09 04:11:15.335154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.908  [2024-12-09 04:11:15.335169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.908  [2024-12-09 04:11:15.335185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.908  [2024-12-09 04:11:15.335204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.908  [2024-12-09 04:11:15.335221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.908  [2024-12-09 04:11:15.335236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.908  [2024-12-09 04:11:15.335253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.908  [2024-12-09 04:11:15.335268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.908  [2024-12-09 04:11:15.335291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.908  [2024-12-09 04:11:15.335306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.908  [2024-12-09 04:11:15.335323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.908  [2024-12-09 04:11:15.335338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.908  [2024-12-09 04:11:15.335354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.908  [2024-12-09 04:11:15.335369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.908  [2024-12-09 04:11:15.335385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.908  [2024-12-09 04:11:15.335399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.908  [2024-12-09 04:11:15.335415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.908  [2024-12-09 04:11:15.335429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.908  [2024-12-09 04:11:15.335445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.908  [2024-12-09 04:11:15.335460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.908  [2024-12-09 04:11:15.335474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c6c40 is same with the state(6) to be set
00:20:46.908  [2024-12-09 04:11:15.336754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.908  [2024-12-09 04:11:15.336778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.908  [2024-12-09 04:11:15.336799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.908  [2024-12-09 04:11:15.336815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.908  [2024-12-09 04:11:15.336832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.908  [2024-12-09 04:11:15.336846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.908  [2024-12-09 04:11:15.336863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.908  [2024-12-09 04:11:15.336883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.908  [2024-12-09 04:11:15.336901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.908  [2024-12-09 04:11:15.336916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.909  [2024-12-09 04:11:15.336932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.909  [2024-12-09 04:11:15.336946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.909  [2024-12-09 04:11:15.336962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.909  [2024-12-09 04:11:15.336977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.909  [2024-12-09 04:11:15.336994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.909  [2024-12-09 04:11:15.337008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.909  [2024-12-09 04:11:15.337024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.909  [2024-12-09 04:11:15.337039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.909  [2024-12-09 04:11:15.337062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.909  [2024-12-09 04:11:15.337078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.909  [2024-12-09 04:11:15.337095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.909  [2024-12-09 04:11:15.337114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.909  [2024-12-09 04:11:15.337131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.909  [2024-12-09 04:11:15.337145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.909  [2024-12-09 04:11:15.337162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.909  [2024-12-09 04:11:15.337176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.909  [2024-12-09 04:11:15.337192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.909  [2024-12-09 04:11:15.337207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.909  [2024-12-09 04:11:15.337222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.909  [2024-12-09 04:11:15.337237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.909  [2024-12-09 04:11:15.337253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.909  [2024-12-09 04:11:15.337267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.909  [2024-12-09 04:11:15.337299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.909  [2024-12-09 04:11:15.337315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.909  [2024-12-09 04:11:15.337332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.909  [2024-12-09 04:11:15.337345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.909  [2024-12-09 04:11:15.337361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.909  [2024-12-09 04:11:15.337376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.909  [2024-12-09 04:11:15.337392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.909  [2024-12-09 04:11:15.337406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.909  [2024-12-09 04:11:15.337422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.909  [2024-12-09 04:11:15.337436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.909  [2024-12-09 04:11:15.337453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.909  [2024-12-09 04:11:15.337467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.909  [2024-12-09 04:11:15.337483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.909  [2024-12-09 04:11:15.337497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.909  [2024-12-09 04:11:15.337513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.909  [2024-12-09 04:11:15.337527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.909  [2024-12-09 04:11:15.337543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.909  [2024-12-09 04:11:15.337557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.909  [2024-12-09 04:11:15.337575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.909  [2024-12-09 04:11:15.337590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.909  [2024-12-09 04:11:15.337607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.909  [2024-12-09 04:11:15.337623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.909  [2024-12-09 04:11:15.337640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.909  [2024-12-09 04:11:15.337654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.909  [2024-12-09 04:11:15.337671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.909  [2024-12-09 04:11:15.337689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.909  [2024-12-09 04:11:15.337707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.909  [2024-12-09 04:11:15.337722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.909  [2024-12-09 04:11:15.337739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.909  [2024-12-09 04:11:15.337752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.909  [2024-12-09 04:11:15.337769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.909  [2024-12-09 04:11:15.337783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.909  [2024-12-09 04:11:15.337799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.909  [2024-12-09 04:11:15.337815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.909  [2024-12-09 04:11:15.337831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.909  [2024-12-09 04:11:15.337845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.909  [2024-12-09 04:11:15.337862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.909  [2024-12-09 04:11:15.337876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.909  [2024-12-09 04:11:15.337893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.909  [2024-12-09 04:11:15.337907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.909  [2024-12-09 04:11:15.337924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.909  [2024-12-09 04:11:15.337938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.909  [2024-12-09 04:11:15.337955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.909  [2024-12-09 04:11:15.337969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.909  [2024-12-09 04:11:15.337984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.909  [2024-12-09 04:11:15.337999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.909  [2024-12-09 04:11:15.338015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.909  [2024-12-09 04:11:15.338030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.909  [2024-12-09 04:11:15.338047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.909  [2024-12-09 04:11:15.338061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.909  [2024-12-09 04:11:15.338081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.909  [2024-12-09 04:11:15.338097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.909  [2024-12-09 04:11:15.338114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.909  [2024-12-09 04:11:15.338129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.909  [2024-12-09 04:11:15.338145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.910  [2024-12-09 04:11:15.338160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.910  [2024-12-09 04:11:15.338177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.910  [2024-12-09 04:11:15.338192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.910  [2024-12-09 04:11:15.338208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.910  [2024-12-09 04:11:15.338222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.910  [2024-12-09 04:11:15.338238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.910  [2024-12-09 04:11:15.338254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.910  [2024-12-09 04:11:15.338277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.910  [2024-12-09 04:11:15.338294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.910  [2024-12-09 04:11:15.338310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.910  [2024-12-09 04:11:15.338325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.910  [2024-12-09 04:11:15.338342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.910  [2024-12-09 04:11:15.338357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.910  [2024-12-09 04:11:15.338373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.910  [2024-12-09 04:11:15.338387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.910  [2024-12-09 04:11:15.338403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.910  [2024-12-09 04:11:15.338418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.910  [2024-12-09 04:11:15.338435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.910  [2024-12-09 04:11:15.338449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.910  [2024-12-09 04:11:15.338465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.910  [2024-12-09 04:11:15.338483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.910  [2024-12-09 04:11:15.338501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.910  [2024-12-09 04:11:15.338516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.910  [2024-12-09 04:11:15.338533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.910  [2024-12-09 04:11:15.338547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.910  [2024-12-09 04:11:15.338563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.910  [2024-12-09 04:11:15.338577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.910  [2024-12-09 04:11:15.338594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.910  [2024-12-09 04:11:15.338609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.910  [2024-12-09 04:11:15.338626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.910  [2024-12-09 04:11:15.338640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.910  [2024-12-09 04:11:15.338656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.910  [2024-12-09 04:11:15.338671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.910  [2024-12-09 04:11:15.338688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.910  [2024-12-09 04:11:15.338702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.910  [2024-12-09 04:11:15.338718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.910  [2024-12-09 04:11:15.338732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.910  [2024-12-09 04:11:15.338749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.910  [2024-12-09 04:11:15.338763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.910  [2024-12-09 04:11:15.338780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.910  [2024-12-09 04:11:15.338794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.910  [2024-12-09 04:11:15.338809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ca480 is same with the state(6) to be set
00:20:46.910  [2024-12-09 04:11:15.340742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:20:46.910  [2024-12-09 04:11:15.340799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:20:46.910  [2024-12-09 04:11:15.340822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:20:46.910  
00:20:46.910                                                                                                  Latency(us)
00:20:46.910  
[2024-12-09T03:11:15.486Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:20:46.910  Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:46.910  Job: Nvme1n1 ended in about 1.02 seconds with error
00:20:46.910  	 Verification LBA range: start 0x0 length 0x400
00:20:46.910  	 Nvme1n1             :       1.02     187.96      11.75      62.65     0.00  252782.36   19903.53  265639.25
00:20:46.910  Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:46.910  Job: Nvme2n1 ended in about 1.07 seconds with error
00:20:46.910  	 Verification LBA range: start 0x0 length 0x400
00:20:46.910  	 Nvme2n1             :       1.07     179.97      11.25      59.99     0.00  259527.68   20874.43  239230.67
00:20:46.910  Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:46.910  Job: Nvme3n1 ended in about 1.04 seconds with error
00:20:46.910  	 Verification LBA range: start 0x0 length 0x400
00:20:46.910  	 Nvme3n1             :       1.04     184.09      11.51      61.36     0.00  248914.87   18058.81  254765.13
00:20:46.910  Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:46.910  Job: Nvme4n1 ended in about 1.07 seconds with error
00:20:46.910  	 Verification LBA range: start 0x0 length 0x400
00:20:46.910  	 Nvme4n1             :       1.07     183.14      11.45      59.80     0.00  247224.57   18447.17  254765.13
00:20:46.910  Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:46.910  Job: Nvme5n1 ended in about 1.05 seconds with error
00:20:46.910  	 Verification LBA range: start 0x0 length 0x400
00:20:46.910  	 Nvme5n1             :       1.05     187.31      11.71      61.16     0.00  236748.15   19126.80  256318.58
00:20:46.910  Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:46.910  Job: Nvme6n1 ended in about 1.08 seconds with error
00:20:46.910  	 Verification LBA range: start 0x0 length 0x400
00:20:46.910  	 Nvme6n1             :       1.08     178.44      11.15      59.48     0.00  243249.30   22136.60  239230.67
00:20:46.910  Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:46.910  Job: Nvme7n1 ended in about 1.07 seconds with error
00:20:46.910  	 Verification LBA range: start 0x0 length 0x400
00:20:46.910  	 Nvme7n1             :       1.07     179.22      11.20       4.67     0.00  299204.18   16796.63  302921.96
00:20:46.910  Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:46.910  Job: Nvme8n1 ended in about 1.03 seconds with error
00:20:46.910  	 Verification LBA range: start 0x0 length 0x400
00:20:46.910  	 Nvme8n1             :       1.03     185.69      11.61      61.90     0.00  223875.79   20194.80  256318.58
00:20:46.910  Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:46.910  Job: Nvme9n1 ended in about 1.08 seconds with error
00:20:46.910  	 Verification LBA range: start 0x0 length 0x400
00:20:46.910  	 Nvme9n1             :       1.08     118.59       7.41      59.30     0.00  307869.01   37476.88  296708.17
00:20:46.910  Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:46.910  Job: Nvme10n1 ended in about 1.05 seconds with error
00:20:46.910  	 Verification LBA range: start 0x0 length 0x400
00:20:46.910  	 Nvme10n1            :       1.05     121.85       7.62      60.92     0.00  292327.47   20291.89  284280.60
00:20:46.910  
00:20:46.910  ===================================================================================================================
00:20:46.910  
00:20:46.910  Total                       :               1706.26     106.64     551.23     0.00  258072.85   16796.63  302921.96
00:20:46.911  [2024-12-09 04:11:15.368781] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:46.911  [2024-12-09 04:11:15.368891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:20:46.911  [2024-12-09 04:11:15.369214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:46.911  [2024-12-09 04:11:15.369254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812950 with addr=10.0.0.2, port=4420
00:20:46.911  [2024-12-09 04:11:15.369283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812950 is same with the state(6) to be set
00:20:46.911  [2024-12-09 04:11:15.369380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:46.911  [2024-12-09 04:11:15.369408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c3e80 with addr=10.0.0.2, port=4420
00:20:46.911  [2024-12-09 04:11:15.369440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3e80 is same with the state(6) to be set
00:20:46.911  [2024-12-09 04:11:15.369462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:20:46.911  [2024-12-09 04:11:15.369477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:20:46.911  [2024-12-09 04:11:15.369494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:20:46.911  [2024-12-09 04:11:15.369513] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:20:46.911  [2024-12-09 04:11:15.370509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:20:46.911  [2024-12-09 04:11:15.370682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:46.911  [2024-12-09 04:11:15.370712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b8130 with addr=10.0.0.2, port=4420
00:20:46.911  [2024-12-09 04:11:15.370729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b8130 is same with the state(6) to be set
00:20:46.911  [2024-12-09 04:11:15.370829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:46.911  [2024-12-09 04:11:15.370854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132c110 with addr=10.0.0.2, port=4420
00:20:46.911  [2024-12-09 04:11:15.370871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132c110 is same with the state(6) to be set
00:20:46.911  [2024-12-09 04:11:15.370961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:46.911  [2024-12-09 04:11:15.370987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c0e60 with addr=10.0.0.2, port=4420
00:20:46.911  [2024-12-09 04:11:15.371004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0e60 is same with the state(6) to be set
00:20:46.911  [2024-12-09 04:11:15.371093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:46.911  [2024-12-09 04:11:15.371120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1810520 with addr=10.0.0.2, port=4420
00:20:46.911  [2024-12-09 04:11:15.371136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810520 is same with the state(6) to be set
00:20:46.911  [2024-12-09 04:11:15.371163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1812950 (9): Bad file descriptor
00:20:46.911  [2024-12-09 04:11:15.371185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c3e80 (9): Bad file descriptor
00:20:46.911  [2024-12-09 04:11:15.371203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:20:46.911  [2024-12-09 04:11:15.371217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:20:46.911  [2024-12-09 04:11:15.371232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:20:46.911  [2024-12-09 04:11:15.371247] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:20:46.911  [2024-12-09 04:11:15.371264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:20:46.911  [2024-12-09 04:11:15.371288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:20:46.911  [2024-12-09 04:11:15.371312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:20:46.911  [2024-12-09 04:11:15.371326] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:20:46.911  [2024-12-09 04:11:15.371373] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:20:46.911  [2024-12-09 04:11:15.371405] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:20:46.911  [2024-12-09 04:11:15.372138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:46.911  [2024-12-09 04:11:15.372169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c4310 with addr=10.0.0.2, port=4420
00:20:46.911  [2024-12-09 04:11:15.372186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4310 is same with the state(6) to be set
00:20:46.911  [2024-12-09 04:11:15.372206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b8130 (9): Bad file descriptor
00:20:46.911  [2024-12-09 04:11:15.372228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x132c110 (9): Bad file descriptor
00:20:46.911  [2024-12-09 04:11:15.372247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c0e60 (9): Bad file descriptor
00:20:46.911  [2024-12-09 04:11:15.372265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1810520 (9): Bad file descriptor
00:20:46.911  [2024-12-09 04:11:15.372292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:20:46.911  [2024-12-09 04:11:15.372305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:20:46.911  [2024-12-09 04:11:15.372319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:20:46.911  [2024-12-09 04:11:15.372333] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:20:46.911  [2024-12-09 04:11:15.372348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:20:46.911  [2024-12-09 04:11:15.372361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:20:46.911  [2024-12-09 04:11:15.372374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:20:46.911  [2024-12-09 04:11:15.372387] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:20:46.911  [2024-12-09 04:11:15.372480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:20:46.911  [2024-12-09 04:11:15.372506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:20:46.911  [2024-12-09 04:11:15.372524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:20:46.911  [2024-12-09 04:11:15.372565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c4310 (9): Bad file descriptor
00:20:46.911  [2024-12-09 04:11:15.372585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:20:46.911  [2024-12-09 04:11:15.372599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:20:46.911  [2024-12-09 04:11:15.372612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:20:46.911  [2024-12-09 04:11:15.372626] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:20:46.911  [2024-12-09 04:11:15.372641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:20:46.911  [2024-12-09 04:11:15.372654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:20:46.911  [2024-12-09 04:11:15.372666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:20:46.911  [2024-12-09 04:11:15.372679] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:20:46.911  [2024-12-09 04:11:15.372693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:20:46.911  [2024-12-09 04:11:15.372711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:20:46.911  [2024-12-09 04:11:15.372726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:20:46.911  [2024-12-09 04:11:15.372739] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:20:46.911  [2024-12-09 04:11:15.372752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:20:46.911  [2024-12-09 04:11:15.372765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:20:46.911  [2024-12-09 04:11:15.372779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:20:46.911  [2024-12-09 04:11:15.372792] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:20:46.911  [2024-12-09 04:11:15.372915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:46.911  [2024-12-09 04:11:15.372943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x181bda0 with addr=10.0.0.2, port=4420
00:20:46.911  [2024-12-09 04:11:15.372960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181bda0 is same with the state(6) to be set
00:20:46.911  [2024-12-09 04:11:15.373034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:46.911  [2024-12-09 04:11:15.373059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f4c90 with addr=10.0.0.2, port=4420
00:20:46.911  [2024-12-09 04:11:15.373075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f4c90 is same with the state(6) to be set
00:20:46.911  [2024-12-09 04:11:15.373156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:46.912  [2024-12-09 04:11:15.373182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c1c80 with addr=10.0.0.2, port=4420
00:20:46.912  [2024-12-09 04:11:15.373198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c1c80 is same with the state(6) to be set
00:20:46.912  [2024-12-09 04:11:15.373213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:20:46.912  [2024-12-09 04:11:15.373227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:20:46.912  [2024-12-09 04:11:15.373241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:20:46.912  [2024-12-09 04:11:15.373255] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:20:46.912  [2024-12-09 04:11:15.373325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x181bda0 (9): Bad file descriptor
00:20:46.912  [2024-12-09 04:11:15.373352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f4c90 (9): Bad file descriptor
00:20:46.912  [2024-12-09 04:11:15.373371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c1c80 (9): Bad file descriptor
00:20:46.912  [2024-12-09 04:11:15.373413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:20:46.912  [2024-12-09 04:11:15.373432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:20:46.912  [2024-12-09 04:11:15.373447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:20:46.912  [2024-12-09 04:11:15.373461] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:20:46.912  [2024-12-09 04:11:15.373475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:20:46.912  [2024-12-09 04:11:15.373489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:20:46.912  [2024-12-09 04:11:15.373508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:20:46.912  [2024-12-09 04:11:15.373521] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:20:46.912  [2024-12-09 04:11:15.373535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:20:46.912  [2024-12-09 04:11:15.373548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:20:46.912  [2024-12-09 04:11:15.373561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:20:46.912  [2024-12-09 04:11:15.373574] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:20:47.476   04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 279205
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 279205
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:48.412    04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 279205
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:20:48.412  rmmod nvme_tcp
00:20:48.412  rmmod nvme_fabrics
00:20:48.412  rmmod nvme_keyring
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 279025 ']'
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 279025
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 279025 ']'
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 279025
00:20:48.412  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (279025) - No such process
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 279025 is not found'
00:20:48.412  Process with pid 279025 is not found
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:20:48.412   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:48.413   04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:20:48.413    04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:50.945   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:20:50.945  
00:20:50.945  real	0m7.199s
00:20:50.945  user	0m17.076s
00:20:50.945  sys	0m1.453s
00:20:50.945   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:50.945   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:50.945  ************************************
00:20:50.945  END TEST nvmf_shutdown_tc3
00:20:50.945  ************************************
00:20:50.945   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]]
00:20:50.945   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]]
00:20:50.945   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4
00:20:50.945   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:20:50.945   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:50.945   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:20:50.945  ************************************
00:20:50.945  START TEST nvmf_shutdown_tc4
00:20:50.945  ************************************
00:20:50.945   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4
00:20:50.945   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget
00:20:50.945   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit
00:20:50.945   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:20:50.945   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:20:50.945   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs
00:20:50.945   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no
00:20:50.945   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns
00:20:50.945   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:20:50.946    04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=()
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=()
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=()
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=()
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=()
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=()
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=()
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:20:50.946  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:20:50.946  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:20:50.946  Found net devices under 0000:0a:00.0: cvl_0_0
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:20:50.946  Found net devices under 0000:0a:00.1: cvl_0_1
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:20:50.946   04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:20:50.946   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:20:50.946   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:20:50.946   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:20:50.946   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:20:50.946   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:20:50.946   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:50.946   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:20:50.946   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:20:50.946  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:50.946  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms
00:20:50.946  
00:20:50.946  --- 10.0.0.2 ping statistics ---
00:20:50.946  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:50.946  rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms
00:20:50.946   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:50.946  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:50.946  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms
00:20:50.946  
00:20:50.946  --- 10.0.0.1 ping statistics ---
00:20:50.946  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:50.946  rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms
00:20:50.946   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:50.946   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0
00:20:50.946   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:20:50.946   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:50.946   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:20:50.946   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:20:50.946   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:50.946   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:20:50.946   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
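The `nvmf_tcp_init` trace above (common.sh@250-291) boils down to one topology: the target NIC is moved into a private network namespace so target and initiator talk over the real e810 ports instead of loopback. A minimal standalone sketch of that sequence follows; the interface names `cvl_0_0`/`cvl_0_1`, the namespace name, and the 10.0.0.0/24 addressing are taken from this run, and since the real commands need root it defaults to a dry run that only prints them:

```shell
#!/usr/bin/env bash
# Sketch of the topology built by nvmf_tcp_init in nvmf/common.sh.
# DRY_RUN=1 (the default) prints each command; DRY_RUN=0 executes it (root required).
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" -eq 1 ]; then echo "$*"; else "$@"; fi; }

NS=cvl_0_0_ns_spdk                                      # namespace holding the target side

run ip -4 addr flush cvl_0_0                            # start from clean interfaces
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                     # target port enters the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP (NVMF_FIRST_INITIATOR_IP)
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP (NVMF_FIRST_TARGET_IP)
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
run ping -c 1 10.0.0.2                                  # initiator -> target reachability
run ip netns exec "$NS" ping -c 1 10.0.0.1              # target -> initiator reachability
```

The trap here is that `nvmf_tgt` must then be launched with `ip netns exec cvl_0_0_ns_spdk`, exactly as the common.sh@508 line does, or it would listen on 10.0.0.2 in the wrong namespace.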
00:20:50.946   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:20:50.946   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:20:50.946   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:50.946   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:20:50.946   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=280107
00:20:50.946   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 280107
00:20:50.946   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:20:50.946   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 280107 ']'
00:20:50.946   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:50.946   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:50.946   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:50.946  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:50.946   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:50.946   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:20:50.946  [2024-12-09 04:11:19.272481] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:20:50.946  [2024-12-09 04:11:19.272578] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:50.946  [2024-12-09 04:11:19.356619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:20:50.946  [2024-12-09 04:11:19.418121] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:50.946  [2024-12-09 04:11:19.418172] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:50.946  [2024-12-09 04:11:19.418185] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:20:50.946  [2024-12-09 04:11:19.418197] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:20:50.946  [2024-12-09 04:11:19.418207] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:20:50.946  [2024-12-09 04:11:19.419678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:20:50.946  [2024-12-09 04:11:19.419704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:20:50.947  [2024-12-09 04:11:19.419761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:20:50.947  [2024-12-09 04:11:19.419765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:20:51.204   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:51.204   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0
00:20:51.204   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:20:51.204   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:51.204   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:20:51.204   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:51.204   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:20:51.204   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:51.204   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:20:51.204  [2024-12-09 04:11:19.574391] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:51.204   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:51.204   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:20:51.204   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:20:51.204   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:51.204   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:20:51.204   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:20:51.204   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:51.204   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:20:51.204   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:51.204   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:20:51.204   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:51.204   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:20:51.204   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:51.204   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:20:51.204   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:51.204   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:20:51.204   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:51.204   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:20:51.204   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:51.204   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:20:51.204   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:51.204   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:20:51.205   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:51.205   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:20:51.205   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:51.205   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:20:51.205   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd
00:20:51.205   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:51.205   04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:20:51.205  Malloc1
00:20:51.205  [2024-12-09 04:11:19.674813] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:51.205  Malloc2
00:20:51.205  Malloc3
00:20:51.462  Malloc4
00:20:51.462  Malloc5
00:20:51.462  Malloc6
00:20:51.462  Malloc7
00:20:51.462  Malloc8
00:20:51.720  Malloc9
00:20:51.720  Malloc10
00:20:51.720   04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:51.720   04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:20:51.720   04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:51.720   04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:20:51.720   04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=280280
00:20:51.720   04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4
00:20:51.720   04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5
00:20:51.720  [2024-12-09 04:11:20.212291] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem.  This behavior is deprecated and will be removed in a future release.
00:20:56.990   04:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:20:56.990   04:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 280107
00:20:56.990   04:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 280107 ']'
00:20:56.990   04:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 280107
00:20:56.990    04:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname
00:20:56.990   04:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:56.990    04:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 280107
00:20:56.990   04:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:20:56.990   04:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:20:56.990   04:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 280107'
00:20:56.990  killing process with pid 280107
00:20:56.990   04:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 280107
00:20:56.990   04:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 280107
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  [2024-12-09 04:11:25.201128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  [2024-12-09 04:11:25.202247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.990  starting I/O failed: -6
00:20:56.990  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  [2024-12-09 04:11:25.203651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  [2024-12-09 04:11:25.205048] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27008a0 is same with the state(6) to be set
00:20:56.991  [2024-12-09 04:11:25.205097] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27008a0 is same with the state(6) to be set
00:20:56.991  [2024-12-09 04:11:25.205120] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27008a0 is same with the state(6) to be set
00:20:56.991  [2024-12-09 04:11:25.205134] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27008a0 is same with the state(6) to be set
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  [2024-12-09 04:11:25.205146] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27008a0 is same with the state(6) to be set
00:20:56.991  starting I/O failed: -6
00:20:56.991  [2024-12-09 04:11:25.205158] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27008a0 is same with the state(6) to be set
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  [2024-12-09 04:11:25.205440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:56.991  NVMe io qpair process completion error
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  starting I/O failed: -6
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.991  Write completed with error (sct=0, sc=8)
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  [2024-12-09 04:11:25.206732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:56.992  [2024-12-09 04:11:25.206871] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2701c20 is same with the state(6) to be set
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  [2024-12-09 04:11:25.206899] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2701c20 is same with the state(6) to be set
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  [2024-12-09 04:11:25.206914] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2701c20 is same with the state(6) to be set
00:20:56.992  starting I/O failed: -6
00:20:56.992  [2024-12-09 04:11:25.206926] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2701c20 is same with the state(6) to be set
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  [2024-12-09 04:11:25.206939] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2701c20 is same with the state(6) to be set
00:20:56.992  starting I/O failed: -6
00:20:56.992  [2024-12-09 04:11:25.206952] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2701c20 is same with the state(6) to be set
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  [2024-12-09 04:11:25.207338] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2700d90 is same with the state(6) to be set
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  [2024-12-09 04:11:25.207374] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2700d90 is same with the state(6) to be set
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  [2024-12-09 04:11:25.207390] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2700d90 is same with the state(6) to be set
00:20:56.992  [2024-12-09 04:11:25.207403] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2700d90 is same with the state(6) to be set
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  [2024-12-09 04:11:25.207415] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2700d90 is same with the state(6) to be set
00:20:56.992  starting I/O failed: -6
00:20:56.992  [2024-12-09 04:11:25.207430] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2700d90 is same with the state(6) to be set
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  [2024-12-09 04:11:25.207443] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2700d90 is same with the state(6) to be set
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  [2024-12-09 04:11:25.207456] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2700d90 is same with the state(6) to be set
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  [2024-12-09 04:11:25.207856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  Write completed with error (sct=0, sc=8)
00:20:56.992  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  [2024-12-09 04:11:25.209062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  [2024-12-09 04:11:25.210777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:56.993  NVMe io qpair process completion error
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  [2024-12-09 04:11:25.212025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  Write completed with error (sct=0, sc=8)
00:20:56.993  starting I/O failed: -6
00:20:56.994  Write completed with error (sct=0, sc=8)
00:20:56.994  starting I/O failed: -6
00:20:56.994  Write completed with error (sct=0, sc=8)
00:20:56.994  Write completed with error (sct=0, sc=8)
00:20:56.994  Write completed with error (sct=0, sc=8)
00:20:56.994  starting I/O failed: -6
00:20:56.994  Write completed with error (sct=0, sc=8)
00:20:56.994  starting I/O failed: -6
00:20:56.994  Write completed with error (sct=0, sc=8)
00:20:56.994  Write completed with error (sct=0, sc=8)
00:20:56.994  Write completed with error (sct=0, sc=8)
00:20:56.994  starting I/O failed: -6
00:20:56.994  Write completed with error (sct=0, sc=8)
00:20:56.994  starting I/O failed: -6
00:20:56.994  Write completed with error (sct=0, sc=8)
00:20:56.994  Write completed with error (sct=0, sc=8)
00:20:56.994  Write completed with error (sct=0, sc=8)
00:20:56.994  starting I/O failed: -6
00:20:56.994  Write completed with error (sct=0, sc=8)
00:20:56.994  starting I/O failed: -6
00:20:56.994  Write completed with error (sct=0, sc=8)
00:20:56.994  Write completed with error (sct=0, sc=8)
00:20:56.994  Write completed with error (sct=0, sc=8)
00:20:56.994  starting I/O failed: -6
00:20:56.994  Write completed with error (sct=0, sc=8)
00:20:56.994  starting I/O failed: -6
00:20:56.994  Write completed with error (sct=0, sc=8)
00:20:56.994  Write completed with error (sct=0, sc=8)
00:20:56.994  Write completed with error (sct=0, sc=8)
00:20:56.994  starting I/O failed: -6
00:20:56.994  Write completed with error (sct=0, sc=8)
00:20:56.994  starting I/O failed: -6
00:20:56.994  Write completed with error (sct=0, sc=8)
00:20:56.994  Write completed with error (sct=0, sc=8)
00:20:56.994  Write completed with error (sct=0, sc=8)
00:20:56.994  starting I/O failed: -6
00:20:56.994  Write completed with error (sct=0, sc=8)
00:20:56.994  starting I/O failed: -6
00:20:56.994  Write completed with error (sct=0, sc=8)
00:20:56.994  Write completed with error (sct=0, sc=8)
00:20:56.994  Write completed with error (sct=0, sc=8)
00:20:56.994  starting I/O failed: -6
00:20:56.994  Write completed with error (sct=0, sc=8)
00:20:56.994  starting I/O failed: -6
00:20:56.994  Write completed with error (sct=0, sc=8)
00:20:56.994  Write completed with error (sct=0, sc=8)
00:20:56.994  Write completed with error (sct=0, sc=8)
00:20:56.994  starting I/O failed: -6
00:20:56.994  Write completed with error (sct=0, sc=8)
00:20:56.994  starting I/O failed: -6
00:20:56.994  Write completed with error (sct=0, sc=8)
00:20:56.994  [2024-12-09 04:11:25.213148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:56.994  Write completed with error (sct=0, sc=8) [repeated 48 times, interleaved with 36x "starting I/O failed: -6"]
00:20:56.994  [2024-12-09 04:11:25.214254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:56.994  Write completed with error (sct=0, sc=8) [repeated 59 times, alternating with 59x "starting I/O failed: -6"]
00:20:56.995  [2024-12-09 04:11:25.216130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:56.995  NVMe io qpair process completion error
00:20:56.995  Write completed with error (sct=0, sc=8) [repeated 39 times, interleaved with 9x "starting I/O failed: -6"]
00:20:56.995  [2024-12-09 04:11:25.217345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:56.995  Write completed with error (sct=0, sc=8) [repeated 46 times, interleaved with 23x "starting I/O failed: -6"]
00:20:56.995  [2024-12-09 04:11:25.218415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:56.995  Write completed with error (sct=0, sc=8) [repeated 50 times, interleaved with 38x "starting I/O failed: -6"]
00:20:56.996  [2024-12-09 04:11:25.219600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:56.996  Write completed with error (sct=0, sc=8) [repeated 58 times, alternating with 58x "starting I/O failed: -6"]
00:20:56.996  [2024-12-09 04:11:25.221523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:56.996  NVMe io qpair process completion error
00:20:56.996  Write completed with error (sct=0, sc=8) [repeated 36 times, interleaved with 9x "starting I/O failed: -6"]
00:20:56.997  [2024-12-09 04:11:25.222837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:56.997  Write completed with error (sct=0, sc=8) [repeated 46 times, interleaved with 23x "starting I/O failed: -6"]
00:20:56.997  [2024-12-09 04:11:25.223968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:56.997  Write completed with error (sct=0, sc=8)
00:20:56.997  starting I/O failed: -6
00:20:56.997  [82 similar I/O failure lines omitted]
00:20:56.997  [2024-12-09 04:11:25.225118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:56.997  Write completed with error (sct=0, sc=8)
00:20:56.997  starting I/O failed: -6
00:20:56.997  [118 similar I/O failure lines omitted]
00:20:56.998  [2024-12-09 04:11:25.227557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:56.998  NVMe io qpair process completion error
00:20:56.998  Write completed with error (sct=0, sc=8)
00:20:56.998  starting I/O failed: -6
00:20:56.998  [46 similar I/O failure lines omitted]
00:20:56.998  [2024-12-09 04:11:25.228864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:56.998  Write completed with error (sct=0, sc=8)
00:20:56.998  starting I/O failed: -6
00:20:56.998  [66 similar I/O failure lines omitted]
00:20:56.998  [2024-12-09 04:11:25.229944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:56.998  Write completed with error (sct=0, sc=8)
00:20:56.998  starting I/O failed: -6
00:20:56.998  [85 similar I/O failure lines omitted]
00:20:56.999  [2024-12-09 04:11:25.231137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:56.999  Write completed with error (sct=0, sc=8)
00:20:56.999  starting I/O failed: -6
00:20:56.999  [116 similar I/O failure lines omitted]
00:20:56.999  [2024-12-09 04:11:25.233859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:56.999  NVMe io qpair process completion error
00:20:56.999  Write completed with error (sct=0, sc=8)
00:20:56.999  starting I/O failed: -6
00:20:56.999  [46 similar I/O failure lines omitted]
00:20:57.000  [2024-12-09 04:11:25.235294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:57.000  starting I/O failed: -6
00:20:57.000  Write completed with error (sct=0, sc=8)
00:20:57.000  [67 similar I/O failure lines omitted]
00:20:57.000  [2024-12-09 04:11:25.236450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:57.000  Write completed with error (sct=0, sc=8)
00:20:57.000  starting I/O failed: -6
00:20:57.000  [2024-12-09 04:11:25.237600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:57.000  Write completed with error (sct=0, sc=8)
00:20:57.000  starting I/O failed: -6
00:20:57.001  [2024-12-09 04:11:25.240178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:57.001  NVMe io qpair process completion error
00:20:57.001  Write completed with error (sct=0, sc=8)
00:20:57.001  starting I/O failed: -6
00:20:57.001  [2024-12-09 04:11:25.241521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:57.001  Write completed with error (sct=0, sc=8)
00:20:57.001  Write completed with error (sct=0, sc=8)
00:20:57.001  starting I/O failed: -6
00:20:57.001  [2024-12-09 04:11:25.242632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:57.001  Write completed with error (sct=0, sc=8)
00:20:57.001  starting I/O failed: -6
00:20:57.002  [2024-12-09 04:11:25.243799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:57.002  Write completed with error (sct=0, sc=8)
00:20:57.002  starting I/O failed: -6
00:20:57.002  [2024-12-09 04:11:25.246135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:57.002  NVMe io qpair process completion error
00:20:57.002  Write completed with error (sct=0, sc=8)
00:20:57.002  Write completed with error (sct=0, sc=8)
00:20:57.002  Write completed with error (sct=0, sc=8)
00:20:57.002  starting I/O failed: -6
00:20:57.003  [2024-12-09 04:11:25.247554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:57.003  Write completed with error (sct=0, sc=8)
00:20:57.003  Write completed with error (sct=0, sc=8)
00:20:57.003  starting I/O failed: -6
00:20:57.003  [2024-12-09 04:11:25.248523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:57.003  Write completed with error (sct=0, sc=8)
00:20:57.003  starting I/O failed: -6
00:20:57.003  [... the two messages above repeat for each outstanding write on this qpair; duplicates elided ...]
00:20:57.003  [2024-12-09 04:11:25.249704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:57.003  starting I/O failed: -6
00:20:57.003  Write completed with error (sct=0, sc=8)
00:20:57.003  [... the two messages above repeat for each outstanding write on this qpair; duplicates elided ...]
00:20:57.004  [2024-12-09 04:11:25.251871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:57.004  NVMe io qpair process completion error
00:20:57.004  Write completed with error (sct=0, sc=8)
00:20:57.004  starting I/O failed: -6
00:20:57.004  [... the two messages above repeat for each outstanding write on this qpair; duplicates elided ...]
00:20:57.004  [2024-12-09 04:11:25.253192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:57.004  Write completed with error (sct=0, sc=8)
00:20:57.004  starting I/O failed: -6
00:20:57.004  [... the two messages above repeat for each outstanding write on this qpair; duplicates elided ...]
00:20:57.005  [2024-12-09 04:11:25.254226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:57.005  Write completed with error (sct=0, sc=8)
00:20:57.005  starting I/O failed: -6
00:20:57.005  [... the two messages above repeat for each outstanding write on this qpair; duplicates elided ...]
00:20:57.005  [2024-12-09 04:11:25.255422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:57.005  Write completed with error (sct=0, sc=8)
00:20:57.005  starting I/O failed: -6
00:20:57.005  [... the two messages above repeat for each outstanding write on this qpair; duplicates elided ...]
00:20:57.006  [2024-12-09 04:11:25.257755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:57.006  NVMe io qpair process completion error
00:20:57.006  Initializing NVMe Controllers
00:20:57.006  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:20:57.006  Controller IO queue size 128, less than required.
00:20:57.006  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:57.006  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:20:57.006  Controller IO queue size 128, less than required.
00:20:57.006  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:57.006  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:20:57.006  Controller IO queue size 128, less than required.
00:20:57.006  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:57.006  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:20:57.006  Controller IO queue size 128, less than required.
00:20:57.006  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:57.006  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:20:57.006  Controller IO queue size 128, less than required.
00:20:57.006  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:57.006  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:57.006  Controller IO queue size 128, less than required.
00:20:57.006  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:57.006  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:20:57.006  Controller IO queue size 128, less than required.
00:20:57.006  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:57.006  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:20:57.006  Controller IO queue size 128, less than required.
00:20:57.006  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:57.006  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:20:57.006  Controller IO queue size 128, less than required.
00:20:57.006  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:57.006  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:20:57.006  Controller IO queue size 128, less than required.
00:20:57.006  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:57.006  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:20:57.006  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:20:57.006  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:20:57.006  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:20:57.006  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:20:57.006  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:57.006  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:20:57.006  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:20:57.006  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:20:57.006  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:20:57.006  Initialization complete. Launching workers.
00:20:57.006  ========================================================
00:20:57.006                                                                                                                Latency(us)
00:20:57.006  Device Information                                                        :       IOPS      MiB/s    Average        min        max
00:20:57.006  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1  from core  0:    1812.10      77.86   70656.56     914.11  125497.87
00:20:57.006  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1  from core  0:    1828.41      78.56   70065.18     893.49  127074.16
00:20:57.006  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1  from core  0:    1841.12      79.11   69618.34    1059.90  122389.96
00:20:57.006  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1  from core  0:    1848.11      79.41   69382.98     941.82  133414.37
00:20:57.006  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1  from core  0:    1852.77      79.61   69235.45    1090.08  136172.17
00:20:57.006  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1  from core  0:    1844.72      79.27   68720.96    1012.25  118117.32
00:20:57.006  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core  0:    1861.66      79.99   68118.11    1179.70  114314.38
00:20:57.006  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1  from core  0:    1813.59      77.93   69944.17    1033.30  116925.31
00:20:57.006  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1  from core  0:    1819.73      78.19   69729.30     883.73  117108.05
00:20:57.006  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1  from core  0:    1812.31      77.87   70040.49    1127.46  116425.35
00:20:57.006  ========================================================
00:20:57.006  Total                                                                     :   18334.51     787.81   69545.45     883.73  136172.17
00:20:57.006  
00:20:57.006  [2024-12-09 04:11:25.263890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144cae0 is same with the state(6) to be set
00:20:57.006  [2024-12-09 04:11:25.263986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ad10 is same with the state(6) to be set
00:20:57.006  [2024-12-09 04:11:25.264045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144bc50 is same with the state(6) to be set
00:20:57.006  [2024-12-09 04:11:25.264117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a6b0 is same with the state(6) to be set
00:20:57.006  [2024-12-09 04:11:25.264189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a9e0 is same with the state(6) to be set
00:20:57.006  [2024-12-09 04:11:25.264246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144c720 is same with the state(6) to be set
00:20:57.006  [2024-12-09 04:11:25.264322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144c900 is same with the state(6) to be set
00:20:57.006  [2024-12-09 04:11:25.264386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144b2c0 is same with the state(6) to be set
00:20:57.006  [2024-12-09 04:11:25.264450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144b5f0 is same with the state(6) to be set
00:20:57.006  [2024-12-09 04:11:25.264508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144b920 is same with the state(6) to be set
00:20:57.006  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:20:57.264   04:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:20:58.195   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 280280
00:20:58.195   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:20:58.195   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 280280
00:20:58.195   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:20:58.195   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:58.195    04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:20:58.195   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:58.195   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 280280
00:20:58.195   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:20:58.195   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:58.195   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:58.195   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:20:58.195   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:20:58.195   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:20:58.195   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:20:58.195   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:20:58.195   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:20:58.195   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:20:58.195   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:20:58.195   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:20:58.195   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:20:58.195   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:58.195   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:20:58.195  rmmod nvme_tcp
00:20:58.195  rmmod nvme_fabrics
00:20:58.454  rmmod nvme_keyring
00:20:58.454   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:20:58.454   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:20:58.454   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:20:58.454   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 280107 ']'
00:20:58.454   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 280107
00:20:58.454   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 280107 ']'
00:20:58.454   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 280107
00:20:58.454  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (280107) - No such process
00:20:58.454   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 280107 is not found'
00:20:58.454  Process with pid 280107 is not found
00:20:58.454   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:20:58.454   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:20:58.454   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:20:58.454   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:20:58.454   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:20:58.454   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:20:58.454   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:20:58.454   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:20:58.454   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:20:58.454   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:58.454   04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:20:58.454    04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:00.361   04:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:21:00.361  
00:21:00.361  real	0m9.886s
00:21:00.361  user	0m24.136s
00:21:00.361  sys	0m5.543s
00:21:00.361   04:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:00.361   04:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:21:00.361  ************************************
00:21:00.361  END TEST nvmf_shutdown_tc4
00:21:00.361  ************************************
00:21:00.361   04:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:21:00.361  
00:21:00.361  real	0m36.898s
00:21:00.361  user	1m39.336s
00:21:00.361  sys	0m11.792s
00:21:00.361   04:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:00.361   04:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:21:00.361  ************************************
00:21:00.361  END TEST nvmf_shutdown
00:21:00.361  ************************************
00:21:00.361   04:11:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:21:00.361   04:11:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:21:00.361   04:11:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:00.361   04:11:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:21:00.361  ************************************
00:21:00.361  START TEST nvmf_nsid
00:21:00.361  ************************************
00:21:00.361   04:11:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:21:00.620  * Looking for test storage...
00:21:00.620  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:21:00.620    04:11:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:21:00.620     04:11:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version
00:21:00.621     04:11:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-:
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-:
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<'
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 ))
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:00.621     04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1
00:21:00.621     04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1
00:21:00.621     04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:21:00.621     04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1
00:21:00.621     04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2
00:21:00.621     04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2
00:21:00.621     04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:21:00.621     04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:21:00.621  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:00.621  		--rc genhtml_branch_coverage=1
00:21:00.621  		--rc genhtml_function_coverage=1
00:21:00.621  		--rc genhtml_legend=1
00:21:00.621  		--rc geninfo_all_blocks=1
00:21:00.621  		--rc geninfo_unexecuted_blocks=1
00:21:00.621  		
00:21:00.621  		'
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:21:00.621  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:00.621  		--rc genhtml_branch_coverage=1
00:21:00.621  		--rc genhtml_function_coverage=1
00:21:00.621  		--rc genhtml_legend=1
00:21:00.621  		--rc geninfo_all_blocks=1
00:21:00.621  		--rc geninfo_unexecuted_blocks=1
00:21:00.621  		
00:21:00.621  		'
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:21:00.621  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:00.621  		--rc genhtml_branch_coverage=1
00:21:00.621  		--rc genhtml_function_coverage=1
00:21:00.621  		--rc genhtml_legend=1
00:21:00.621  		--rc geninfo_all_blocks=1
00:21:00.621  		--rc geninfo_unexecuted_blocks=1
00:21:00.621  		
00:21:00.621  		'
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:21:00.621  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:00.621  		--rc genhtml_branch_coverage=1
00:21:00.621  		--rc genhtml_function_coverage=1
00:21:00.621  		--rc genhtml_legend=1
00:21:00.621  		--rc geninfo_all_blocks=1
00:21:00.621  		--rc geninfo_unexecuted_blocks=1
00:21:00.621  		
00:21:00.621  		'
00:21:00.621   04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:21:00.621     04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:21:00.621     04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:21:00.621     04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob
00:21:00.621     04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:21:00.621     04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:00.621     04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:00.621      04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:00.621      04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:00.621      04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:00.621      04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH
00:21:00.621      04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:21:00.621  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:21:00.621    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0
00:21:00.621   04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0
00:21:00.621   04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1
00:21:00.621   04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2
00:21:00.621   04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock
00:21:00.621   04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid=
00:21:00.621   04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit
00:21:00.621   04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:21:00.621   04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:21:00.622   04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs
00:21:00.622   04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no
00:21:00.622   04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns
00:21:00.622   04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:00.622   04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:00.622    04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:00.622   04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:21:00.622   04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:21:00.622   04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable
00:21:00.622   04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=()
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=()
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=()
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=()
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=()
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=()
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=()
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:21:03.156  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:21:03.156  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]]
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:21:03.156  Found net devices under 0000:0a:00.0: cvl_0_0
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]]
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:21:03.156  Found net devices under 0000:0a:00.1: cvl_0_1
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:21:03.156   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:21:03.156  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:03.156  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms
00:21:03.157  
00:21:03.157  --- 10.0.0.2 ping statistics ---
00:21:03.157  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:03.157  rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms
00:21:03.157   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:03.157  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:03.157  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms
00:21:03.157  
00:21:03.157  --- 10.0.0.1 ping statistics ---
00:21:03.157  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:03.157  rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms
00:21:03.157   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:03.157   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0
00:21:03.157   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:21:03.157   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:03.157   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:21:03.157   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:21:03.157   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:03.157   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:21:03.157   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:21:03.157   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1
00:21:03.157   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:21:03.157   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable
00:21:03.157   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:21:03.157   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=282903
00:21:03.157   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1
00:21:03.157   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 282903
00:21:03.157   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 282903 ']'
00:21:03.157   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:03.157   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:03.157   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:03.157  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:03.157   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:03.157   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:21:03.157  [2024-12-09 04:11:31.365152] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:21:03.157  [2024-12-09 04:11:31.365230] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:03.157  [2024-12-09 04:11:31.440090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:03.157  [2024-12-09 04:11:31.497280] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:03.157  [2024-12-09 04:11:31.497349] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:03.157  [2024-12-09 04:11:31.497364] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:03.157  [2024-12-09 04:11:31.497375] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:03.157  [2024-12-09 04:11:31.497400] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:03.157  [2024-12-09 04:11:31.497990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:03.157   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:03.157   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0
00:21:03.157   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:21:03.157   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:03.157   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:21:03.157   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:03.157   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT
00:21:03.157   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=283048
00:21:03.157   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock
00:21:03.157   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2
00:21:03.157    04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip
00:21:03.157    04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip
00:21:03.157    04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=()
00:21:03.157    04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates
00:21:03.157    04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:21:03.157    04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:21:03.157    04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:21:03.157    04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:21:03.157    04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:21:03.157    04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:21:03.157    04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:21:03.157   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1
00:21:03.157    04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen
00:21:03.157   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=7b65b6aa-8a1e-4e75-a3bb-1fedb5c82920
00:21:03.157    04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen
00:21:03.157   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=cf3fb3af-eeda-4c24-b17a-cc785633e568
00:21:03.157    04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen
00:21:03.157   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=51c56393-6ec0-4504-ae50-a8db3fd6141a
00:21:03.157   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd
00:21:03.157   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:03.157   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:21:03.157  null0
00:21:03.157  null1
00:21:03.157  null2
00:21:03.157  [2024-12-09 04:11:31.683365] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:03.157  [2024-12-09 04:11:31.698784] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:21:03.157  [2024-12-09 04:11:31.698852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid283048 ]
00:21:03.157  [2024-12-09 04:11:31.707633] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:03.419   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:03.419   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 283048 /var/tmp/tgt2.sock
00:21:03.419   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 283048 ']'
00:21:03.419   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock
00:21:03.419   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:03.419   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...'
00:21:03.419  Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...
00:21:03.419   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:03.419   04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:21:03.419  [2024-12-09 04:11:31.767619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:03.419  [2024-12-09 04:11:31.825891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:21:03.677   04:11:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:03.677   04:11:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0
00:21:03.677   04:11:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock
00:21:03.935  [2024-12-09 04:11:32.481583] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:03.935  [2024-12-09 04:11:32.497806] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 ***
00:21:04.194  nvme0n1 nvme0n2
00:21:04.194  nvme1n1
00:21:04.194    04:11:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect
00:21:04.194    04:11:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr
00:21:04.194    04:11:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
00:21:04.759    04:11:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme*
00:21:04.759    04:11:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]]
00:21:04.759    04:11:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]]
00:21:04.759    04:11:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0
00:21:04.759    04:11:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0
00:21:04.759   04:11:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0
00:21:04.759   04:11:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1
00:21:04.759   04:11:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0
00:21:04.759   04:11:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:21:04.759   04:11:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1
00:21:04.760   04:11:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']'
00:21:04.760   04:11:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1
00:21:04.760   04:11:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1
00:21:05.692   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:21:05.692   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1
00:21:05.692   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:21:05.692   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1
00:21:05.692   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0
00:21:05.692    04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 7b65b6aa-8a1e-4e75-a3bb-1fedb5c82920
00:21:05.692    04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d -
00:21:05.692    04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1
00:21:05.692    04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid
00:21:05.692     04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json
00:21:05.692     04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid
00:21:05.692    04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=7b65b6aa8a1e4e75a3bb1fedb5c82920
00:21:05.692    04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 7B65B6AA8A1E4E75A3BB1FEDB5C82920
00:21:05.692   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 7B65B6AA8A1E4E75A3BB1FEDB5C82920 == \7\B\6\5\B\6\A\A\8\A\1\E\4\E\7\5\A\3\B\B\1\F\E\D\B\5\C\8\2\9\2\0 ]]
00:21:05.692   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2
00:21:05.692   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0
00:21:05.692   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:21:05.692   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2
00:21:05.692   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:21:05.692   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2
00:21:05.692   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0
00:21:05.692    04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid cf3fb3af-eeda-4c24-b17a-cc785633e568
00:21:05.692    04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d -
00:21:05.692    04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2
00:21:05.692    04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid
00:21:05.692     04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json
00:21:05.692     04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid
00:21:05.692    04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=cf3fb3afeeda4c24b17acc785633e568
00:21:05.692    04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo CF3FB3AFEEDA4C24B17ACC785633E568
00:21:05.692   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ CF3FB3AFEEDA4C24B17ACC785633E568 == \C\F\3\F\B\3\A\F\E\E\D\A\4\C\2\4\B\1\7\A\C\C\7\8\5\6\3\3\E\5\6\8 ]]
00:21:05.692   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3
00:21:05.692   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0
00:21:05.692   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:21:05.692   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3
00:21:05.692   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:21:05.692   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3
00:21:05.692   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0
00:21:05.692    04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 51c56393-6ec0-4504-ae50-a8db3fd6141a
00:21:05.692    04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d -
00:21:05.692    04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3
00:21:05.692    04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid
00:21:05.692     04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json
00:21:05.692     04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid
00:21:05.949    04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=51c563936ec04504ae50a8db3fd6141a
00:21:05.949    04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 51C563936EC04504AE50A8DB3FD6141A
00:21:05.949   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 51C563936EC04504AE50A8DB3FD6141A == \5\1\C\5\6\3\9\3\6\E\C\0\4\5\0\4\A\E\5\0\A\8\D\B\3\F\D\6\1\4\1\A ]]
00:21:05.949   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0
00:21:05.949   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT
00:21:05.949   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup
00:21:05.949   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 283048
00:21:05.949   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 283048 ']'
00:21:05.949   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 283048
00:21:05.949    04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname
00:21:05.949   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:05.949    04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 283048
00:21:05.949   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:21:05.949   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:21:05.949   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 283048'
00:21:05.949  killing process with pid 283048
00:21:05.949   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 283048
00:21:05.949   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 283048
00:21:06.512   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini
00:21:06.512   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:06.512   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync
00:21:06.512   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:06.512   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e
00:21:06.512   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:06.512   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:06.512  rmmod nvme_tcp
00:21:06.512  rmmod nvme_fabrics
00:21:06.512  rmmod nvme_keyring
00:21:06.512   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:06.512   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e
00:21:06.512   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0
00:21:06.512   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 282903 ']'
00:21:06.512   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 282903
00:21:06.512   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 282903 ']'
00:21:06.512   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 282903
00:21:06.512    04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname
00:21:06.512   04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:06.512    04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 282903
00:21:06.512   04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:21:06.512   04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:21:06.512   04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 282903'
00:21:06.512  killing process with pid 282903
00:21:06.512   04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 282903
00:21:06.512   04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 282903
00:21:06.769   04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:21:06.769   04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:21:06.769   04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:21:06.769   04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr
00:21:06.769   04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save
00:21:06.769   04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore
00:21:06.769   04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:21:06.769   04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:21:06.769   04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns
00:21:06.769   04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:06.769   04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:06.769    04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:09.302   04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:21:09.302  
00:21:09.302  real	0m8.368s
00:21:09.302  user	0m8.267s
00:21:09.302  sys	0m2.629s
00:21:09.302   04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:09.302   04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:21:09.302  ************************************
00:21:09.302  END TEST nvmf_nsid
00:21:09.302  ************************************
00:21:09.302   04:11:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:21:09.302  
00:21:09.302  real	11m47.763s
00:21:09.302  user	27m57.744s
00:21:09.302  sys	2m43.459s
00:21:09.302   04:11:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:09.302   04:11:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:21:09.302  ************************************
00:21:09.302  END TEST nvmf_target_extra
00:21:09.302  ************************************
00:21:09.302   04:11:37 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:21:09.302   04:11:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:21:09.302   04:11:37 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:09.302   04:11:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:21:09.302  ************************************
00:21:09.302  START TEST nvmf_host
00:21:09.302  ************************************
00:21:09.302   04:11:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:21:09.302  * Looking for test storage...
00:21:09.302  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:21:09.302     04:11:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version
00:21:09.302     04:11:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-:
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-:
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<'
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 ))
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:09.302     04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1
00:21:09.302     04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1
00:21:09.302     04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:21:09.302     04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1
00:21:09.302     04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2
00:21:09.302     04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2
00:21:09.302     04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:21:09.302     04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:21:09.302  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:09.302  		--rc genhtml_branch_coverage=1
00:21:09.302  		--rc genhtml_function_coverage=1
00:21:09.302  		--rc genhtml_legend=1
00:21:09.302  		--rc geninfo_all_blocks=1
00:21:09.302  		--rc geninfo_unexecuted_blocks=1
00:21:09.302  		
00:21:09.302  		'
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:21:09.302  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:09.302  		--rc genhtml_branch_coverage=1
00:21:09.302  		--rc genhtml_function_coverage=1
00:21:09.302  		--rc genhtml_legend=1
00:21:09.302  		--rc geninfo_all_blocks=1
00:21:09.302  		--rc geninfo_unexecuted_blocks=1
00:21:09.302  		
00:21:09.302  		'
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:21:09.302  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:09.302  		--rc genhtml_branch_coverage=1
00:21:09.302  		--rc genhtml_function_coverage=1
00:21:09.302  		--rc genhtml_legend=1
00:21:09.302  		--rc geninfo_all_blocks=1
00:21:09.302  		--rc geninfo_unexecuted_blocks=1
00:21:09.302  		
00:21:09.302  		'
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:21:09.302  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:09.302  		--rc genhtml_branch_coverage=1
00:21:09.302  		--rc genhtml_function_coverage=1
00:21:09.302  		--rc genhtml_legend=1
00:21:09.302  		--rc geninfo_all_blocks=1
00:21:09.302  		--rc geninfo_unexecuted_blocks=1
00:21:09.302  		
00:21:09.302  		'
00:21:09.302   04:11:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:21:09.302     04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:21:09.302     04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:21:09.302     04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob
00:21:09.302     04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:21:09.302     04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:09.302     04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:09.302      04:11:37 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:09.302      04:11:37 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:09.302      04:11:37 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:09.302      04:11:37 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH
00:21:09.302      04:11:37 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:09.302    04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:21:09.303  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0
00:21:09.303   04:11:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:21:09.303   04:11:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@")
00:21:09.303   04:11:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]]
00:21:09.303   04:11:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:21:09.303   04:11:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:21:09.303   04:11:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:09.303   04:11:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:21:09.303  ************************************
00:21:09.303  START TEST nvmf_multicontroller
00:21:09.303  ************************************
00:21:09.303   04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:21:09.303  * Looking for test storage...
00:21:09.303  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:21:09.303     04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version
00:21:09.303     04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-:
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-:
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<'
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 ))
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:09.303     04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1
00:21:09.303     04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1
00:21:09.303     04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:21:09.303     04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1
00:21:09.303     04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2
00:21:09.303     04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2
00:21:09.303     04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:21:09.303     04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:21:09.303  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:09.303  		--rc genhtml_branch_coverage=1
00:21:09.303  		--rc genhtml_function_coverage=1
00:21:09.303  		--rc genhtml_legend=1
00:21:09.303  		--rc geninfo_all_blocks=1
00:21:09.303  		--rc geninfo_unexecuted_blocks=1
00:21:09.303  		
00:21:09.303  		'
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:21:09.303  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:09.303  		--rc genhtml_branch_coverage=1
00:21:09.303  		--rc genhtml_function_coverage=1
00:21:09.303  		--rc genhtml_legend=1
00:21:09.303  		--rc geninfo_all_blocks=1
00:21:09.303  		--rc geninfo_unexecuted_blocks=1
00:21:09.303  		
00:21:09.303  		'
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:21:09.303  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:09.303  		--rc genhtml_branch_coverage=1
00:21:09.303  		--rc genhtml_function_coverage=1
00:21:09.303  		--rc genhtml_legend=1
00:21:09.303  		--rc geninfo_all_blocks=1
00:21:09.303  		--rc geninfo_unexecuted_blocks=1
00:21:09.303  		
00:21:09.303  		'
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:21:09.303  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:09.303  		--rc genhtml_branch_coverage=1
00:21:09.303  		--rc genhtml_function_coverage=1
00:21:09.303  		--rc genhtml_legend=1
00:21:09.303  		--rc geninfo_all_blocks=1
00:21:09.303  		--rc geninfo_unexecuted_blocks=1
00:21:09.303  		
00:21:09.303  		'
00:21:09.303   04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:21:09.303     04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:21:09.303     04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:21:09.303    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:21:09.303     04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob
00:21:09.303     04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:21:09.303     04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:09.303     04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:09.303      04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:09.303      04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:09.303      04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:09.303      04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH
00:21:09.304      04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:09.304    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0
00:21:09.304    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:21:09.304    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:21:09.304    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:21:09.304    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:21:09.304    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:21:09.304    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:21:09.304  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:21:09.304    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:21:09.304    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:21:09.304    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0
00:21:09.304   04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64
00:21:09.304   04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:21:09.304   04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000
00:21:09.304   04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001
00:21:09.304   04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:21:09.304   04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']'
00:21:09.304   04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit
00:21:09.304   04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:21:09.304   04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:21:09.304   04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs
00:21:09.304   04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no
00:21:09.304   04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns
00:21:09.304   04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:09.304   04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:09.304    04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:09.304   04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:21:09.304   04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:21:09.304   04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable
00:21:09.304   04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:11.202   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:21:11.202   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=()
00:21:11.202   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs
00:21:11.202   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=()
00:21:11.202   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:21:11.202   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=()
00:21:11.202   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers
00:21:11.202   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=()
00:21:11.202   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs
00:21:11.202   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=()
00:21:11.202   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810
00:21:11.202   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=()
00:21:11.202   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722
00:21:11.202   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=()
00:21:11.202   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx
00:21:11.202   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:21:11.202   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:21:11.202   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:21:11.202   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:21:11.202   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:21:11.202   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:21:11.202   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:21:11.202   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:21:11.202   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:21:11.202   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:21:11.202   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:21:11.202   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:21:11.202   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:21:11.203   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:21:11.203   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:21:11.203   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:21:11.203   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:21:11.203   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:21:11.203   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:21:11.203   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:21:11.203  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:21:11.203   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:21:11.203   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:21:11.203   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:21:11.203   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:21:11.203   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:21:11.203   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:21:11.203   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:21:11.203  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:21:11.203   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:21:11.203   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:21:11.203   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:21:11.203   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:21:11.203   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:21:11.203   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:21:11.203   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:21:11.203   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:21:11.203   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:21:11.203   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:11.203   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:21:11.203   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:21:11.203   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]]
00:21:11.203   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:21:11.203   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:11.203   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:21:11.203  Found net devices under 0000:0a:00.0: cvl_0_0
00:21:11.203   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:21:11.203   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:21:11.203   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:11.203   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:21:11.203   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:21:11.462   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]]
00:21:11.462   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:21:11.462   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:11.462   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:21:11.462  Found net devices under 0000:0a:00.1: cvl_0_1
00:21:11.462   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:21:11.462   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:21:11.462   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes
00:21:11.462   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:21:11.462   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:21:11.462   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:21:11.462   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:21:11.462   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:21:11.462   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:21:11.462   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:21:11.462   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:21:11.462   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:21:11.462   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:21:11.462   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:21:11.462   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:21:11.462   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:21:11.462   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:21:11.462   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:21:11.462   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:21:11.462   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:21:11.462   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:21:11.462   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:21:11.462   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:21:11.462   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:21:11.462   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:21:11.462   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:21:11.462   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:11.462   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:21:11.462   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:21:11.462  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:11.462  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms
00:21:11.462  
00:21:11.462  --- 10.0.0.2 ping statistics ---
00:21:11.462  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:11.462  rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms
00:21:11.462   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:11.462  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:11.462  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms
00:21:11.462  
00:21:11.462  --- 10.0.0.1 ping statistics ---
00:21:11.462  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:11.462  rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms
00:21:11.462   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:11.462   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0
00:21:11.462   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:21:11.462   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:11.462   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:21:11.463   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:21:11.463   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:11.463   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:21:11.463   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:21:11.463   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE
00:21:11.463   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:21:11.463   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable
00:21:11.463   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:11.463   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=285481
00:21:11.463   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 285481
00:21:11.463   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:21:11.463   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 285481 ']'
00:21:11.463   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:11.463   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:11.463   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:11.463  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:11.463   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:11.463   04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:11.463  [2024-12-09 04:11:39.985364] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:21:11.463  [2024-12-09 04:11:39.985476] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:11.721  [2024-12-09 04:11:40.062109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:21:11.721  [2024-12-09 04:11:40.123484] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:11.721  [2024-12-09 04:11:40.123578] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:11.721  [2024-12-09 04:11:40.123592] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:11.721  [2024-12-09 04:11:40.123603] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:11.721  [2024-12-09 04:11:40.123628] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:11.721  [2024-12-09 04:11:40.125293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:21:11.721  [2024-12-09 04:11:40.125323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:21:11.721  [2024-12-09 04:11:40.125327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:21:11.721   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:11.721   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0
00:21:11.721   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:21:11.721   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:11.721   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:11.721   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:11.721   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:21:11.721   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:11.721   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:11.721  [2024-12-09 04:11:40.280808] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:11.721   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:11.721   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:21:11.721   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:11.721   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:11.980  Malloc0
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:11.980  [2024-12-09 04:11:40.343232] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:11.980  [2024-12-09 04:11:40.351051] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:11.980  Malloc1
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
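The block above provisions the target over JSON-RPC: one TCP transport, then two 64 MiB malloc bdevs, each exported through its own subsystem with listeners on both port 4420 and 4421 (two ports per subsystem is what gives the later multipath cases a second path). `rpc_cmd` in the trace is SPDK's wrapper around `scripts/rpc.py`; the relative path below is an assumption. Shown for `cnode1`/`Malloc0`; the trace repeats the same sequence for `cnode2`/`Malloc1`:

```shell
RPC=./scripts/rpc.py   # assumed location of SPDK's RPC client

# TCP transport with C2H success optimization (-o) and 8 KiB I/O unit size.
$RPC nvmf_create_transport -t tcp -o -u 8192

# 64 MiB malloc bdev (64 blocks are actually 64 MiB: 64 x 1 MiB? No --
# per the trace: 64 MiB total is not stated; args are size 64, block size 512).
$RPC bdev_malloc_create 64 512 -b Malloc0

# Subsystem with any-host access (-a) and a fixed serial number (-s).
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

# Two listeners on the same address give two distinct network paths.
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
```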
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=285511
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 285511 /var/tmp/bdevperf.sock
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 285511 ']'
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:21:11.980  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:11.980   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:12.239   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:12.239   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0
00:21:12.239   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
00:21:12.239   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:12.239   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:12.497  NVMe0n1
00:21:12.497   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:12.497   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:12.497   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe
00:21:12.497   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:12.497   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:12.497   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:12.497  1
00:21:12.497   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001
00:21:12.497   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0
00:21:12.497   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001
00:21:12.497   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:21:12.497   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:12.497    04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:21:12.497   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:12.497   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001
00:21:12.497   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:12.497   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:12.497  request:
00:21:12.497  {
00:21:12.497  "name": "NVMe0",
00:21:12.497  "trtype": "tcp",
00:21:12.497  "traddr": "10.0.0.2",
00:21:12.497  "adrfam": "ipv4",
00:21:12.497  "trsvcid": "4420",
00:21:12.497  "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:21:12.497  "hostnqn": "nqn.2021-09-7.io.spdk:00001",
00:21:12.497  "hostaddr": "10.0.0.1",
00:21:12.497  "prchk_reftag": false,
00:21:12.497  "prchk_guard": false,
00:21:12.497  "hdgst": false,
00:21:12.497  "ddgst": false,
00:21:12.497  "allow_unrecognized_csi": false,
00:21:12.497  "method": "bdev_nvme_attach_controller",
00:21:12.497  "req_id": 1
00:21:12.497  }
00:21:12.497  Got JSON-RPC error response
00:21:12.497  response:
00:21:12.497  {
00:21:12.497  "code": -114,
00:21:12.497  "message": "A controller named NVMe0 already exists with the specified network path"
00:21:12.497  }
00:21:12.497   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:21:12.497   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1
00:21:12.497   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:21:12.497   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:21:12.497   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:21:12.497   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
00:21:12.497   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0
00:21:12.497   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
00:21:12.497   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:21:12.497   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:12.497    04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:21:12.497   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:12.497   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
00:21:12.497   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:12.497   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:12.497  request:
00:21:12.497  {
00:21:12.497  "name": "NVMe0",
00:21:12.497  "trtype": "tcp",
00:21:12.497  "traddr": "10.0.0.2",
00:21:12.497  "adrfam": "ipv4",
00:21:12.497  "trsvcid": "4420",
00:21:12.497  "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:21:12.497  "hostaddr": "10.0.0.1",
00:21:12.497  "prchk_reftag": false,
00:21:12.497  "prchk_guard": false,
00:21:12.497  "hdgst": false,
00:21:12.497  "ddgst": false,
00:21:12.497  "allow_unrecognized_csi": false,
00:21:12.497  "method": "bdev_nvme_attach_controller",
00:21:12.497  "req_id": 1
00:21:12.497  }
00:21:12.497  Got JSON-RPC error response
00:21:12.497  response:
00:21:12.497  {
00:21:12.497  "code": -114,
00:21:12.497  "message": "A controller named NVMe0 already exists with the specified network path"
00:21:12.497  }
00:21:12.497   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:21:12.497   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1
00:21:12.497   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:21:12.497   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:21:12.498   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:21:12.498   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable
00:21:12.498   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0
00:21:12.498   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable
00:21:12.498   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:21:12.498   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:12.498    04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:21:12.498   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:12.498   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable
00:21:12.498   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:12.498   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:12.498  request:
00:21:12.498  {
00:21:12.498  "name": "NVMe0",
00:21:12.498  "trtype": "tcp",
00:21:12.498  "traddr": "10.0.0.2",
00:21:12.498  "adrfam": "ipv4",
00:21:12.498  "trsvcid": "4420",
00:21:12.498  "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:21:12.498  "hostaddr": "10.0.0.1",
00:21:12.498  "prchk_reftag": false,
00:21:12.498  "prchk_guard": false,
00:21:12.498  "hdgst": false,
00:21:12.498  "ddgst": false,
00:21:12.498  "multipath": "disable",
00:21:12.498  "allow_unrecognized_csi": false,
00:21:12.498  "method": "bdev_nvme_attach_controller",
00:21:12.498  "req_id": 1
00:21:12.498  }
00:21:12.498  Got JSON-RPC error response
00:21:12.498  response:
00:21:12.498  {
00:21:12.498  "code": -114,
00:21:12.498  "message": "A controller named NVMe0 already exists and multipath is disabled"
00:21:12.498  }
00:21:12.498   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:21:12.498   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1
00:21:12.498   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:21:12.498   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:21:12.498   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:21:12.498   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover
00:21:12.498   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0
00:21:12.498   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover
00:21:12.498   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:21:12.498   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:12.498    04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:21:12.498   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:12.498   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover
00:21:12.498   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:12.498   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:12.498  request:
00:21:12.498  {
00:21:12.498  "name": "NVMe0",
00:21:12.498  "trtype": "tcp",
00:21:12.498  "traddr": "10.0.0.2",
00:21:12.498  "adrfam": "ipv4",
00:21:12.498  "trsvcid": "4420",
00:21:12.498  "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:21:12.498  "hostaddr": "10.0.0.1",
00:21:12.498  "prchk_reftag": false,
00:21:12.498  "prchk_guard": false,
00:21:12.498  "hdgst": false,
00:21:12.498  "ddgst": false,
00:21:12.498  "multipath": "failover",
00:21:12.498  "allow_unrecognized_csi": false,
00:21:12.498  "method": "bdev_nvme_attach_controller",
00:21:12.498  "req_id": 1
00:21:12.498  }
00:21:12.498  Got JSON-RPC error response
00:21:12.498  response:
00:21:12.498  {
00:21:12.498  "code": -114,
00:21:12.498  "message": "A controller named NVMe0 already exists with the specified network path"
00:21:12.498  }
00:21:12.498   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:21:12.498   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1
00:21:12.498   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:21:12.498   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:21:12.498   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:21:12.498   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:12.498   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:12.498   04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:12.756  NVMe0n1
00:21:12.756   04:11:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:12.756   04:11:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:12.756   04:11:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:12.756   04:11:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:12.756   04:11:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:12.756   04:11:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
00:21:12.756   04:11:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:12.756   04:11:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:13.014  
00:21:13.014   04:11:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:13.014    04:11:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:13.014    04:11:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe
00:21:13.014    04:11:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:13.014    04:11:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:13.014    04:11:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:13.014   04:11:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']'
00:21:13.014   04:11:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:21:13.949  {
00:21:13.949    "results": [
00:21:13.949      {
00:21:13.949        "job": "NVMe0n1",
00:21:13.949        "core_mask": "0x1",
00:21:13.949        "workload": "write",
00:21:13.949        "status": "finished",
00:21:13.949        "queue_depth": 128,
00:21:13.949        "io_size": 4096,
00:21:13.949        "runtime": 1.005762,
00:21:13.949        "iops": 18249.84439658687,
00:21:13.949        "mibps": 71.28845467416745,
00:21:13.949        "io_failed": 0,
00:21:13.949        "io_timeout": 0,
00:21:13.949        "avg_latency_us": 7002.143096986389,
00:21:13.949        "min_latency_us": 4514.702222222222,
00:21:13.949        "max_latency_us": 14854.826666666666
00:21:13.949      }
00:21:13.949    ],
00:21:13.949    "core_count": 1
00:21:13.949  }
00:21:14.207   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1
00:21:14.207   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:14.207   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:14.207   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:14.207   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]]
00:21:14.207   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 285511
00:21:14.207   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 285511 ']'
00:21:14.207   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 285511
00:21:14.207    04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname
00:21:14.207   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:14.207    04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 285511
00:21:14.207   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:21:14.207   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:21:14.207   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 285511'
00:21:14.207  killing process with pid 285511
00:21:14.207   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 285511
00:21:14.207   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 285511
00:21:14.465   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:21:14.466   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:14.466   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:14.466   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:14.466   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:21:14.466   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:14.466   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:14.466   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:14.466   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT
00:21:14.466   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:21:14.466   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file
00:21:14.466    04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f
00:21:14.466    04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u
00:21:14.466   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat
00:21:14.466  --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:21:14.466  [2024-12-09 04:11:40.456103] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:21:14.466  [2024-12-09 04:11:40.456192] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid285511 ]
00:21:14.466  [2024-12-09 04:11:40.528170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:14.466  [2024-12-09 04:11:40.589917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:14.466  [2024-12-09 04:11:41.384381] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 9b04d6a0-267e-41a8-bced-539ae2b96927 already exists
00:21:14.466  [2024-12-09 04:11:41.384423] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:9b04d6a0-267e-41a8-bced-539ae2b96927 alias for bdev NVMe1n1
00:21:14.466  [2024-12-09 04:11:41.384438] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:21:14.466  Running I/O for 1 seconds...
00:21:14.466      18227.00 IOPS,    71.20 MiB/s
00:21:14.466                                                                                                  Latency(us)
00:21:14.466  
[2024-12-09T03:11:43.042Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:21:14.466  Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:21:14.466  	 NVMe0n1             :       1.01   18249.84      71.29       0.00     0.00    7002.14    4514.70   14854.83
00:21:14.466  
[2024-12-09T03:11:43.042Z]  ===================================================================================================================
00:21:14.466  
[2024-12-09T03:11:43.042Z]  Total                       :              18249.84      71.29       0.00     0.00    7002.14    4514.70   14854.83
00:21:14.466  Received shutdown signal, test time was about 1.000000 seconds
00:21:14.466  
00:21:14.466                                                                                                  Latency(us)
00:21:14.466  
[2024-12-09T03:11:43.042Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:21:14.466  
[2024-12-09T03:11:43.042Z]  ===================================================================================================================
00:21:14.466  
[2024-12-09T03:11:43.042Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:21:14.466  --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:21:14.466   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:21:14.466   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file
00:21:14.466   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini
00:21:14.466   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:14.466   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync
00:21:14.466   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:14.466   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e
00:21:14.466   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:14.466   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:14.466  rmmod nvme_tcp
00:21:14.466  rmmod nvme_fabrics
00:21:14.466  rmmod nvme_keyring
00:21:14.466   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:14.466   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e
00:21:14.466   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0
00:21:14.466   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 285481 ']'
00:21:14.466   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 285481
00:21:14.466   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 285481 ']'
00:21:14.466   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 285481
00:21:14.466    04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname
00:21:14.466   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:14.466    04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 285481
00:21:14.466   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:21:14.466   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:21:14.466   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 285481'
00:21:14.466  killing process with pid 285481
00:21:14.466   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 285481
00:21:14.466   04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 285481
00:21:14.725   04:11:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:21:14.725   04:11:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:21:14.725   04:11:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:21:14.725   04:11:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr
00:21:14.725   04:11:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save
00:21:14.725   04:11:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:21:14.725   04:11:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore
00:21:14.725   04:11:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:21:14.725   04:11:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns
00:21:14.725   04:11:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:14.725   04:11:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:14.725    04:11:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:17.257   04:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:21:17.257  
00:21:17.257  real	0m7.715s
00:21:17.257  user	0m12.415s
00:21:17.257  sys	0m2.406s
00:21:17.257   04:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:17.257   04:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:17.257  ************************************
00:21:17.257  END TEST nvmf_multicontroller
00:21:17.257  ************************************
00:21:17.257   04:11:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp
00:21:17.257   04:11:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:21:17.257   04:11:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:17.257   04:11:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:21:17.257  ************************************
00:21:17.257  START TEST nvmf_aer
00:21:17.257  ************************************
00:21:17.257   04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp
00:21:17.257  * Looking for test storage...
00:21:17.257  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:21:17.257    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:21:17.257     04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version
00:21:17.257     04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:21:17.257    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:21:17.257    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:21:17.257    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l
00:21:17.257    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l
00:21:17.257    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-:
00:21:17.257    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1
00:21:17.257    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-:
00:21:17.257    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2
00:21:17.257    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<'
00:21:17.257    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2
00:21:17.257    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1
00:21:17.257    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:21:17.257    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in
00:21:17.257    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1
00:21:17.257    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 ))
00:21:17.257    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:17.257     04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1
00:21:17.257     04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1
00:21:17.257     04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:21:17.257     04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1
00:21:17.257    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1
00:21:17.257     04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2
00:21:17.257     04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2
00:21:17.257     04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:21:17.257     04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2
00:21:17.257    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2
00:21:17.257    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:21:17.257    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:21:17.257    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0
00:21:17.258    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:21:17.258    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:21:17.258  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:17.258  		--rc genhtml_branch_coverage=1
00:21:17.258  		--rc genhtml_function_coverage=1
00:21:17.258  		--rc genhtml_legend=1
00:21:17.258  		--rc geninfo_all_blocks=1
00:21:17.258  		--rc geninfo_unexecuted_blocks=1
00:21:17.258  		
00:21:17.258  		'
00:21:17.258    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:21:17.258  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:17.258  		--rc genhtml_branch_coverage=1
00:21:17.258  		--rc genhtml_function_coverage=1
00:21:17.258  		--rc genhtml_legend=1
00:21:17.258  		--rc geninfo_all_blocks=1
00:21:17.258  		--rc geninfo_unexecuted_blocks=1
00:21:17.258  		
00:21:17.258  		'
00:21:17.258    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:21:17.258  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:17.258  		--rc genhtml_branch_coverage=1
00:21:17.258  		--rc genhtml_function_coverage=1
00:21:17.258  		--rc genhtml_legend=1
00:21:17.258  		--rc geninfo_all_blocks=1
00:21:17.258  		--rc geninfo_unexecuted_blocks=1
00:21:17.258  		
00:21:17.258  		'
00:21:17.258    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:21:17.258  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:17.258  		--rc genhtml_branch_coverage=1
00:21:17.258  		--rc genhtml_function_coverage=1
00:21:17.258  		--rc genhtml_legend=1
00:21:17.258  		--rc geninfo_all_blocks=1
00:21:17.258  		--rc geninfo_unexecuted_blocks=1
00:21:17.258  		
00:21:17.258  		'
00:21:17.258   04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:21:17.258     04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s
00:21:17.258    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:21:17.258    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:21:17.258    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:21:17.258    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:21:17.258    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:21:17.258    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:21:17.258    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:21:17.258    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:21:17.258    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:21:17.258     04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:21:17.258    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:17.258    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:21:17.258    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:21:17.258    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:21:17.258    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:21:17.258    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:21:17.258    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:21:17.258     04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob
00:21:17.258     04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:21:17.258     04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:17.258     04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:17.258      04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:17.258      04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:17.258      04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:17.258      04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH
00:21:17.258      04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:17.258    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0
00:21:17.258    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:21:17.258    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:21:17.258    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:21:17.258    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:21:17.258    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:21:17.258    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:21:17.258  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:21:17.258    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:21:17.258    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:21:17.258    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0
00:21:17.258   04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit
00:21:17.258   04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:21:17.258   04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:21:17.258   04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs
00:21:17.258   04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no
00:21:17.258   04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns
00:21:17.258   04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:17.258   04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:17.258    04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:17.258   04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:21:17.258   04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:21:17.258   04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable
00:21:17.258   04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:21:19.162   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:21:19.162   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=()
00:21:19.162   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs
00:21:19.162   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=()
00:21:19.162   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:21:19.162   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=()
00:21:19.162   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers
00:21:19.162   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=()
00:21:19.162   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs
00:21:19.162   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=()
00:21:19.162   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=()
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=()
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:21:19.163  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:21:19.163  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]]
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:21:19.163  Found net devices under 0000:0a:00.0: cvl_0_0
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]]
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:21:19.163  Found net devices under 0000:0a:00.1: cvl_0_1
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:21:19.163  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:19.163  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms
00:21:19.163  
00:21:19.163  --- 10.0.0.2 ping statistics ---
00:21:19.163  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:19.163  rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:19.163  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:19.163  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms
00:21:19.163  
00:21:19.163  --- 10.0.0.1 ping statistics ---
00:21:19.163  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:19.163  rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=287851
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 287851
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 287851 ']'
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:19.163   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:19.164   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:19.164  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:19.164   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:19.164   04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:21:19.420  [2024-12-09 04:11:47.775884] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:21:19.421  [2024-12-09 04:11:47.775973] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:19.421  [2024-12-09 04:11:47.847142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:21:19.421  [2024-12-09 04:11:47.904942] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:19.421  [2024-12-09 04:11:47.904994] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:19.421  [2024-12-09 04:11:47.905022] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:19.421  [2024-12-09 04:11:47.905033] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:19.421  [2024-12-09 04:11:47.905043] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:19.421  [2024-12-09 04:11:47.906687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:21:19.421  [2024-12-09 04:11:47.906752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:21:19.421  [2024-12-09 04:11:47.906816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:21:19.421  [2024-12-09 04:11:47.906819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:19.678   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:19.678   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0
00:21:19.678   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:21:19.678   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:19.678   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:21:19.678   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:19.678   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:21:19.678   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.678   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:21:19.678  [2024-12-09 04:11:48.058213] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:19.678   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.678   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0
00:21:19.678   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.678   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:21:19.678  Malloc0
00:21:19.678   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.678   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
00:21:19.678   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.678   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:21:19.678   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.678   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:21:19.678   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.678   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:21:19.678   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.678   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:19.678   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.678   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:21:19.678  [2024-12-09 04:11:48.119887] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:19.678   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.678   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems
00:21:19.678   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.678   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:21:19.678  [
00:21:19.678  {
00:21:19.678  "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:21:19.678  "subtype": "Discovery",
00:21:19.678  "listen_addresses": [],
00:21:19.678  "allow_any_host": true,
00:21:19.678  "hosts": []
00:21:19.678  },
00:21:19.678  {
00:21:19.678  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:21:19.678  "subtype": "NVMe",
00:21:19.678  "listen_addresses": [
00:21:19.678  {
00:21:19.678  "trtype": "TCP",
00:21:19.678  "adrfam": "IPv4",
00:21:19.678  "traddr": "10.0.0.2",
00:21:19.678  "trsvcid": "4420"
00:21:19.678  }
00:21:19.678  ],
00:21:19.678  "allow_any_host": true,
00:21:19.678  "hosts": [],
00:21:19.678  "serial_number": "SPDK00000000000001",
00:21:19.678  "model_number": "SPDK bdev Controller",
00:21:19.678  "max_namespaces": 2,
00:21:19.678  "min_cntlid": 1,
00:21:19.678  "max_cntlid": 65519,
00:21:19.678  "namespaces": [
00:21:19.678  {
00:21:19.678  "nsid": 1,
00:21:19.678  "bdev_name": "Malloc0",
00:21:19.678  "name": "Malloc0",
00:21:19.678  "nguid": "C5065B8CEC3C44EF833B44B01434F6A8",
00:21:19.678  "uuid": "c5065b8c-ec3c-44ef-833b-44b01434f6a8"
00:21:19.678  }
00:21:19.678  ]
00:21:19.678  }
00:21:19.678  ]
00:21:19.678   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.678   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:21:19.678   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file
00:21:19.678   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=287877
00:21:19.678   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r '        trtype:tcp         adrfam:IPv4         traddr:10.0.0.2         trsvcid:4420         subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file
00:21:19.678   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file
00:21:19.678   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0
00:21:19.679   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:21:19.679   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']'
00:21:19.679   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1
00:21:19.679   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1
00:21:19.679   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:21:19.679   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']'
00:21:19.679   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2
00:21:19.679   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1
00:21:19.935   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:21:19.935   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:21:19.935   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0
00:21:19.935   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
00:21:19.935   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.935   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:21:19.935  Malloc1
00:21:19.935   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.935   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
00:21:19.935   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.935   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:21:19.935   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.935   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems
00:21:19.935   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.935   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:21:19.935  Asynchronous Event Request test
00:21:19.935  Attaching to 10.0.0.2
00:21:19.935  Attached to 10.0.0.2
00:21:19.935  Registering asynchronous event callbacks...
00:21:19.935  Starting namespace attribute notice tests for all controllers...
00:21:19.935  10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:21:19.935  aer_cb - Changed Namespace
00:21:19.935  Cleaning up...
00:21:19.935  [
00:21:19.935  {
00:21:19.935  "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:21:19.936  "subtype": "Discovery",
00:21:19.936  "listen_addresses": [],
00:21:19.936  "allow_any_host": true,
00:21:19.936  "hosts": []
00:21:19.936  },
00:21:19.936  {
00:21:19.936  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:21:19.936  "subtype": "NVMe",
00:21:19.936  "listen_addresses": [
00:21:19.936  {
00:21:19.936  "trtype": "TCP",
00:21:19.936  "adrfam": "IPv4",
00:21:19.936  "traddr": "10.0.0.2",
00:21:19.936  "trsvcid": "4420"
00:21:19.936  }
00:21:19.936  ],
00:21:19.936  "allow_any_host": true,
00:21:19.936  "hosts": [],
00:21:19.936  "serial_number": "SPDK00000000000001",
00:21:19.936  "model_number": "SPDK bdev Controller",
00:21:19.936  "max_namespaces": 2,
00:21:19.936  "min_cntlid": 1,
00:21:19.936  "max_cntlid": 65519,
00:21:19.936  "namespaces": [
00:21:19.936  {
00:21:19.936  "nsid": 1,
00:21:19.936  "bdev_name": "Malloc0",
00:21:19.936  "name": "Malloc0",
00:21:19.936  "nguid": "C5065B8CEC3C44EF833B44B01434F6A8",
00:21:19.936  "uuid": "c5065b8c-ec3c-44ef-833b-44b01434f6a8"
00:21:19.936  },
00:21:19.936  {
00:21:19.936  "nsid": 2,
00:21:19.936  "bdev_name": "Malloc1",
00:21:19.936  "name": "Malloc1",
00:21:19.936  "nguid": "042A60A85DF245F8A39D93DFE4664FAF",
00:21:19.936  "uuid": "042a60a8-5df2-45f8-a39d-93dfe4664faf"
00:21:19.936  }
00:21:19.936  ]
00:21:19.936  }
00:21:19.936  ]
00:21:19.936   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.936   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 287877
00:21:19.936   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0
00:21:19.936   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.936   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:21:19.936   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.936   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1
00:21:19.936   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.936   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:21:19.936   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.936   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:21:19.936   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.936   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:21:19.936   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.936   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT
00:21:19.936   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini
00:21:19.936   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:19.936   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync
00:21:19.936   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:19.936   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e
00:21:19.936   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:19.936   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:19.936  rmmod nvme_tcp
00:21:20.193  rmmod nvme_fabrics
00:21:20.193  rmmod nvme_keyring
00:21:20.193   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:20.193   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e
00:21:20.193   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0
00:21:20.193   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 287851 ']'
00:21:20.193   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 287851
00:21:20.193   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 287851 ']'
00:21:20.193   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 287851
00:21:20.193    04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname
00:21:20.193   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:20.193    04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 287851
00:21:20.193   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:21:20.193   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:21:20.193   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 287851'
00:21:20.193  killing process with pid 287851
00:21:20.193   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 287851
00:21:20.193   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 287851
00:21:20.451   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:21:20.451   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:21:20.451   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:21:20.451   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr
00:21:20.451   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save
00:21:20.451   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:21:20.451   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore
00:21:20.451   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:21:20.451   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns
00:21:20.451   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:20.451   04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:20.451    04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:22.358   04:11:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:21:22.358  
00:21:22.358  real	0m5.566s
00:21:22.358  user	0m4.415s
00:21:22.358  sys	0m2.027s
00:21:22.358   04:11:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:22.358   04:11:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:21:22.358  ************************************
00:21:22.358  END TEST nvmf_aer
00:21:22.358  ************************************
00:21:22.358   04:11:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp
00:21:22.358   04:11:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:21:22.358   04:11:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:22.358   04:11:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:21:22.358  ************************************
00:21:22.358  START TEST nvmf_async_init
00:21:22.358  ************************************
00:21:22.358   04:11:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp
00:21:22.617  * Looking for test storage...
00:21:22.617  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:21:22.617    04:11:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:21:22.617     04:11:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version
00:21:22.617     04:11:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-:
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-:
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<'
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 ))
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:22.617     04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1
00:21:22.617     04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1
00:21:22.617     04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:21:22.617     04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1
00:21:22.617     04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2
00:21:22.617     04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2
00:21:22.617     04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:21:22.617     04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:21:22.617  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:22.617  		--rc genhtml_branch_coverage=1
00:21:22.617  		--rc genhtml_function_coverage=1
00:21:22.617  		--rc genhtml_legend=1
00:21:22.617  		--rc geninfo_all_blocks=1
00:21:22.617  		--rc geninfo_unexecuted_blocks=1
00:21:22.617  		
00:21:22.617  		'
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:21:22.617  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:22.617  		--rc genhtml_branch_coverage=1
00:21:22.617  		--rc genhtml_function_coverage=1
00:21:22.617  		--rc genhtml_legend=1
00:21:22.617  		--rc geninfo_all_blocks=1
00:21:22.617  		--rc geninfo_unexecuted_blocks=1
00:21:22.617  		
00:21:22.617  		'
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:21:22.617  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:22.617  		--rc genhtml_branch_coverage=1
00:21:22.617  		--rc genhtml_function_coverage=1
00:21:22.617  		--rc genhtml_legend=1
00:21:22.617  		--rc geninfo_all_blocks=1
00:21:22.617  		--rc geninfo_unexecuted_blocks=1
00:21:22.617  		
00:21:22.617  		'
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:21:22.617  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:22.617  		--rc genhtml_branch_coverage=1
00:21:22.617  		--rc genhtml_function_coverage=1
00:21:22.617  		--rc genhtml_legend=1
00:21:22.617  		--rc geninfo_all_blocks=1
00:21:22.617  		--rc geninfo_unexecuted_blocks=1
00:21:22.617  		
00:21:22.617  		'
00:21:22.617   04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:21:22.617     04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:21:22.617     04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:21:22.617    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:21:22.617     04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob
00:21:22.617     04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:21:22.617     04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:22.617     04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:22.617      04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:22.617      04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:22.618      04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:22.618      04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH
00:21:22.618      04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:22.618    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0
00:21:22.618    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:21:22.618    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:21:22.618    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:21:22.618    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:21:22.618    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:21:22.618    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:21:22.618  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:21:22.618    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:21:22.618    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:21:22.618    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0
00:21:22.618   04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024
00:21:22.618   04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512
00:21:22.618   04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0
00:21:22.618   04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0
00:21:22.618    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen
00:21:22.618    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d -
00:21:22.618   04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=6be58f52ce904eaa8c6af269ad0ae0e8
00:21:22.618   04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit
00:21:22.618   04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:21:22.618   04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:21:22.618   04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs
00:21:22.618   04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no
00:21:22.618   04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns
00:21:22.618   04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:22.618   04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:22.618    04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:22.618   04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:21:22.618   04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:21:22.618   04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable
00:21:22.618   04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:21:25.143   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:21:25.143   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=()
00:21:25.143   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs
00:21:25.143   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=()
00:21:25.143   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:21:25.143   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=()
00:21:25.143   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers
00:21:25.143   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=()
00:21:25.143   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs
00:21:25.143   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=()
00:21:25.143   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810
00:21:25.143   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=()
00:21:25.143   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=()
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:21:25.144  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:21:25.144  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]]
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:21:25.144  Found net devices under 0000:0a:00.0: cvl_0_0
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]]
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:21:25.144  Found net devices under 0000:0a:00.1: cvl_0_1
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:21:25.144  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:25.144  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms
00:21:25.144  
00:21:25.144  --- 10.0.0.2 ping statistics ---
00:21:25.144  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:25.144  rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:25.144  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:25.144  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms
00:21:25.144  
00:21:25.144  --- 10.0.0.1 ping statistics ---
00:21:25.144  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:25.144  rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=289940
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 289940
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 289940 ']'
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:25.144  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:21:25.144  [2024-12-09 04:11:53.442357] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:21:25.144  [2024-12-09 04:11:53.442438] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:25.144  [2024-12-09 04:11:53.513939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:25.144  [2024-12-09 04:11:53.566472] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:25.144  [2024-12-09 04:11:53.566535] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:25.144  [2024-12-09 04:11:53.566563] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:25.144  [2024-12-09 04:11:53.566573] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:25.144  [2024-12-09 04:11:53.566583] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:25.144  [2024-12-09 04:11:53.567147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:21:25.144   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:25.145   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:21:25.145   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:25.145   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:21:25.145   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:25.145   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:21:25.145  [2024-12-09 04:11:53.706697] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:25.145   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:25.145   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512
00:21:25.145   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:25.145   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:21:25.145  null0
00:21:25.402   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:25.402   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine
00:21:25.402   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:25.402   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:21:25.402   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:25.402   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
00:21:25.402   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:25.402   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:21:25.402   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:25.402   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 6be58f52ce904eaa8c6af269ad0ae0e8
00:21:25.402   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:25.402   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:21:25.402   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:25.402   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:21:25.402   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:25.402   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:21:25.402  [2024-12-09 04:11:53.747001] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:25.402   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:25.402   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
00:21:25.402   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:25.402   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:21:25.659  nvme0n1
00:21:25.659   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:25.659   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1
00:21:25.659   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:25.659   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:21:25.659  [
00:21:25.659  {
00:21:25.659  "name": "nvme0n1",
00:21:25.659  "aliases": [
00:21:25.659  "6be58f52-ce90-4eaa-8c6a-f269ad0ae0e8"
00:21:25.659  ],
00:21:25.659  "product_name": "NVMe disk",
00:21:25.659  "block_size": 512,
00:21:25.659  "num_blocks": 2097152,
00:21:25.659  "uuid": "6be58f52-ce90-4eaa-8c6a-f269ad0ae0e8",
00:21:25.659  "numa_id": 0,
00:21:25.659  "assigned_rate_limits": {
00:21:25.659  "rw_ios_per_sec": 0,
00:21:25.659  "rw_mbytes_per_sec": 0,
00:21:25.659  "r_mbytes_per_sec": 0,
00:21:25.659  "w_mbytes_per_sec": 0
00:21:25.659  },
00:21:25.659  "claimed": false,
00:21:25.659  "zoned": false,
00:21:25.659  "supported_io_types": {
00:21:25.659  "read": true,
00:21:25.659  "write": true,
00:21:25.659  "unmap": false,
00:21:25.659  "flush": true,
00:21:25.659  "reset": true,
00:21:25.659  "nvme_admin": true,
00:21:25.659  "nvme_io": true,
00:21:25.659  "nvme_io_md": false,
00:21:25.659  "write_zeroes": true,
00:21:25.659  "zcopy": false,
00:21:25.659  "get_zone_info": false,
00:21:25.659  "zone_management": false,
00:21:25.659  "zone_append": false,
00:21:25.659  "compare": true,
00:21:25.659  "compare_and_write": true,
00:21:25.659  "abort": true,
00:21:25.659  "seek_hole": false,
00:21:25.659  "seek_data": false,
00:21:25.659  "copy": true,
00:21:25.659  "nvme_iov_md": false
00:21:25.659  },
00:21:25.659  "memory_domains": [
00:21:25.659  {
00:21:25.659  "dma_device_id": "system",
00:21:25.659  "dma_device_type": 1
00:21:25.659  }
00:21:25.659  ],
00:21:25.659  "driver_specific": {
00:21:25.659  "nvme": [
00:21:25.659  {
00:21:25.659  "trid": {
00:21:25.659  "trtype": "TCP",
00:21:25.659  "adrfam": "IPv4",
00:21:25.659  "traddr": "10.0.0.2",
00:21:25.659  "trsvcid": "4420",
00:21:25.659  "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:21:25.659  },
00:21:25.659  "ctrlr_data": {
00:21:25.659  "cntlid": 1,
00:21:25.659  "vendor_id": "0x8086",
00:21:25.659  "model_number": "SPDK bdev Controller",
00:21:25.659  "serial_number": "00000000000000000000",
00:21:25.659  "firmware_revision": "25.01",
00:21:25.659  "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:21:25.659  "oacs": {
00:21:25.659  "security": 0,
00:21:25.659  "format": 0,
00:21:25.659  "firmware": 0,
00:21:25.659  "ns_manage": 0
00:21:25.659  },
00:21:25.659  "multi_ctrlr": true,
00:21:25.659  "ana_reporting": false
00:21:25.659  },
00:21:25.659  "vs": {
00:21:25.659  "nvme_version": "1.3"
00:21:25.659  },
00:21:25.659  "ns_data": {
00:21:25.659  "id": 1,
00:21:25.659  "can_share": true
00:21:25.659  }
00:21:25.659  }
00:21:25.659  ],
00:21:25.659  "mp_policy": "active_passive"
00:21:25.659  }
00:21:25.659  }
00:21:25.659  ]
00:21:25.659   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:25.659   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0
00:21:25.659   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:25.659   04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:21:25.659  [2024-12-09 04:11:53.996161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:21:25.659  [2024-12-09 04:11:53.996249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25f9740 (9): Bad file descriptor
00:21:25.659  [2024-12-09 04:11:54.128393] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
00:21:25.659   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:25.659   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1
00:21:25.659   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:25.659   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:21:25.659  [
00:21:25.659  {
00:21:25.659  "name": "nvme0n1",
00:21:25.659  "aliases": [
00:21:25.659  "6be58f52-ce90-4eaa-8c6a-f269ad0ae0e8"
00:21:25.659  ],
00:21:25.659  "product_name": "NVMe disk",
00:21:25.659  "block_size": 512,
00:21:25.659  "num_blocks": 2097152,
00:21:25.659  "uuid": "6be58f52-ce90-4eaa-8c6a-f269ad0ae0e8",
00:21:25.659  "numa_id": 0,
00:21:25.659  "assigned_rate_limits": {
00:21:25.659  "rw_ios_per_sec": 0,
00:21:25.659  "rw_mbytes_per_sec": 0,
00:21:25.659  "r_mbytes_per_sec": 0,
00:21:25.659  "w_mbytes_per_sec": 0
00:21:25.659  },
00:21:25.659  "claimed": false,
00:21:25.659  "zoned": false,
00:21:25.659  "supported_io_types": {
00:21:25.659  "read": true,
00:21:25.659  "write": true,
00:21:25.659  "unmap": false,
00:21:25.659  "flush": true,
00:21:25.659  "reset": true,
00:21:25.659  "nvme_admin": true,
00:21:25.659  "nvme_io": true,
00:21:25.659  "nvme_io_md": false,
00:21:25.659  "write_zeroes": true,
00:21:25.659  "zcopy": false,
00:21:25.659  "get_zone_info": false,
00:21:25.659  "zone_management": false,
00:21:25.659  "zone_append": false,
00:21:25.659  "compare": true,
00:21:25.659  "compare_and_write": true,
00:21:25.659  "abort": true,
00:21:25.659  "seek_hole": false,
00:21:25.659  "seek_data": false,
00:21:25.659  "copy": true,
00:21:25.659  "nvme_iov_md": false
00:21:25.659  },
00:21:25.659  "memory_domains": [
00:21:25.659  {
00:21:25.659  "dma_device_id": "system",
00:21:25.659  "dma_device_type": 1
00:21:25.659  }
00:21:25.659  ],
00:21:25.659  "driver_specific": {
00:21:25.659  "nvme": [
00:21:25.659  {
00:21:25.659  "trid": {
00:21:25.659  "trtype": "TCP",
00:21:25.659  "adrfam": "IPv4",
00:21:25.659  "traddr": "10.0.0.2",
00:21:25.659  "trsvcid": "4420",
00:21:25.659  "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:21:25.659  },
00:21:25.659  "ctrlr_data": {
00:21:25.659  "cntlid": 2,
00:21:25.659  "vendor_id": "0x8086",
00:21:25.659  "model_number": "SPDK bdev Controller",
00:21:25.659  "serial_number": "00000000000000000000",
00:21:25.659  "firmware_revision": "25.01",
00:21:25.659  "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:21:25.659  "oacs": {
00:21:25.659  "security": 0,
00:21:25.659  "format": 0,
00:21:25.659  "firmware": 0,
00:21:25.659  "ns_manage": 0
00:21:25.659  },
00:21:25.659  "multi_ctrlr": true,
00:21:25.659  "ana_reporting": false
00:21:25.659  },
00:21:25.659  "vs": {
00:21:25.659  "nvme_version": "1.3"
00:21:25.659  },
00:21:25.659  "ns_data": {
00:21:25.659  "id": 1,
00:21:25.659  "can_share": true
00:21:25.659  }
00:21:25.659  }
00:21:25.659  ],
00:21:25.659  "mp_policy": "active_passive"
00:21:25.659  }
00:21:25.659  }
00:21:25.659  ]
00:21:25.659   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:25.659   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:21:25.659   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:25.659   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:21:25.659   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:25.659    04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp
00:21:25.660   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.yvb6E3Wj1y
00:21:25.660   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
00:21:25.660   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.yvb6E3Wj1y
00:21:25.660   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.yvb6E3Wj1y
00:21:25.660   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:25.660   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:21:25.660   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:25.660   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
00:21:25.660   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:25.660   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:21:25.660   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:25.660   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
00:21:25.660   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:25.660   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:21:25.660  [2024-12-09 04:11:54.180756] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:21:25.660  [2024-12-09 04:11:54.180917] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:21:25.660   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:25.660   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
00:21:25.660   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:25.660   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:21:25.660   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:25.660   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
00:21:25.660   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:25.660   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:21:25.660  [2024-12-09 04:11:54.196797] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:21:25.917  nvme0n1
00:21:25.917   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:25.917   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1
00:21:25.917   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:25.917   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:21:25.917  [
00:21:25.917  {
00:21:25.917  "name": "nvme0n1",
00:21:25.917  "aliases": [
00:21:25.917  "6be58f52-ce90-4eaa-8c6a-f269ad0ae0e8"
00:21:25.917  ],
00:21:25.917  "product_name": "NVMe disk",
00:21:25.917  "block_size": 512,
00:21:25.917  "num_blocks": 2097152,
00:21:25.917  "uuid": "6be58f52-ce90-4eaa-8c6a-f269ad0ae0e8",
00:21:25.917  "numa_id": 0,
00:21:25.917  "assigned_rate_limits": {
00:21:25.917  "rw_ios_per_sec": 0,
00:21:25.917  "rw_mbytes_per_sec": 0,
00:21:25.917  "r_mbytes_per_sec": 0,
00:21:25.917  "w_mbytes_per_sec": 0
00:21:25.917  },
00:21:25.917  "claimed": false,
00:21:25.917  "zoned": false,
00:21:25.917  "supported_io_types": {
00:21:25.917  "read": true,
00:21:25.917  "write": true,
00:21:25.917  "unmap": false,
00:21:25.917  "flush": true,
00:21:25.917  "reset": true,
00:21:25.917  "nvme_admin": true,
00:21:25.917  "nvme_io": true,
00:21:25.917  "nvme_io_md": false,
00:21:25.917  "write_zeroes": true,
00:21:25.917  "zcopy": false,
00:21:25.917  "get_zone_info": false,
00:21:25.917  "zone_management": false,
00:21:25.917  "zone_append": false,
00:21:25.917  "compare": true,
00:21:25.917  "compare_and_write": true,
00:21:25.917  "abort": true,
00:21:25.917  "seek_hole": false,
00:21:25.917  "seek_data": false,
00:21:25.917  "copy": true,
00:21:25.917  "nvme_iov_md": false
00:21:25.917  },
00:21:25.917  "memory_domains": [
00:21:25.917  {
00:21:25.917  "dma_device_id": "system",
00:21:25.917  "dma_device_type": 1
00:21:25.917  }
00:21:25.917  ],
00:21:25.917  "driver_specific": {
00:21:25.917  "nvme": [
00:21:25.917  {
00:21:25.917  "trid": {
00:21:25.917  "trtype": "TCP",
00:21:25.917  "adrfam": "IPv4",
00:21:25.917  "traddr": "10.0.0.2",
00:21:25.917  "trsvcid": "4421",
00:21:25.917  "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:21:25.917  },
00:21:25.917  "ctrlr_data": {
00:21:25.917  "cntlid": 3,
00:21:25.917  "vendor_id": "0x8086",
00:21:25.917  "model_number": "SPDK bdev Controller",
00:21:25.917  "serial_number": "00000000000000000000",
00:21:25.917  "firmware_revision": "25.01",
00:21:25.917  "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:21:25.917  "oacs": {
00:21:25.917  "security": 0,
00:21:25.917  "format": 0,
00:21:25.917  "firmware": 0,
00:21:25.917  "ns_manage": 0
00:21:25.917  },
00:21:25.918  "multi_ctrlr": true,
00:21:25.918  "ana_reporting": false
00:21:25.918  },
00:21:25.918  "vs": {
00:21:25.918  "nvme_version": "1.3"
00:21:25.918  },
00:21:25.918  "ns_data": {
00:21:25.918  "id": 1,
00:21:25.918  "can_share": true
00:21:25.918  }
00:21:25.918  }
00:21:25.918  ],
00:21:25.918  "mp_policy": "active_passive"
00:21:25.918  }
00:21:25.918  }
00:21:25.918  ]
00:21:25.918   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:25.918   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:21:25.918   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:25.918   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:21:25.918   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:25.918   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.yvb6E3Wj1y
00:21:25.918   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT
00:21:25.918   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini
00:21:25.918   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:25.918   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync
00:21:25.918   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:25.918   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e
00:21:25.918   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:25.918   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:25.918  rmmod nvme_tcp
00:21:25.918  rmmod nvme_fabrics
00:21:25.918  rmmod nvme_keyring
00:21:25.918   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:25.918   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e
00:21:25.918   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0
00:21:25.918   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 289940 ']'
00:21:25.918   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 289940
00:21:25.918   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 289940 ']'
00:21:25.918   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 289940
00:21:25.918    04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname
00:21:25.918   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:25.918    04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 289940
00:21:25.918   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:21:25.918   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:21:25.918   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 289940'
00:21:25.918  killing process with pid 289940
00:21:25.918   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 289940
00:21:25.918   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 289940
00:21:26.176   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:21:26.177   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:21:26.177   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:21:26.177   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr
00:21:26.177   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save
00:21:26.177   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:21:26.177   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore
00:21:26.177   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:21:26.177   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns
00:21:26.177   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:26.177   04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:26.177    04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:28.078   04:11:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:21:28.078  
00:21:28.078  real	0m5.702s
00:21:28.078  user	0m2.153s
00:21:28.078  sys	0m1.967s
00:21:28.078   04:11:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:28.078   04:11:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:21:28.078  ************************************
00:21:28.078  END TEST nvmf_async_init
00:21:28.078  ************************************
00:21:28.078   04:11:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp
00:21:28.078   04:11:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:21:28.078   04:11:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:28.078   04:11:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:21:28.336  ************************************
00:21:28.336  START TEST dma
00:21:28.336  ************************************
00:21:28.336   04:11:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp
00:21:28.336  * Looking for test storage...
00:21:28.336  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:21:28.336    04:11:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:21:28.336     04:11:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version
00:21:28.336     04:11:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:21:28.336    04:11:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:21:28.336    04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:21:28.336    04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l
00:21:28.336    04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l
00:21:28.336    04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-:
00:21:28.336    04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1
00:21:28.336    04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-:
00:21:28.336    04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2
00:21:28.336    04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<'
00:21:28.336    04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2
00:21:28.336    04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1
00:21:28.337    04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:21:28.337    04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in
00:21:28.337    04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1
00:21:28.337    04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 ))
00:21:28.337    04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:28.337     04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1
00:21:28.337     04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1
00:21:28.337     04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:21:28.337     04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1
00:21:28.337    04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1
00:21:28.337     04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2
00:21:28.337     04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2
00:21:28.337     04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:21:28.337     04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2
00:21:28.337    04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2
00:21:28.337    04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:21:28.337    04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:21:28.337    04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0
00:21:28.337    04:11:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:21:28.337    04:11:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:21:28.337  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:28.337  		--rc genhtml_branch_coverage=1
00:21:28.337  		--rc genhtml_function_coverage=1
00:21:28.337  		--rc genhtml_legend=1
00:21:28.337  		--rc geninfo_all_blocks=1
00:21:28.337  		--rc geninfo_unexecuted_blocks=1
00:21:28.337  		
00:21:28.337  		'
00:21:28.337    04:11:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:21:28.337  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:28.337  		--rc genhtml_branch_coverage=1
00:21:28.337  		--rc genhtml_function_coverage=1
00:21:28.337  		--rc genhtml_legend=1
00:21:28.337  		--rc geninfo_all_blocks=1
00:21:28.337  		--rc geninfo_unexecuted_blocks=1
00:21:28.337  		
00:21:28.337  		'
00:21:28.337    04:11:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:21:28.337  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:28.337  		--rc genhtml_branch_coverage=1
00:21:28.337  		--rc genhtml_function_coverage=1
00:21:28.337  		--rc genhtml_legend=1
00:21:28.337  		--rc geninfo_all_blocks=1
00:21:28.337  		--rc geninfo_unexecuted_blocks=1
00:21:28.337  		
00:21:28.337  		'
00:21:28.337    04:11:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:21:28.337  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:28.337  		--rc genhtml_branch_coverage=1
00:21:28.337  		--rc genhtml_function_coverage=1
00:21:28.337  		--rc genhtml_legend=1
00:21:28.337  		--rc geninfo_all_blocks=1
00:21:28.337  		--rc geninfo_unexecuted_blocks=1
00:21:28.337  		
00:21:28.337  		'
00:21:28.337   04:11:56 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:21:28.337     04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s
00:21:28.337    04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:21:28.337    04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:21:28.337    04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:21:28.337    04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:21:28.337    04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:21:28.337    04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:21:28.337    04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:21:28.337    04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:21:28.337    04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:21:28.337     04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:21:28.337    04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:28.337    04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:21:28.337    04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:21:28.337    04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:21:28.337    04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:21:28.337    04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:21:28.337    04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:21:28.337     04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob
00:21:28.337     04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:21:28.337     04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:28.337     04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:28.337      04:11:56 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:28.337      04:11:56 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:28.337      04:11:56 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:28.337      04:11:56 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH
00:21:28.337      04:11:56 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:28.337    04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0
00:21:28.337    04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:21:28.337    04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:21:28.337    04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:21:28.337    04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:21:28.337    04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:21:28.337    04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:21:28.337  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:21:28.338    04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:21:28.338    04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:21:28.338    04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0
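The `[: : integer expression expected` message above comes from `nvmf/common.sh` line 33, where `'[' '' -eq 1 ']'` hands an empty string to a numeric test. A minimal sketch of that failure mode, plus a defensive default (the guard is an illustration of the fix, not the script's actual code):

```shell
#!/usr/bin/env bash
flag=""   # e.g. an unset SPDK_* knob that expanded to the empty string

# Mirrors the failing pattern: -eq requires integers on both sides, so an
# empty string makes test print "integer expression expected" and exit 2.
if [ "$flag" -eq 1 ] 2>/dev/null; then
    echo "flag is 1"
else
    echo "comparison failed or flag not 1"
fi

# Defaulting the expansion keeps the comparison well-formed.
if [ "${flag:-0}" -eq 1 ]; then
    echo "flag is 1"
else
    echo "flag defaults to 0"
fi
```

The script still proceeds because the failing `[` only returns nonzero; under `set -e` the same pattern would abort the run.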
00:21:28.338   04:11:56 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']'
00:21:28.338   04:11:56 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0
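`host/dma.sh` exits immediately above because this run uses `--transport=tcp` while the DMA test only applies to RDMA. The guard amounts to the following (shape inferred from the `@12`/`@13` trace lines, not copied from the script):

```shell
TEST_TRANSPORT=tcp   # set by --transport=tcp for this run

dma_guard() {
    # host/dma.sh@12-13: skip cleanly on any non-rdma transport
    if [ "$TEST_TRANSPORT" != "rdma" ]; then
        echo "dma test requires rdma transport, skipping"
        return 0
    fi
    echo "running dma test"
}

dma_guard
```

Exiting 0 (rather than nonzero) is what lets the harness record `END TEST dma` as a pass rather than a failure.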
00:21:28.338  
00:21:28.338  real	0m0.170s
00:21:28.338  user	0m0.112s
00:21:28.338  sys	0m0.067s
00:21:28.338   04:11:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:28.338   04:11:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x
00:21:28.338  ************************************
00:21:28.338  END TEST dma
00:21:28.338  ************************************
00:21:28.338   04:11:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp
00:21:28.338   04:11:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:21:28.338   04:11:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:28.338   04:11:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:21:28.338  ************************************
00:21:28.338  START TEST nvmf_identify
00:21:28.338  ************************************
00:21:28.338   04:11:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp
00:21:28.596  * Looking for test storage...
00:21:28.596  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:21:28.596    04:11:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:21:28.596     04:11:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version
00:21:28.596     04:11:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:21:28.596    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:21:28.596    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:21:28.596    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l
00:21:28.596    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l
00:21:28.596    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-:
00:21:28.596    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1
00:21:28.596    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-:
00:21:28.596    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2
00:21:28.596    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<'
00:21:28.596    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2
00:21:28.596    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1
00:21:28.596    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:21:28.596    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in
00:21:28.596    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1
00:21:28.596    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 ))
00:21:28.596    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:28.596     04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1
00:21:28.596     04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1
00:21:28.596     04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:21:28.596     04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1
00:21:28.596    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1
00:21:28.596     04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2
00:21:28.596     04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2
00:21:28.596     04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:21:28.596     04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2
00:21:28.596    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2
00:21:28.596    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:21:28.596    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:21:28.596    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0
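The `lt 1.15 2` trace above walks `cmp_versions` from `scripts/common.sh`: both version strings are split on `.`, `-`, and `:`, then compared component by component, padding the shorter one with zeros. A condensed sketch of that logic (function name and exact shape are assumptions reconstructed from the trace):

```shell
# version_lt A B -> exit 0 iff version A sorts before version B
version_lt() {
    local -a ver1 ver2
    local IFS='.-:'                 # same separators as scripts/common.sh@336
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=${#ver1[@]}
    (( ${#ver2[@]} > max )) && max=${#ver2[@]}
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2"
```

Here the result decides whether the old `--rc lcov_branch_coverage=1` option spelling (lcov < 2) or the newer one is exported into `LCOV_OPTS`.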
00:21:28.597    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:21:28.597    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:21:28.597  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:28.597  		--rc genhtml_branch_coverage=1
00:21:28.597  		--rc genhtml_function_coverage=1
00:21:28.597  		--rc genhtml_legend=1
00:21:28.597  		--rc geninfo_all_blocks=1
00:21:28.597  		--rc geninfo_unexecuted_blocks=1
00:21:28.597  		
00:21:28.597  		'
00:21:28.597    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:21:28.597  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:28.597  		--rc genhtml_branch_coverage=1
00:21:28.597  		--rc genhtml_function_coverage=1
00:21:28.597  		--rc genhtml_legend=1
00:21:28.597  		--rc geninfo_all_blocks=1
00:21:28.597  		--rc geninfo_unexecuted_blocks=1
00:21:28.597  		
00:21:28.597  		'
00:21:28.597    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:21:28.597  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:28.597  		--rc genhtml_branch_coverage=1
00:21:28.597  		--rc genhtml_function_coverage=1
00:21:28.597  		--rc genhtml_legend=1
00:21:28.597  		--rc geninfo_all_blocks=1
00:21:28.597  		--rc geninfo_unexecuted_blocks=1
00:21:28.597  		
00:21:28.597  		'
00:21:28.597    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:21:28.597  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:28.597  		--rc genhtml_branch_coverage=1
00:21:28.597  		--rc genhtml_function_coverage=1
00:21:28.597  		--rc genhtml_legend=1
00:21:28.597  		--rc geninfo_all_blocks=1
00:21:28.597  		--rc geninfo_unexecuted_blocks=1
00:21:28.597  		
00:21:28.597  		'
00:21:28.597   04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:21:28.597     04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s
00:21:28.597    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:21:28.597    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:21:28.597    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:21:28.597    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:21:28.597    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:21:28.597    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:21:28.597    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:21:28.597    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:21:28.597    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:21:28.597     04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:21:28.597    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:28.597    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:21:28.597    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:21:28.597    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:21:28.597    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:21:28.597    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:21:28.597    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:21:28.597     04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob
00:21:28.597     04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:21:28.597     04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:28.597     04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:28.597      04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:28.597      04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:28.597      04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:28.597      04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH
00:21:28.597      04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:28.597    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0
00:21:28.597    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:21:28.597    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:21:28.597    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:21:28.597    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:21:28.597    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:21:28.597    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:21:28.597  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:21:28.597    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:21:28.597    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:21:28.597    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0
00:21:28.597   04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64
00:21:28.597   04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:21:28.597   04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit
00:21:28.597   04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:21:28.597   04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:21:28.597   04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs
00:21:28.597   04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no
00:21:28.597   04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns
00:21:28.597   04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:28.597   04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:28.597    04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:28.597   04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:21:28.597   04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:21:28.597   04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable
00:21:28.597   04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=()
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=()
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=()
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=()
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=()
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=()
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=()
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:21:31.123  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:21:31.123  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:31.123   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]]
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:21:31.124  Found net devices under 0000:0a:00.0: cvl_0_0
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]]
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:21:31.124  Found net devices under 0000:0a:00.1: cvl_0_1
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:21:31.124  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:31.124  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms
00:21:31.124  
00:21:31.124  --- 10.0.0.2 ping statistics ---
00:21:31.124  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:31.124  rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:31.124  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:31.124  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms
00:21:31.124  
00:21:31.124  --- 10.0.0.1 ping statistics ---
00:21:31.124  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:31.124  rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms
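`nvmf_tcp_init` above moves the target NIC into its own network namespace so initiator (`cvl_0_1`, 10.0.0.1) and target (`cvl_0_0`, 10.0.0.2) traffic crosses a real link, then verifies both directions with `ping`. The sequence can be sketched as follows; the `run` wrapper only echoes each step so the sketch is readable without root or the physical NICs (interface names, addresses, and the iptables rule are taken from the log):

```shell
run() { printf '+ %s\n' "$*"; }   # dry-run wrapper: print instead of execute

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"            # target NIC into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side, root namespace
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                         # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1     # target -> initiator
```

Every later target-side command in the log (including `nvmf_tgt` itself) is then prefixed with `ip netns exec cvl_0_0_ns_spdk` via `NVMF_TARGET_NS_CMD`.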
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=292082
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 292082
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 292082 ']'
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:31.124  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:31.124  [2024-12-09 04:11:59.396963] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:21:31.124  [2024-12-09 04:11:59.397050] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:31.124  [2024-12-09 04:11:59.468069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:21:31.124  [2024-12-09 04:11:59.527063] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:31.124  [2024-12-09 04:11:59.527112] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:31.124  [2024-12-09 04:11:59.527140] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:31.124  [2024-12-09 04:11:59.527151] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:31.124  [2024-12-09 04:11:59.527161] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:31.124  [2024-12-09 04:11:59.528691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:21:31.124  [2024-12-09 04:11:59.528755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:21:31.124  [2024-12-09 04:11:59.528823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:21:31.124  [2024-12-09 04:11:59.528826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:31.124  [2024-12-09 04:11:59.652205] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:31.124   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:31.384  Malloc0
00:21:31.384   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:31.384   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:21:31.384   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:31.384   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:31.384   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:31.384   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
00:21:31.384   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:31.384   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:31.384   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:31.384   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:31.384   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:31.384   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:31.384  [2024-12-09 04:11:59.738499] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:31.384   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:31.384   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:21:31.384   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:31.384   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:31.384   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:31.384   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:21:31.384   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:31.384   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:31.384  [
00:21:31.384  {
00:21:31.384  "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:21:31.384  "subtype": "Discovery",
00:21:31.384  "listen_addresses": [
00:21:31.384  {
00:21:31.384  "trtype": "TCP",
00:21:31.384  "adrfam": "IPv4",
00:21:31.384  "traddr": "10.0.0.2",
00:21:31.384  "trsvcid": "4420"
00:21:31.384  }
00:21:31.384  ],
00:21:31.384  "allow_any_host": true,
00:21:31.384  "hosts": []
00:21:31.384  },
00:21:31.384  {
00:21:31.384  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:21:31.384  "subtype": "NVMe",
00:21:31.384  "listen_addresses": [
00:21:31.384  {
00:21:31.384  "trtype": "TCP",
00:21:31.384  "adrfam": "IPv4",
00:21:31.384  "traddr": "10.0.0.2",
00:21:31.384  "trsvcid": "4420"
00:21:31.384  }
00:21:31.384  ],
00:21:31.384  "allow_any_host": true,
00:21:31.384  "hosts": [],
00:21:31.384  "serial_number": "SPDK00000000000001",
00:21:31.384  "model_number": "SPDK bdev Controller",
00:21:31.384  "max_namespaces": 32,
00:21:31.384  "min_cntlid": 1,
00:21:31.384  "max_cntlid": 65519,
00:21:31.384  "namespaces": [
00:21:31.384  {
00:21:31.384  "nsid": 1,
00:21:31.384  "bdev_name": "Malloc0",
00:21:31.384  "name": "Malloc0",
00:21:31.384  "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:21:31.384  "eui64": "ABCDEF0123456789",
00:21:31.384  "uuid": "c1f55b3a-c777-461f-8bb4-17aef1175c5a"
00:21:31.384  }
00:21:31.384  ]
00:21:31.384  }
00:21:31.384  ]
00:21:31.384   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:31.384   04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r '        trtype:tcp         adrfam:IPv4         traddr:10.0.0.2         trsvcid:4420         subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
00:21:31.384  [2024-12-09 04:11:59.779885] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:21:31.384  [2024-12-09 04:11:59.779929] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid292112 ]
00:21:31.384  [2024-12-09 04:11:59.832947] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout)
00:21:31.384  [2024-12-09 04:11:59.833017] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:21:31.384  [2024-12-09 04:11:59.833028] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:21:31.384  [2024-12-09 04:11:59.833052] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:21:31.384  [2024-12-09 04:11:59.833067] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:21:31.384  [2024-12-09 04:11:59.836732] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout)
00:21:31.384  [2024-12-09 04:11:59.836806] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2231690 0
00:21:31.384  [2024-12-09 04:11:59.836945] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:21:31.385  [2024-12-09 04:11:59.836962] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:21:31.385  [2024-12-09 04:11:59.836977] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:21:31.385  [2024-12-09 04:11:59.836984] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:21:31.385  [2024-12-09 04:11:59.837030] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.385  [2024-12-09 04:11:59.837042] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.385  [2024-12-09 04:11:59.837049] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2231690)
00:21:31.385  [2024-12-09 04:11:59.837066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:21:31.385  [2024-12-09 04:11:59.837092] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293100, cid 0, qid 0
00:21:31.385  [2024-12-09 04:11:59.844286] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.385  [2024-12-09 04:11:59.844305] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.385  [2024-12-09 04:11:59.844313] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.385  [2024-12-09 04:11:59.844320] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293100) on tqpair=0x2231690
00:21:31.385  [2024-12-09 04:11:59.844336] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:21:31.385  [2024-12-09 04:11:59.844348] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout)
00:21:31.385  [2024-12-09 04:11:59.844359] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout)
00:21:31.385  [2024-12-09 04:11:59.844383] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.385  [2024-12-09 04:11:59.844393] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.385  [2024-12-09 04:11:59.844399] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2231690)
00:21:31.385  [2024-12-09 04:11:59.844415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.385  [2024-12-09 04:11:59.844441] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293100, cid 0, qid 0
00:21:31.385  [2024-12-09 04:11:59.844583] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.385  [2024-12-09 04:11:59.844597] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.385  [2024-12-09 04:11:59.844604] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.385  [2024-12-09 04:11:59.844611] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293100) on tqpair=0x2231690
00:21:31.385  [2024-12-09 04:11:59.844625] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout)
00:21:31.385  [2024-12-09 04:11:59.844640] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout)
00:21:31.385  [2024-12-09 04:11:59.844652] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.385  [2024-12-09 04:11:59.844660] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.385  [2024-12-09 04:11:59.844667] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2231690)
00:21:31.385  [2024-12-09 04:11:59.844677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.385  [2024-12-09 04:11:59.844699] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293100, cid 0, qid 0
00:21:31.385  [2024-12-09 04:11:59.844776] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.385  [2024-12-09 04:11:59.844789] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.385  [2024-12-09 04:11:59.844796] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.385  [2024-12-09 04:11:59.844802] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293100) on tqpair=0x2231690
00:21:31.385  [2024-12-09 04:11:59.844811] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout)
00:21:31.385  [2024-12-09 04:11:59.844825] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms)
00:21:31.385  [2024-12-09 04:11:59.844837] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.385  [2024-12-09 04:11:59.844845] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.385  [2024-12-09 04:11:59.844851] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2231690)
00:21:31.385  [2024-12-09 04:11:59.844862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.385  [2024-12-09 04:11:59.844882] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293100, cid 0, qid 0
00:21:31.385  [2024-12-09 04:11:59.844960] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.385  [2024-12-09 04:11:59.844973] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.385  [2024-12-09 04:11:59.844980] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.385  [2024-12-09 04:11:59.844987] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293100) on tqpair=0x2231690
00:21:31.385  [2024-12-09 04:11:59.844995] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:21:31.385  [2024-12-09 04:11:59.845012] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.385  [2024-12-09 04:11:59.845021] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.385  [2024-12-09 04:11:59.845027] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2231690)
00:21:31.385  [2024-12-09 04:11:59.845038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.385  [2024-12-09 04:11:59.845064] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293100, cid 0, qid 0
00:21:31.385  [2024-12-09 04:11:59.845135] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.385  [2024-12-09 04:11:59.845148] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.385  [2024-12-09 04:11:59.845155] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.385  [2024-12-09 04:11:59.845161] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293100) on tqpair=0x2231690
00:21:31.385  [2024-12-09 04:11:59.845169] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0
00:21:31.385  [2024-12-09 04:11:59.845178] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms)
00:21:31.385  [2024-12-09 04:11:59.845191] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:21:31.385  [2024-12-09 04:11:59.845301] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1
00:21:31.385  [2024-12-09 04:11:59.845311] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:21:31.385  [2024-12-09 04:11:59.845326] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.385  [2024-12-09 04:11:59.845334] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.385  [2024-12-09 04:11:59.845340] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2231690)
00:21:31.385  [2024-12-09 04:11:59.845350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.385  [2024-12-09 04:11:59.845372] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293100, cid 0, qid 0
00:21:31.385  [2024-12-09 04:11:59.845489] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.385  [2024-12-09 04:11:59.845503] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.385  [2024-12-09 04:11:59.845510] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.385  [2024-12-09 04:11:59.845517] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293100) on tqpair=0x2231690
00:21:31.385  [2024-12-09 04:11:59.845525] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:21:31.385  [2024-12-09 04:11:59.845542] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.385  [2024-12-09 04:11:59.845551] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.385  [2024-12-09 04:11:59.845558] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2231690)
00:21:31.385  [2024-12-09 04:11:59.845568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.385  [2024-12-09 04:11:59.845589] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293100, cid 0, qid 0
00:21:31.385  [2024-12-09 04:11:59.845665] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.385  [2024-12-09 04:11:59.845677] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.385  [2024-12-09 04:11:59.845684] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.385  [2024-12-09 04:11:59.845691] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293100) on tqpair=0x2231690
00:21:31.385  [2024-12-09 04:11:59.845698] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:21:31.385  [2024-12-09 04:11:59.845707] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms)
00:21:31.385  [2024-12-09 04:11:59.845719] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout)
00:21:31.385  [2024-12-09 04:11:59.845740] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms)
00:21:31.385  [2024-12-09 04:11:59.845757] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.385  [2024-12-09 04:11:59.845765] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2231690)
00:21:31.385  [2024-12-09 04:11:59.845776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.385  [2024-12-09 04:11:59.845797] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293100, cid 0, qid 0
00:21:31.385  [2024-12-09 04:11:59.845925] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:31.385  [2024-12-09 04:11:59.845940] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:31.385  [2024-12-09 04:11:59.845947] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:31.385  [2024-12-09 04:11:59.845954] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2231690): datao=0, datal=4096, cccid=0
00:21:31.385  [2024-12-09 04:11:59.845962] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2293100) on tqpair(0x2231690): expected_datao=0, payload_size=4096
00:21:31.385  [2024-12-09 04:11:59.845970] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.385  [2024-12-09 04:11:59.845987] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:31.385  [2024-12-09 04:11:59.845997] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:31.386  [2024-12-09 04:11:59.887373] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.386  [2024-12-09 04:11:59.887393] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.386  [2024-12-09 04:11:59.887401] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.386  [2024-12-09 04:11:59.887408] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293100) on tqpair=0x2231690
00:21:31.386  [2024-12-09 04:11:59.887427] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295
00:21:31.386  [2024-12-09 04:11:59.887438] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072
00:21:31.386  [2024-12-09 04:11:59.887446] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001
00:21:31.386  [2024-12-09 04:11:59.887455] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16
00:21:31.386  [2024-12-09 04:11:59.887462] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1
00:21:31.386  [2024-12-09 04:11:59.887471] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms)
00:21:31.386  [2024-12-09 04:11:59.887485] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms)
00:21:31.386  [2024-12-09 04:11:59.887498] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.386  [2024-12-09 04:11:59.887507] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.386  [2024-12-09 04:11:59.887513] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2231690)
00:21:31.386  [2024-12-09 04:11:59.887525] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:21:31.386  [2024-12-09 04:11:59.887549] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293100, cid 0, qid 0
00:21:31.386  [2024-12-09 04:11:59.887635] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.386  [2024-12-09 04:11:59.887649] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.386  [2024-12-09 04:11:59.887656] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.386  [2024-12-09 04:11:59.887663] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293100) on tqpair=0x2231690
00:21:31.386  [2024-12-09 04:11:59.887680] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.386  [2024-12-09 04:11:59.887688] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.386  [2024-12-09 04:11:59.887695] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2231690)
00:21:31.386  [2024-12-09 04:11:59.887705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:31.386  [2024-12-09 04:11:59.887715] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.386  [2024-12-09 04:11:59.887723] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.386  [2024-12-09 04:11:59.887729] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2231690)
00:21:31.386  [2024-12-09 04:11:59.887737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:31.386  [2024-12-09 04:11:59.887747] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.386  [2024-12-09 04:11:59.887753] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.386  [2024-12-09 04:11:59.887759] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2231690)
00:21:31.386  [2024-12-09 04:11:59.887768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:31.386  [2024-12-09 04:11:59.887778] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.386  [2024-12-09 04:11:59.887785] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.386  [2024-12-09 04:11:59.887791] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690)
00:21:31.386  [2024-12-09 04:11:59.887799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:31.386  [2024-12-09 04:11:59.887808] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms)
00:21:31.386  [2024-12-09 04:11:59.887828] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:21:31.386  [2024-12-09 04:11:59.887842] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.386  [2024-12-09 04:11:59.887850] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2231690)
00:21:31.386  [2024-12-09 04:11:59.887860] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.386  [2024-12-09 04:11:59.887884] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293100, cid 0, qid 0
00:21:31.386  [2024-12-09 04:11:59.887895] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293280, cid 1, qid 0
00:21:31.386  [2024-12-09 04:11:59.887903] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293400, cid 2, qid 0
00:21:31.386  [2024-12-09 04:11:59.887911] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0
00:21:31.386  [2024-12-09 04:11:59.887919] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293700, cid 4, qid 0
00:21:31.386  [2024-12-09 04:11:59.888057] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.386  [2024-12-09 04:11:59.888070] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.386  [2024-12-09 04:11:59.888077] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.386  [2024-12-09 04:11:59.888084] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293700) on tqpair=0x2231690
00:21:31.386  [2024-12-09 04:11:59.888092] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us
00:21:31.386  [2024-12-09 04:11:59.888101] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout)
00:21:31.386  [2024-12-09 04:11:59.888119] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.386  [2024-12-09 04:11:59.888133] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2231690)
00:21:31.386  [2024-12-09 04:11:59.888144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.386  [2024-12-09 04:11:59.888166] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293700, cid 4, qid 0
00:21:31.386  [2024-12-09 04:11:59.888266] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:31.386  [2024-12-09 04:11:59.892291] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:31.386  [2024-12-09 04:11:59.892300] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:31.386  [2024-12-09 04:11:59.892306] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2231690): datao=0, datal=4096, cccid=4
00:21:31.386  [2024-12-09 04:11:59.892314] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2293700) on tqpair(0x2231690): expected_datao=0, payload_size=4096
00:21:31.386  [2024-12-09 04:11:59.892322] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.386  [2024-12-09 04:11:59.892332] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:31.386  [2024-12-09 04:11:59.892339] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:31.386  [2024-12-09 04:11:59.892352] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.386  [2024-12-09 04:11:59.892361] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.386  [2024-12-09 04:11:59.892368] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.386  [2024-12-09 04:11:59.892375] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293700) on tqpair=0x2231690
00:21:31.386  [2024-12-09 04:11:59.892394] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state
00:21:31.386  [2024-12-09 04:11:59.892434] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.386  [2024-12-09 04:11:59.892445] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2231690)
00:21:31.386  [2024-12-09 04:11:59.892456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.386  [2024-12-09 04:11:59.892468] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.386  [2024-12-09 04:11:59.892476] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.386  [2024-12-09 04:11:59.892482] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2231690)
00:21:31.386  [2024-12-09 04:11:59.892491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:31.386  [2024-12-09 04:11:59.892518] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293700, cid 4, qid 0
00:21:31.386  [2024-12-09 04:11:59.892530] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293880, cid 5, qid 0
00:21:31.386  [2024-12-09 04:11:59.892715] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:31.386  [2024-12-09 04:11:59.892731] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:31.386  [2024-12-09 04:11:59.892738] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:31.386  [2024-12-09 04:11:59.892744] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2231690): datao=0, datal=1024, cccid=4
00:21:31.386  [2024-12-09 04:11:59.892752] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2293700) on tqpair(0x2231690): expected_datao=0, payload_size=1024
00:21:31.386  [2024-12-09 04:11:59.892759] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.386  [2024-12-09 04:11:59.892769] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:31.386  [2024-12-09 04:11:59.892776] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:31.386  [2024-12-09 04:11:59.892784] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.386  [2024-12-09 04:11:59.892793] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.386  [2024-12-09 04:11:59.892804] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.386  [2024-12-09 04:11:59.892811] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293880) on tqpair=0x2231690
00:21:31.386  [2024-12-09 04:11:59.938289] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.386  [2024-12-09 04:11:59.938307] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.386  [2024-12-09 04:11:59.938315] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.386  [2024-12-09 04:11:59.938322] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293700) on tqpair=0x2231690
00:21:31.386  [2024-12-09 04:11:59.938340] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.386  [2024-12-09 04:11:59.938350] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2231690)
00:21:31.386  [2024-12-09 04:11:59.938362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.386  [2024-12-09 04:11:59.938392] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293700, cid 4, qid 0
00:21:31.386  [2024-12-09 04:11:59.938530] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:31.386  [2024-12-09 04:11:59.938543] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:31.387  [2024-12-09 04:11:59.938550] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:31.387  [2024-12-09 04:11:59.938556] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2231690): datao=0, datal=3072, cccid=4
00:21:31.387  [2024-12-09 04:11:59.938564] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2293700) on tqpair(0x2231690): expected_datao=0, payload_size=3072
00:21:31.387  [2024-12-09 04:11:59.938571] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.387  [2024-12-09 04:11:59.938591] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:31.387  [2024-12-09 04:11:59.938600] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:31.649  [2024-12-09 04:11:59.979384] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.649  [2024-12-09 04:11:59.979403] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.649  [2024-12-09 04:11:59.979411] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.650  [2024-12-09 04:11:59.979418] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293700) on tqpair=0x2231690
00:21:31.650  [2024-12-09 04:11:59.979434] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.650  [2024-12-09 04:11:59.979444] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2231690)
00:21:31.650  [2024-12-09 04:11:59.979456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.650  [2024-12-09 04:11:59.979485] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293700, cid 4, qid 0
00:21:31.650  [2024-12-09 04:11:59.979583] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:31.650  [2024-12-09 04:11:59.979595] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:31.650  [2024-12-09 04:11:59.979603] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:31.650  [2024-12-09 04:11:59.979609] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2231690): datao=0, datal=8, cccid=4
00:21:31.650  [2024-12-09 04:11:59.979617] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2293700) on tqpair(0x2231690): expected_datao=0, payload_size=8
00:21:31.650  [2024-12-09 04:11:59.979624] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.650  [2024-12-09 04:11:59.979634] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:31.650  [2024-12-09 04:11:59.979641] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:31.650  [2024-12-09 04:12:00.024292] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.650  [2024-12-09 04:12:00.024331] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.650  [2024-12-09 04:12:00.024339] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.650  [2024-12-09 04:12:00.024354] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293700) on tqpair=0x2231690
00:21:31.650  =====================================================
00:21:31.650  NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:21:31.650  =====================================================
00:21:31.650  Controller Capabilities/Features
00:21:31.650  ================================
00:21:31.650  Vendor ID:                             0000
00:21:31.650  Subsystem Vendor ID:                   0000
00:21:31.650  Serial Number:                         ....................
00:21:31.650  Model Number:                          ........................................
00:21:31.650  Firmware Version:                      25.01
00:21:31.650  Recommended Arb Burst:                 0
00:21:31.650  IEEE OUI Identifier:                   00 00 00
00:21:31.650  Multi-path I/O
00:21:31.650    May have multiple subsystem ports:   No
00:21:31.650    May have multiple controllers:       No
00:21:31.650    Associated with SR-IOV VF:           No
00:21:31.650  Max Data Transfer Size:                131072
00:21:31.650  Max Number of Namespaces:              0
00:21:31.650  Max Number of I/O Queues:              1024
00:21:31.650  NVMe Specification Version (VS):       1.3
00:21:31.650  NVMe Specification Version (Identify): 1.3
00:21:31.650  Maximum Queue Entries:                 128
00:21:31.650  Contiguous Queues Required:            Yes
00:21:31.650  Arbitration Mechanisms Supported
00:21:31.650    Weighted Round Robin:                Not Supported
00:21:31.650    Vendor Specific:                     Not Supported
00:21:31.650  Reset Timeout:                         15000 ms
00:21:31.650  Doorbell Stride:                       4 bytes
00:21:31.650  NVM Subsystem Reset:                   Not Supported
00:21:31.650  Command Sets Supported
00:21:31.650    NVM Command Set:                     Supported
00:21:31.650  Boot Partition:                        Not Supported
00:21:31.650  Memory Page Size Minimum:              4096 bytes
00:21:31.650  Memory Page Size Maximum:              4096 bytes
00:21:31.650  Persistent Memory Region:              Not Supported
00:21:31.650  Optional Asynchronous Events Supported
00:21:31.650    Namespace Attribute Notices:         Not Supported
00:21:31.650    Firmware Activation Notices:         Not Supported
00:21:31.650    ANA Change Notices:                  Not Supported
00:21:31.650    PLE Aggregate Log Change Notices:    Not Supported
00:21:31.650    LBA Status Info Alert Notices:       Not Supported
00:21:31.650    EGE Aggregate Log Change Notices:    Not Supported
00:21:31.650    Normal NVM Subsystem Shutdown event: Not Supported
00:21:31.650    Zone Descriptor Change Notices:      Not Supported
00:21:31.650    Discovery Log Change Notices:        Supported
00:21:31.650  Controller Attributes
00:21:31.650    128-bit Host Identifier:             Not Supported
00:21:31.650    Non-Operational Permissive Mode:     Not Supported
00:21:31.650    NVM Sets:                            Not Supported
00:21:31.650    Read Recovery Levels:                Not Supported
00:21:31.650    Endurance Groups:                    Not Supported
00:21:31.650    Predictable Latency Mode:            Not Supported
00:21:31.650    Traffic Based Keep Alive:            Not Supported
00:21:31.650    Namespace Granularity:               Not Supported
00:21:31.650    SQ Associations:                     Not Supported
00:21:31.650    UUID List:                           Not Supported
00:21:31.650    Multi-Domain Subsystem:              Not Supported
00:21:31.650    Fixed Capacity Management:           Not Supported
00:21:31.650    Variable Capacity Management:        Not Supported
00:21:31.650    Delete Endurance Group:              Not Supported
00:21:31.650    Delete NVM Set:                      Not Supported
00:21:31.650    Extended LBA Formats Supported:      Not Supported
00:21:31.650    Flexible Data Placement Supported:   Not Supported
00:21:31.650  
00:21:31.650  Controller Memory Buffer Support
00:21:31.650  ================================
00:21:31.650  Supported:                             No
00:21:31.650  
00:21:31.650  Persistent Memory Region Support
00:21:31.650  ================================
00:21:31.650  Supported:                             No
00:21:31.650  
00:21:31.650  Admin Command Set Attributes
00:21:31.650  ============================
00:21:31.650  Security Send/Receive:                 Not Supported
00:21:31.650  Format NVM:                            Not Supported
00:21:31.650  Firmware Activate/Download:            Not Supported
00:21:31.650  Namespace Management:                  Not Supported
00:21:31.650  Device Self-Test:                      Not Supported
00:21:31.650  Directives:                            Not Supported
00:21:31.650  NVMe-MI:                               Not Supported
00:21:31.650  Virtualization Management:             Not Supported
00:21:31.650  Doorbell Buffer Config:                Not Supported
00:21:31.650  Get LBA Status Capability:             Not Supported
00:21:31.650  Command & Feature Lockdown Capability: Not Supported
00:21:31.650  Abort Command Limit:                   1
00:21:31.650  Async Event Request Limit:             4
00:21:31.650  Number of Firmware Slots:              N/A
00:21:31.650  Firmware Slot 1 Read-Only:             N/A
00:21:31.650  Firmware Activation Without Reset:     N/A
00:21:31.650  Multiple Update Detection Support:     N/A
00:21:31.650  Firmware Update Granularity:           No Information Provided
00:21:31.650  Per-Namespace SMART Log:               No
00:21:31.650  Asymmetric Namespace Access Log Page:  Not Supported
00:21:31.650  Subsystem NQN:                         nqn.2014-08.org.nvmexpress.discovery
00:21:31.650  Command Effects Log Page:              Not Supported
00:21:31.650  Get Log Page Extended Data:            Supported
00:21:31.650  Telemetry Log Pages:                   Not Supported
00:21:31.650  Persistent Event Log Pages:            Not Supported
00:21:31.650  Supported Log Pages Log Page:          May Support
00:21:31.650  Commands Supported & Effects Log Page: Not Supported
00:21:31.650  Feature Identifiers & Effects Log Page: May Support
00:21:31.650  NVMe-MI Commands & Effects Log Page:   May Support
00:21:31.650  Data Area 4 for Telemetry Log:         Not Supported
00:21:31.650  Error Log Page Entries Supported:      128
00:21:31.650  Keep Alive:                            Not Supported
00:21:31.650  
00:21:31.650  NVM Command Set Attributes
00:21:31.650  ==========================
00:21:31.650  Submission Queue Entry Size
00:21:31.650    Max:                       1
00:21:31.650    Min:                       1
00:21:31.650  Completion Queue Entry Size
00:21:31.650    Max:                       1
00:21:31.650    Min:                       1
00:21:31.650  Number of Namespaces:        0
00:21:31.650  Compare Command:             Not Supported
00:21:31.650  Write Uncorrectable Command: Not Supported
00:21:31.650  Dataset Management Command:  Not Supported
00:21:31.650  Write Zeroes Command:        Not Supported
00:21:31.650  Set Features Save Field:     Not Supported
00:21:31.650  Reservations:                Not Supported
00:21:31.650  Timestamp:                   Not Supported
00:21:31.650  Copy:                        Not Supported
00:21:31.650  Volatile Write Cache:        Not Present
00:21:31.650  Atomic Write Unit (Normal):  1
00:21:31.650  Atomic Write Unit (PFail):   1
00:21:31.650  Atomic Compare & Write Unit: 1
00:21:31.650  Fused Compare & Write:       Supported
00:21:31.650  Scatter-Gather List
00:21:31.650    SGL Command Set:           Supported
00:21:31.650    SGL Keyed:                 Supported
00:21:31.650    SGL Bit Bucket Descriptor: Not Supported
00:21:31.650    SGL Metadata Pointer:      Not Supported
00:21:31.650    Oversized SGL:             Not Supported
00:21:31.650    SGL Metadata Address:      Not Supported
00:21:31.650    SGL Offset:                Supported
00:21:31.650    Transport SGL Data Block:  Not Supported
00:21:31.650  Replay Protected Memory Block:  Not Supported
00:21:31.651  
00:21:31.651  Firmware Slot Information
00:21:31.651  =========================
00:21:31.651  Active slot:                 0
00:21:31.651  
00:21:31.651  
00:21:31.651  Error Log
00:21:31.651  =========
00:21:31.651  
00:21:31.651  Active Namespaces
00:21:31.651  =================
00:21:31.651  Discovery Log Page
00:21:31.651  ==================
00:21:31.651  Generation Counter:                    2
00:21:31.651  Number of Records:                     2
00:21:31.651  Record Format:                         0
00:21:31.651  
00:21:31.651  Discovery Log Entry 0
00:21:31.651  ----------------------
00:21:31.651  Transport Type:                        3 (TCP)
00:21:31.651  Address Family:                        1 (IPv4)
00:21:31.651  Subsystem Type:                        3 (Current Discovery Subsystem)
00:21:31.651  Entry Flags:
00:21:31.651    Duplicate Returned Information:                       1
00:21:31.651    Explicit Persistent Connection Support for Discovery: 1
00:21:31.651  Transport Requirements:
00:21:31.651    Secure Channel:                      Not Required
00:21:31.651  Port ID:                               0 (0x0000)
00:21:31.651  Controller ID:                         65535 (0xffff)
00:21:31.651  Admin Max SQ Size:                     128
00:21:31.651  Transport Service Identifier:          4420
00:21:31.651  NVM Subsystem Qualified Name:          nqn.2014-08.org.nvmexpress.discovery
00:21:31.651  Transport Address:                     10.0.0.2
00:21:31.651  Discovery Log Entry 1
00:21:31.651  ----------------------
00:21:31.651  Transport Type:                        3 (TCP)
00:21:31.651  Address Family:                        1 (IPv4)
00:21:31.651  Subsystem Type:                        2 (NVM Subsystem)
00:21:31.651  Entry Flags:
00:21:31.651    Duplicate Returned Information:                       0
00:21:31.651    Explicit Persistent Connection Support for Discovery: 0
00:21:31.651  Transport Requirements:
00:21:31.651    Secure Channel:                      Not Required
00:21:31.651  Port ID:                               0 (0x0000)
00:21:31.651  Controller ID:                         65535 (0xffff)
00:21:31.651  Admin Max SQ Size:                     128
00:21:31.651  Transport Service Identifier:          4420
00:21:31.651  NVM Subsystem Qualified Name:          nqn.2016-06.io.spdk:cnode1
00:21:31.651  Transport Address:                     10.0.0.2
00:21:31.651  [2024-12-09 04:12:00.024479] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:21:31.651  [2024-12-09 04:12:00.024502] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293100) on tqpair=0x2231690
00:21:31.651  [2024-12-09 04:12:00.024515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.651  [2024-12-09 04:12:00.024534] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293280) on tqpair=0x2231690
00:21:31.651  [2024-12-09 04:12:00.024542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.651  [2024-12-09 04:12:00.024551] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293400) on tqpair=0x2231690
00:21:31.651  [2024-12-09 04:12:00.024558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.651  [2024-12-09 04:12:00.024567] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690
00:21:31.651  [2024-12-09 04:12:00.024574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.651  [2024-12-09 04:12:00.024594] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.651  [2024-12-09 04:12:00.024605] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.651  [2024-12-09 04:12:00.024611] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690)
00:21:31.651  [2024-12-09 04:12:00.024623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.651  [2024-12-09 04:12:00.024650] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0
00:21:31.651  [2024-12-09 04:12:00.024747] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.651  [2024-12-09 04:12:00.024762] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.651  [2024-12-09 04:12:00.024770] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.651  [2024-12-09 04:12:00.024777] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690
00:21:31.651  [2024-12-09 04:12:00.024790] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.651  [2024-12-09 04:12:00.024798] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.651  [2024-12-09 04:12:00.024804] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690)
00:21:31.651  [2024-12-09 04:12:00.024815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.651  [2024-12-09 04:12:00.024843] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0
00:21:31.651  [2024-12-09 04:12:00.024967] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.651  [2024-12-09 04:12:00.024980] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.651  [2024-12-09 04:12:00.024987] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.651  [2024-12-09 04:12:00.024994] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690
00:21:31.651  [2024-12-09 04:12:00.025002] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us
00:21:31.651  [2024-12-09 04:12:00.025010] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms
00:21:31.651  [2024-12-09 04:12:00.025026] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.651  [2024-12-09 04:12:00.025036] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.651  [2024-12-09 04:12:00.025043] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690)
00:21:31.651  [2024-12-09 04:12:00.025053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.651  [2024-12-09 04:12:00.025079] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0
00:21:31.651  [2024-12-09 04:12:00.025166] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.651  [2024-12-09 04:12:00.025181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.651  [2024-12-09 04:12:00.025188] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.651  [2024-12-09 04:12:00.025194] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690
00:21:31.651  [2024-12-09 04:12:00.025212] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.651  [2024-12-09 04:12:00.025222] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.651  [2024-12-09 04:12:00.025229] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690)
00:21:31.651  [2024-12-09 04:12:00.025239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.651  [2024-12-09 04:12:00.025260] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0
00:21:31.651  [2024-12-09 04:12:00.025360] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.651  [2024-12-09 04:12:00.025374] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.651  [2024-12-09 04:12:00.025381] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.651  [2024-12-09 04:12:00.025388] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690
00:21:31.651  [2024-12-09 04:12:00.025404] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.651  [2024-12-09 04:12:00.025414] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.651  [2024-12-09 04:12:00.025421] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690)
00:21:31.651  [2024-12-09 04:12:00.025431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.651  [2024-12-09 04:12:00.025453] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0
00:21:31.651  [2024-12-09 04:12:00.025549] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.651  [2024-12-09 04:12:00.025564] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.651  [2024-12-09 04:12:00.025571] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.651  [2024-12-09 04:12:00.025578] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690
00:21:31.651  [2024-12-09 04:12:00.025594] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.651  [2024-12-09 04:12:00.025604] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.651  [2024-12-09 04:12:00.025611] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690)
00:21:31.651  [2024-12-09 04:12:00.025621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.651  [2024-12-09 04:12:00.025642] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0
00:21:31.651  [2024-12-09 04:12:00.025723] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.651  [2024-12-09 04:12:00.025736] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.651  [2024-12-09 04:12:00.025743] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.651  [2024-12-09 04:12:00.025750] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690
00:21:31.651  [2024-12-09 04:12:00.025767] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.651  [2024-12-09 04:12:00.025777] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.651  [2024-12-09 04:12:00.025784] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690)
00:21:31.651  [2024-12-09 04:12:00.025794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.651  [2024-12-09 04:12:00.025816] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0
00:21:31.651  [2024-12-09 04:12:00.025910] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.651  [2024-12-09 04:12:00.025924] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.651  [2024-12-09 04:12:00.025931] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.651  [2024-12-09 04:12:00.025938] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690
00:21:31.651  [2024-12-09 04:12:00.025955] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.651  [2024-12-09 04:12:00.025965] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.652  [2024-12-09 04:12:00.025971] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690)
00:21:31.652  [2024-12-09 04:12:00.025981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.652  [2024-12-09 04:12:00.026003] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0
00:21:31.652  [2024-12-09 04:12:00.026099] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.652  [2024-12-09 04:12:00.026113] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.652  [2024-12-09 04:12:00.026120] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.652  [2024-12-09 04:12:00.026127] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690
00:21:31.652  [2024-12-09 04:12:00.026144] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.652  [2024-12-09 04:12:00.026154] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.652  [2024-12-09 04:12:00.026160] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690)
00:21:31.652  [2024-12-09 04:12:00.026170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.652  [2024-12-09 04:12:00.026192] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0
00:21:31.652  [2024-12-09 04:12:00.026299] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.652  [2024-12-09 04:12:00.026314] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.652  [2024-12-09 04:12:00.026321] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.652  [2024-12-09 04:12:00.026328] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690
00:21:31.652  [2024-12-09 04:12:00.026344] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.652  [2024-12-09 04:12:00.026354] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.652  [2024-12-09 04:12:00.026360] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690)
00:21:31.652  [2024-12-09 04:12:00.026371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.652  [2024-12-09 04:12:00.026392] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0
00:21:31.652  [2024-12-09 04:12:00.026517] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.652  [2024-12-09 04:12:00.026531] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.652  [2024-12-09 04:12:00.026538] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.652  [2024-12-09 04:12:00.026544] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690
00:21:31.652  [2024-12-09 04:12:00.026560] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.652  [2024-12-09 04:12:00.026570] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.652  [2024-12-09 04:12:00.026576] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690)
00:21:31.652  [2024-12-09 04:12:00.026587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.652  [2024-12-09 04:12:00.026619] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0
00:21:31.652  [2024-12-09 04:12:00.026698] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.652  [2024-12-09 04:12:00.026713] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.652  [2024-12-09 04:12:00.026720] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.652  [2024-12-09 04:12:00.026727] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690
00:21:31.652  [2024-12-09 04:12:00.026743] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.652  [2024-12-09 04:12:00.026753] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.652  [2024-12-09 04:12:00.026760] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690)
00:21:31.652  [2024-12-09 04:12:00.026770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.652  [2024-12-09 04:12:00.026791] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0
00:21:31.652  [2024-12-09 04:12:00.026920] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.652  [2024-12-09 04:12:00.026934] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.652  [2024-12-09 04:12:00.026941] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.652  [2024-12-09 04:12:00.026948] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690
00:21:31.652  [2024-12-09 04:12:00.026964] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.652  [2024-12-09 04:12:00.026974] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.652  [2024-12-09 04:12:00.026980] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690)
00:21:31.652  [2024-12-09 04:12:00.026991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.652  [2024-12-09 04:12:00.027013] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0
00:21:31.652  [2024-12-09 04:12:00.027122] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.652  [2024-12-09 04:12:00.027136] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.652  [2024-12-09 04:12:00.027143] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.652  [2024-12-09 04:12:00.027150] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690
00:21:31.652  [2024-12-09 04:12:00.027166] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.652  [2024-12-09 04:12:00.027176] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.652  [2024-12-09 04:12:00.027182] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690)
00:21:31.652  [2024-12-09 04:12:00.027193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.652  [2024-12-09 04:12:00.027214] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0
00:21:31.652  [2024-12-09 04:12:00.027326] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.652  [2024-12-09 04:12:00.027342] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.652  [2024-12-09 04:12:00.027349] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.652  [2024-12-09 04:12:00.027356] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690
00:21:31.652  [2024-12-09 04:12:00.027373] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.652  [2024-12-09 04:12:00.027383] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.652  [2024-12-09 04:12:00.027389] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690)
00:21:31.652  [2024-12-09 04:12:00.027399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.652  [2024-12-09 04:12:00.027421] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0
00:21:31.652  [2024-12-09 04:12:00.027505] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.652  [2024-12-09 04:12:00.027519] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.652  [2024-12-09 04:12:00.027535] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.652  [2024-12-09 04:12:00.027542] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690
00:21:31.652  [2024-12-09 04:12:00.027559] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.652  [2024-12-09 04:12:00.027569] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.652  [2024-12-09 04:12:00.027575] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690)
00:21:31.652  [2024-12-09 04:12:00.027585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.652  [2024-12-09 04:12:00.027606] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0
00:21:31.652  [2024-12-09 04:12:00.027731] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.652  [2024-12-09 04:12:00.027744] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.652  [2024-12-09 04:12:00.027751] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.652  [2024-12-09 04:12:00.027758] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690
00:21:31.652  [2024-12-09 04:12:00.027774] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.652  [2024-12-09 04:12:00.027783] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.652  [2024-12-09 04:12:00.027790] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690)
00:21:31.652  [2024-12-09 04:12:00.027800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.652  [2024-12-09 04:12:00.027821] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0
00:21:31.652  [2024-12-09 04:12:00.027950] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.652  [2024-12-09 04:12:00.027964] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.652  [2024-12-09 04:12:00.027971] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.652  [2024-12-09 04:12:00.027978] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690
00:21:31.652  [2024-12-09 04:12:00.027994] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.652  [2024-12-09 04:12:00.028003] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.652  [2024-12-09 04:12:00.028009] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690)
00:21:31.652  [2024-12-09 04:12:00.028020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.652  [2024-12-09 04:12:00.028041] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0
00:21:31.652  [2024-12-09 04:12:00.028119] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.652  [2024-12-09 04:12:00.028133] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.652  [2024-12-09 04:12:00.028140] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.652  [2024-12-09 04:12:00.028147] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690
00:21:31.652  [2024-12-09 04:12:00.028163] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.652  [2024-12-09 04:12:00.028173] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.652  [2024-12-09 04:12:00.028179] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690)
00:21:31.652  [2024-12-09 04:12:00.028189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.652  [2024-12-09 04:12:00.028211] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0
00:21:31.652  [2024-12-09 04:12:00.032283] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.652  [2024-12-09 04:12:00.032300] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.652  [2024-12-09 04:12:00.032308] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.652  [2024-12-09 04:12:00.032322] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690
00:21:31.652  [2024-12-09 04:12:00.032342] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.652  [2024-12-09 04:12:00.032352] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.653  [2024-12-09 04:12:00.032358] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690)
00:21:31.653  [2024-12-09 04:12:00.032369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.653  [2024-12-09 04:12:00.032392] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0
00:21:31.653  [2024-12-09 04:12:00.032527] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.653  [2024-12-09 04:12:00.032541] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.653  [2024-12-09 04:12:00.032548] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.653  [2024-12-09 04:12:00.032555] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690
00:21:31.653  [2024-12-09 04:12:00.032568] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds
00:21:31.653
00:21:31.653   04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r '        trtype:tcp         adrfam:IPv4         traddr:10.0.0.2         trsvcid:4420         subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:21:31.653  [2024-12-09 04:12:00.067791] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:21:31.653  [2024-12-09 04:12:00.067832] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid292197 ]
00:21:31.653  [2024-12-09 04:12:00.117887] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout)
00:21:31.653  [2024-12-09 04:12:00.117944] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:21:31.653  [2024-12-09 04:12:00.117955] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:21:31.653  [2024-12-09 04:12:00.117978] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:21:31.653  [2024-12-09 04:12:00.117992] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:21:31.653  [2024-12-09 04:12:00.121687] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout)
00:21:31.653  [2024-12-09 04:12:00.121749] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1287690 0
00:21:31.653  [2024-12-09 04:12:00.121881] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:21:31.653  [2024-12-09 04:12:00.121897] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:21:31.653  [2024-12-09 04:12:00.121909] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:21:31.653  [2024-12-09 04:12:00.121916] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:21:31.653  [2024-12-09 04:12:00.121952] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.653  [2024-12-09 04:12:00.121964] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.653  [2024-12-09 04:12:00.121970] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1287690)
00:21:31.653  [2024-12-09 04:12:00.121985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:21:31.653  [2024-12-09 04:12:00.122011] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9100, cid 0, qid 0
00:21:31.653  [2024-12-09 04:12:00.129288] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.653  [2024-12-09 04:12:00.129307] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.653  [2024-12-09 04:12:00.129315] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.653  [2024-12-09 04:12:00.129322] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9100) on tqpair=0x1287690
00:21:31.653  [2024-12-09 04:12:00.129336] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:21:31.653  [2024-12-09 04:12:00.129363] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout)
00:21:31.653  [2024-12-09 04:12:00.129373] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout)
00:21:31.653  [2024-12-09 04:12:00.129394] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.653  [2024-12-09 04:12:00.129403] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.653  [2024-12-09 04:12:00.129410] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1287690)
00:21:31.653  [2024-12-09 04:12:00.129422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.653  [2024-12-09 04:12:00.129447] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9100, cid 0, qid 0
00:21:31.653  [2024-12-09 04:12:00.129548] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.653  [2024-12-09 04:12:00.129562] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.653  [2024-12-09 04:12:00.129570] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.653  [2024-12-09 04:12:00.129577] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9100) on tqpair=0x1287690
00:21:31.653  [2024-12-09 04:12:00.129590] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout)
00:21:31.653  [2024-12-09 04:12:00.129604] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout)
00:21:31.653  [2024-12-09 04:12:00.129617] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.653  [2024-12-09 04:12:00.129625] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.653  [2024-12-09 04:12:00.129632] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1287690)
00:21:31.653  [2024-12-09 04:12:00.129643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.653  [2024-12-09 04:12:00.129665] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9100, cid 0, qid 0
00:21:31.653  [2024-12-09 04:12:00.129793] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.653  [2024-12-09 04:12:00.129805] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.653  [2024-12-09 04:12:00.129812] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.653  [2024-12-09 04:12:00.129819] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9100) on tqpair=0x1287690
00:21:31.653  [2024-12-09 04:12:00.129829] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout)
00:21:31.653  [2024-12-09 04:12:00.129843] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms)
00:21:31.653  [2024-12-09 04:12:00.129855] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.653  [2024-12-09 04:12:00.129863] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.653  [2024-12-09 04:12:00.129869] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1287690)
00:21:31.653  [2024-12-09 04:12:00.129880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.653  [2024-12-09 04:12:00.129902] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9100, cid 0, qid 0
00:21:31.653  [2024-12-09 04:12:00.129991] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.653  [2024-12-09 04:12:00.130008] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.653  [2024-12-09 04:12:00.130016] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.653  [2024-12-09 04:12:00.130022] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9100) on tqpair=0x1287690
00:21:31.653  [2024-12-09 04:12:00.130031] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:21:31.653  [2024-12-09 04:12:00.130064] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.653  [2024-12-09 04:12:00.130074] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.653  [2024-12-09 04:12:00.130080] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1287690)
00:21:31.653  [2024-12-09 04:12:00.130091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.653  [2024-12-09 04:12:00.130112] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9100, cid 0, qid 0
00:21:31.653  [2024-12-09 04:12:00.130238] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.653  [2024-12-09 04:12:00.130250] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.653  [2024-12-09 04:12:00.130258] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.653  [2024-12-09 04:12:00.130265] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9100) on tqpair=0x1287690
00:21:31.653  [2024-12-09 04:12:00.130282] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0
00:21:31.653  [2024-12-09 04:12:00.130292] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms)
00:21:31.653  [2024-12-09 04:12:00.130306] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:21:31.653  [2024-12-09 04:12:00.130428] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1
00:21:31.653  [2024-12-09 04:12:00.130436] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:21:31.653  [2024-12-09 04:12:00.130450] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.653  [2024-12-09 04:12:00.130458] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.653  [2024-12-09 04:12:00.130464] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1287690)
00:21:31.653  [2024-12-09 04:12:00.130475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.653  [2024-12-09 04:12:00.130497] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9100, cid 0, qid 0
00:21:31.653  [2024-12-09 04:12:00.130578] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.653  [2024-12-09 04:12:00.130592] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.653  [2024-12-09 04:12:00.130600] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.653  [2024-12-09 04:12:00.130607] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9100) on tqpair=0x1287690
00:21:31.653  [2024-12-09 04:12:00.130615] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:21:31.653  [2024-12-09 04:12:00.130632] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.653  [2024-12-09 04:12:00.130641] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.653  [2024-12-09 04:12:00.130648] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1287690)
00:21:31.653  [2024-12-09 04:12:00.130658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.653  [2024-12-09 04:12:00.130680] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9100, cid 0, qid 0
00:21:31.653  [2024-12-09 04:12:00.130776] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.653  [2024-12-09 04:12:00.130790] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.654  [2024-12-09 04:12:00.130797] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.654  [2024-12-09 04:12:00.130804] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9100) on tqpair=0x1287690
00:21:31.654  [2024-12-09 04:12:00.130812] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:21:31.654  [2024-12-09 04:12:00.130820] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms)
00:21:31.654  [2024-12-09 04:12:00.130834] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout)
00:21:31.654  [2024-12-09 04:12:00.130850] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms)
00:21:31.654  [2024-12-09 04:12:00.130866] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.654  [2024-12-09 04:12:00.130874] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1287690)
00:21:31.654  [2024-12-09 04:12:00.130885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.654  [2024-12-09 04:12:00.130907] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9100, cid 0, qid 0
00:21:31.654  [2024-12-09 04:12:00.131063] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:31.654  [2024-12-09 04:12:00.131077] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:31.654  [2024-12-09 04:12:00.131084] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:31.654  [2024-12-09 04:12:00.131090] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1287690): datao=0, datal=4096, cccid=0
00:21:31.654  [2024-12-09 04:12:00.131112] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12e9100) on tqpair(0x1287690): expected_datao=0, payload_size=4096
00:21:31.654  [2024-12-09 04:12:00.131121] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.654  [2024-12-09 04:12:00.131132] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:31.654  [2024-12-09 04:12:00.131140] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:31.654  [2024-12-09 04:12:00.131162] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.654  [2024-12-09 04:12:00.131174] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.654  [2024-12-09 04:12:00.131181] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.654  [2024-12-09 04:12:00.131188] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9100) on tqpair=0x1287690
00:21:31.654  [2024-12-09 04:12:00.131205] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295
00:21:31.654  [2024-12-09 04:12:00.131215] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072
00:21:31.654  [2024-12-09 04:12:00.131223] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001
00:21:31.654  [2024-12-09 04:12:00.131230] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16
00:21:31.654  [2024-12-09 04:12:00.131239] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1
00:21:31.654  [2024-12-09 04:12:00.131247] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms)
00:21:31.654  [2024-12-09 04:12:00.131262] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms)
00:21:31.654  [2024-12-09 04:12:00.131282] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.654  [2024-12-09 04:12:00.131292] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.654  [2024-12-09 04:12:00.131303] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1287690)
00:21:31.654  [2024-12-09 04:12:00.131314] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:21:31.654  [2024-12-09 04:12:00.131337] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9100, cid 0, qid 0
00:21:31.654  [2024-12-09 04:12:00.131466] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.654  [2024-12-09 04:12:00.131480] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.654  [2024-12-09 04:12:00.131487] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.654  [2024-12-09 04:12:00.131494] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9100) on tqpair=0x1287690
00:21:31.654  [2024-12-09 04:12:00.131505] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.654  [2024-12-09 04:12:00.131513] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.654  [2024-12-09 04:12:00.131519] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1287690)
00:21:31.654  [2024-12-09 04:12:00.131529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:31.654  [2024-12-09 04:12:00.131539] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.654  [2024-12-09 04:12:00.131547] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.654  [2024-12-09 04:12:00.131553] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1287690)
00:21:31.654  [2024-12-09 04:12:00.131562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:31.654  [2024-12-09 04:12:00.131572] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.654  [2024-12-09 04:12:00.131578] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.654  [2024-12-09 04:12:00.131584] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1287690)
00:21:31.654  [2024-12-09 04:12:00.131593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:31.654  [2024-12-09 04:12:00.131603] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.654  [2024-12-09 04:12:00.131610] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.654  [2024-12-09 04:12:00.131616] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690)
00:21:31.654  [2024-12-09 04:12:00.131624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:31.654  [2024-12-09 04:12:00.131633] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms)
00:21:31.654  [2024-12-09 04:12:00.131653] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:21:31.654  [2024-12-09 04:12:00.131666] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.654  [2024-12-09 04:12:00.131674] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1287690)
00:21:31.654  [2024-12-09 04:12:00.131685] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.654  [2024-12-09 04:12:00.131708] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9100, cid 0, qid 0
00:21:31.654  [2024-12-09 04:12:00.131719] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9280, cid 1, qid 0
00:21:31.654  [2024-12-09 04:12:00.131727] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9400, cid 2, qid 0
00:21:31.654  [2024-12-09 04:12:00.131735] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0
00:21:31.654  [2024-12-09 04:12:00.131743] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9700, cid 4, qid 0
00:21:31.654  [2024-12-09 04:12:00.131871] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.654  [2024-12-09 04:12:00.131886] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.654  [2024-12-09 04:12:00.131893] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.654  [2024-12-09 04:12:00.131900] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9700) on tqpair=0x1287690
00:21:31.654  [2024-12-09 04:12:00.131909] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us
00:21:31.654  [2024-12-09 04:12:00.131918] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms)
00:21:31.654  [2024-12-09 04:12:00.131932] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms)
00:21:31.654  [2024-12-09 04:12:00.131943] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms)
00:21:31.654  [2024-12-09 04:12:00.131954] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.654  [2024-12-09 04:12:00.131961] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.654  [2024-12-09 04:12:00.131968] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1287690)
00:21:31.654  [2024-12-09 04:12:00.131978] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:21:31.654  [2024-12-09 04:12:00.132000] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9700, cid 4, qid 0
00:21:31.654  [2024-12-09 04:12:00.132081] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.654  [2024-12-09 04:12:00.132095] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.654  [2024-12-09 04:12:00.132102] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.654  [2024-12-09 04:12:00.132109] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9700) on tqpair=0x1287690
00:21:31.654  [2024-12-09 04:12:00.132177] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms)
00:21:31.655  [2024-12-09 04:12:00.132197] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms)
00:21:31.655  [2024-12-09 04:12:00.132212] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.655  [2024-12-09 04:12:00.132220] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1287690)
00:21:31.655  [2024-12-09 04:12:00.132231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.655  [2024-12-09 04:12:00.132252] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9700, cid 4, qid 0
00:21:31.655  [2024-12-09 04:12:00.132388] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:31.655  [2024-12-09 04:12:00.132402] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:31.655  [2024-12-09 04:12:00.132409] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:31.655  [2024-12-09 04:12:00.132416] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1287690): datao=0, datal=4096, cccid=4
00:21:31.655  [2024-12-09 04:12:00.132423] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12e9700) on tqpair(0x1287690): expected_datao=0, payload_size=4096
00:21:31.655  [2024-12-09 04:12:00.132431] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.655  [2024-12-09 04:12:00.132441] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:31.655  [2024-12-09 04:12:00.132448] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:31.655  [2024-12-09 04:12:00.132460] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.655  [2024-12-09 04:12:00.132470] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.655  [2024-12-09 04:12:00.132476] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.655  [2024-12-09 04:12:00.132487] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9700) on tqpair=0x1287690
00:21:31.655  [2024-12-09 04:12:00.132502] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added
00:21:31.655  [2024-12-09 04:12:00.132525] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms)
00:21:31.655  [2024-12-09 04:12:00.132544] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms)
00:21:31.655  [2024-12-09 04:12:00.132557] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.655  [2024-12-09 04:12:00.132565] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1287690)
00:21:31.655  [2024-12-09 04:12:00.132576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.655  [2024-12-09 04:12:00.132598] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9700, cid 4, qid 0
00:21:31.655  [2024-12-09 04:12:00.132734] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:31.655  [2024-12-09 04:12:00.132749] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:31.655  [2024-12-09 04:12:00.132756] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:31.655  [2024-12-09 04:12:00.132763] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1287690): datao=0, datal=4096, cccid=4
00:21:31.655  [2024-12-09 04:12:00.132770] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12e9700) on tqpair(0x1287690): expected_datao=0, payload_size=4096
00:21:31.655  [2024-12-09 04:12:00.132778] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.655  [2024-12-09 04:12:00.132788] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:31.655  [2024-12-09 04:12:00.132795] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:31.655  [2024-12-09 04:12:00.132807] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.655  [2024-12-09 04:12:00.132817] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.655  [2024-12-09 04:12:00.132824] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.655  [2024-12-09 04:12:00.132831] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9700) on tqpair=0x1287690
00:21:31.655  [2024-12-09 04:12:00.132851] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:21:31.655  [2024-12-09 04:12:00.132869] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:21:31.655  [2024-12-09 04:12:00.132884] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.655  [2024-12-09 04:12:00.132892] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1287690)
00:21:31.655  [2024-12-09 04:12:00.132903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.655  [2024-12-09 04:12:00.132925] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9700, cid 4, qid 0
00:21:31.655  [2024-12-09 04:12:00.133060] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:31.655  [2024-12-09 04:12:00.133074] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:31.655  [2024-12-09 04:12:00.133081] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:31.655  [2024-12-09 04:12:00.133088] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1287690): datao=0, datal=4096, cccid=4
00:21:31.655  [2024-12-09 04:12:00.133096] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12e9700) on tqpair(0x1287690): expected_datao=0, payload_size=4096
00:21:31.655  [2024-12-09 04:12:00.133103] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.655  [2024-12-09 04:12:00.133114] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:31.655  [2024-12-09 04:12:00.133125] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:31.655  [2024-12-09 04:12:00.133153] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.655  [2024-12-09 04:12:00.133163] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.655  [2024-12-09 04:12:00.133170] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.655  [2024-12-09 04:12:00.133177] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9700) on tqpair=0x1287690
00:21:31.655  [2024-12-09 04:12:00.133189] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms)
00:21:31.655  [2024-12-09 04:12:00.133220] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms)
00:21:31.655  [2024-12-09 04:12:00.133235] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms)
00:21:31.655  [2024-12-09 04:12:00.133249] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms)
00:21:31.655  [2024-12-09 04:12:00.133259] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms)
00:21:31.655  [2024-12-09 04:12:00.133268] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms)
00:21:31.655  [2024-12-09 04:12:00.137294] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID
00:21:31.655  [2024-12-09 04:12:00.137305] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms)
00:21:31.655  [2024-12-09 04:12:00.137314] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout)
00:21:31.655  [2024-12-09 04:12:00.137333] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.655  [2024-12-09 04:12:00.137341] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1287690)
00:21:31.655  [2024-12-09 04:12:00.137352] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.655  [2024-12-09 04:12:00.137363] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.655  [2024-12-09 04:12:00.137370] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.655  [2024-12-09 04:12:00.137376] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1287690)
00:21:31.655  [2024-12-09 04:12:00.137385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:31.655  [2024-12-09 04:12:00.137412] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9700, cid 4, qid 0
00:21:31.655  [2024-12-09 04:12:00.137440] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9880, cid 5, qid 0
00:21:31.655  [2024-12-09 04:12:00.137567] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.655  [2024-12-09 04:12:00.137582] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.655  [2024-12-09 04:12:00.137589] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.655  [2024-12-09 04:12:00.137596] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9700) on tqpair=0x1287690
00:21:31.655  [2024-12-09 04:12:00.137606] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.655  [2024-12-09 04:12:00.137615] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.655  [2024-12-09 04:12:00.137622] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.655  [2024-12-09 04:12:00.137628] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9880) on tqpair=0x1287690
00:21:31.655  [2024-12-09 04:12:00.137644] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.655  [2024-12-09 04:12:00.137653] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1287690)
00:21:31.655  [2024-12-09 04:12:00.137667] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.655  [2024-12-09 04:12:00.137690] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9880, cid 5, qid 0
00:21:31.655  [2024-12-09 04:12:00.137817] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.655  [2024-12-09 04:12:00.137831] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.655  [2024-12-09 04:12:00.137838] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.655  [2024-12-09 04:12:00.137845] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9880) on tqpair=0x1287690
00:21:31.655  [2024-12-09 04:12:00.137860] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.655  [2024-12-09 04:12:00.137869] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1287690)
00:21:31.655  [2024-12-09 04:12:00.137880] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.655  [2024-12-09 04:12:00.137901] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9880, cid 5, qid 0
00:21:31.655  [2024-12-09 04:12:00.138044] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.655  [2024-12-09 04:12:00.138058] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.655  [2024-12-09 04:12:00.138065] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.655  [2024-12-09 04:12:00.138072] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9880) on tqpair=0x1287690
00:21:31.655  [2024-12-09 04:12:00.138088] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.655  [2024-12-09 04:12:00.138098] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1287690)
00:21:31.655  [2024-12-09 04:12:00.138108] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.655  [2024-12-09 04:12:00.138130] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9880, cid 5, qid 0
00:21:31.655  [2024-12-09 04:12:00.138211] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.656  [2024-12-09 04:12:00.138224] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.656  [2024-12-09 04:12:00.138231] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.656  [2024-12-09 04:12:00.138238] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9880) on tqpair=0x1287690
00:21:31.656  [2024-12-09 04:12:00.138266] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.656  [2024-12-09 04:12:00.138286] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1287690)
00:21:31.656  [2024-12-09 04:12:00.138297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.656  [2024-12-09 04:12:00.138310] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.656  [2024-12-09 04:12:00.138318] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1287690)
00:21:31.656  [2024-12-09 04:12:00.138328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.656  [2024-12-09 04:12:00.138340] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.656  [2024-12-09 04:12:00.138348] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1287690)
00:21:31.656  [2024-12-09 04:12:00.138357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.656  [2024-12-09 04:12:00.138369] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.656  [2024-12-09 04:12:00.138377] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1287690)
00:21:31.656  [2024-12-09 04:12:00.138391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.656  [2024-12-09 04:12:00.138415] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9880, cid 5, qid 0
00:21:31.656  [2024-12-09 04:12:00.138426] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9700, cid 4, qid 0
00:21:31.656  [2024-12-09 04:12:00.138434] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9a00, cid 6, qid 0
00:21:31.656  [2024-12-09 04:12:00.138442] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9b80, cid 7, qid 0
00:21:31.656  [2024-12-09 04:12:00.138622] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:31.656  [2024-12-09 04:12:00.138637] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:31.656  [2024-12-09 04:12:00.138644] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:31.656  [2024-12-09 04:12:00.138651] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1287690): datao=0, datal=8192, cccid=5
00:21:31.656  [2024-12-09 04:12:00.138659] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12e9880) on tqpair(0x1287690): expected_datao=0, payload_size=8192
00:21:31.656  [2024-12-09 04:12:00.138667] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.656  [2024-12-09 04:12:00.138677] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:31.656  [2024-12-09 04:12:00.138685] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:31.656  [2024-12-09 04:12:00.138694] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:31.656  [2024-12-09 04:12:00.138703] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:31.656  [2024-12-09 04:12:00.138709] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:31.656  [2024-12-09 04:12:00.138716] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1287690): datao=0, datal=512, cccid=4
00:21:31.656  [2024-12-09 04:12:00.138723] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12e9700) on tqpair(0x1287690): expected_datao=0, payload_size=512
00:21:31.656  [2024-12-09 04:12:00.138731] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.656  [2024-12-09 04:12:00.138740] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:31.656  [2024-12-09 04:12:00.138747] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:31.656  [2024-12-09 04:12:00.138755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:31.656  [2024-12-09 04:12:00.138764] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:31.656  [2024-12-09 04:12:00.138771] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:31.656  [2024-12-09 04:12:00.138777] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1287690): datao=0, datal=512, cccid=6
00:21:31.656  [2024-12-09 04:12:00.138784] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12e9a00) on tqpair(0x1287690): expected_datao=0, payload_size=512
00:21:31.656  [2024-12-09 04:12:00.138792] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.656  [2024-12-09 04:12:00.138801] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:31.656  [2024-12-09 04:12:00.138808] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:31.656  [2024-12-09 04:12:00.138816] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:31.656  [2024-12-09 04:12:00.138825] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:31.656  [2024-12-09 04:12:00.138832] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:31.656  [2024-12-09 04:12:00.138838] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1287690): datao=0, datal=4096, cccid=7
00:21:31.656  [2024-12-09 04:12:00.138846] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12e9b80) on tqpair(0x1287690): expected_datao=0, payload_size=4096
00:21:31.656  [2024-12-09 04:12:00.138853] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.656  [2024-12-09 04:12:00.138863] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:31.656  [2024-12-09 04:12:00.138870] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:31.656  [2024-12-09 04:12:00.138887] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.656  [2024-12-09 04:12:00.138897] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.656  [2024-12-09 04:12:00.138904] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.656  [2024-12-09 04:12:00.138911] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9880) on tqpair=0x1287690
00:21:31.656  [2024-12-09 04:12:00.138930] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.656  [2024-12-09 04:12:00.138941] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.656  [2024-12-09 04:12:00.138948] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.656  [2024-12-09 04:12:00.138955] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9700) on tqpair=0x1287690
00:21:31.656  [2024-12-09 04:12:00.138971] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.656  [2024-12-09 04:12:00.138982] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.656  [2024-12-09 04:12:00.138989] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.656  [2024-12-09 04:12:00.138996] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9a00) on tqpair=0x1287690
00:21:31.656  [2024-12-09 04:12:00.139007] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.656  [2024-12-09 04:12:00.139031] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.656  [2024-12-09 04:12:00.139038] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.656  [2024-12-09 04:12:00.139045] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9b80) on tqpair=0x1287690
00:21:31.656  =====================================================
00:21:31.656  NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:31.656  =====================================================
00:21:31.656  Controller Capabilities/Features
00:21:31.656  ================================
00:21:31.656  Vendor ID:                             8086
00:21:31.656  Subsystem Vendor ID:                   8086
00:21:31.656  Serial Number:                         SPDK00000000000001
00:21:31.656  Model Number:                          SPDK bdev Controller
00:21:31.656  Firmware Version:                      25.01
00:21:31.656  Recommended Arb Burst:                 6
00:21:31.656  IEEE OUI Identifier:                   e4 d2 5c
00:21:31.656  Multi-path I/O
00:21:31.656    May have multiple subsystem ports:   Yes
00:21:31.656    May have multiple controllers:       Yes
00:21:31.656    Associated with SR-IOV VF:           No
00:21:31.656  Max Data Transfer Size:                131072
00:21:31.656  Max Number of Namespaces:              32
00:21:31.656  Max Number of I/O Queues:              127
00:21:31.656  NVMe Specification Version (VS):       1.3
00:21:31.656  NVMe Specification Version (Identify): 1.3
00:21:31.656  Maximum Queue Entries:                 128
00:21:31.656  Contiguous Queues Required:            Yes
00:21:31.656  Arbitration Mechanisms Supported
00:21:31.656    Weighted Round Robin:                Not Supported
00:21:31.656    Vendor Specific:                     Not Supported
00:21:31.656  Reset Timeout:                         15000 ms
00:21:31.656  Doorbell Stride:                       4 bytes
00:21:31.656  NVM Subsystem Reset:                   Not Supported
00:21:31.656  Command Sets Supported
00:21:31.656    NVM Command Set:                     Supported
00:21:31.656  Boot Partition:                        Not Supported
00:21:31.656  Memory Page Size Minimum:              4096 bytes
00:21:31.656  Memory Page Size Maximum:              4096 bytes
00:21:31.656  Persistent Memory Region:              Not Supported
00:21:31.656  Optional Asynchronous Events Supported
00:21:31.656    Namespace Attribute Notices:         Supported
00:21:31.656    Firmware Activation Notices:         Not Supported
00:21:31.656    ANA Change Notices:                  Not Supported
00:21:31.656    PLE Aggregate Log Change Notices:    Not Supported
00:21:31.656    LBA Status Info Alert Notices:       Not Supported
00:21:31.656    EGE Aggregate Log Change Notices:    Not Supported
00:21:31.656    Normal NVM Subsystem Shutdown event: Not Supported
00:21:31.656    Zone Descriptor Change Notices:      Not Supported
00:21:31.656    Discovery Log Change Notices:        Not Supported
00:21:31.656  Controller Attributes
00:21:31.656    128-bit Host Identifier:             Supported
00:21:31.656    Non-Operational Permissive Mode:     Not Supported
00:21:31.656    NVM Sets:                            Not Supported
00:21:31.656    Read Recovery Levels:                Not Supported
00:21:31.656    Endurance Groups:                    Not Supported
00:21:31.656    Predictable Latency Mode:            Not Supported
00:21:31.656    Traffic Based Keep Alive:            Not Supported
00:21:31.656    Namespace Granularity:               Not Supported
00:21:31.656    SQ Associations:                     Not Supported
00:21:31.656    UUID List:                           Not Supported
00:21:31.656    Multi-Domain Subsystem:              Not Supported
00:21:31.656    Fixed Capacity Management:           Not Supported
00:21:31.656    Variable Capacity Management:        Not Supported
00:21:31.656    Delete Endurance Group:              Not Supported
00:21:31.656    Delete NVM Set:                      Not Supported
00:21:31.656    Extended LBA Formats Supported:      Not Supported
00:21:31.656    Flexible Data Placement Supported:   Not Supported
00:21:31.656  
00:21:31.656  Controller Memory Buffer Support
00:21:31.656  ================================
00:21:31.656  Supported:                             No
00:21:31.656  
00:21:31.656  Persistent Memory Region Support
00:21:31.657  ================================
00:21:31.657  Supported:                             No
00:21:31.657  
00:21:31.657  Admin Command Set Attributes
00:21:31.657  ============================
00:21:31.657  Security Send/Receive:                 Not Supported
00:21:31.657  Format NVM:                            Not Supported
00:21:31.657  Firmware Activate/Download:            Not Supported
00:21:31.657  Namespace Management:                  Not Supported
00:21:31.657  Device Self-Test:                      Not Supported
00:21:31.657  Directives:                            Not Supported
00:21:31.657  NVMe-MI:                               Not Supported
00:21:31.657  Virtualization Management:             Not Supported
00:21:31.657  Doorbell Buffer Config:                Not Supported
00:21:31.657  Get LBA Status Capability:             Not Supported
00:21:31.657  Command & Feature Lockdown Capability: Not Supported
00:21:31.657  Abort Command Limit:                   4
00:21:31.657  Async Event Request Limit:             4
00:21:31.657  Number of Firmware Slots:              N/A
00:21:31.657  Firmware Slot 1 Read-Only:             N/A
00:21:31.657  Firmware Activation Without Reset:     N/A
00:21:31.657  Multiple Update Detection Support:     N/A
00:21:31.657  Firmware Update Granularity:           No Information Provided
00:21:31.657  Per-Namespace SMART Log:               No
00:21:31.657  Asymmetric Namespace Access Log Page:  Not Supported
00:21:31.657  Subsystem NQN:                         nqn.2016-06.io.spdk:cnode1
00:21:31.657  Command Effects Log Page:              Supported
00:21:31.657  Get Log Page Extended Data:            Supported
00:21:31.657  Telemetry Log Pages:                   Not Supported
00:21:31.657  Persistent Event Log Pages:            Not Supported
00:21:31.657  Supported Log Pages Log Page:          May Support
00:21:31.657  Commands Supported & Effects Log Page: Not Supported
00:21:31.657  Feature Identifiers & Effects Log Page: May Support
00:21:31.657  NVMe-MI Commands & Effects Log Page:   May Support
00:21:31.657  Data Area 4 for Telemetry Log:         Not Supported
00:21:31.657  Error Log Page Entries Supported:      128
00:21:31.657  Keep Alive:                            Supported
00:21:31.657  Keep Alive Granularity:                10000 ms
00:21:31.657  
00:21:31.657  NVM Command Set Attributes
00:21:31.657  ==========================
00:21:31.657  Submission Queue Entry Size
00:21:31.657    Max:                       64
00:21:31.657    Min:                       64
00:21:31.657  Completion Queue Entry Size
00:21:31.657    Max:                       16
00:21:31.657    Min:                       16
00:21:31.657  Number of Namespaces:        32
00:21:31.657  Compare Command:             Supported
00:21:31.657  Write Uncorrectable Command: Not Supported
00:21:31.657  Dataset Management Command:  Supported
00:21:31.657  Write Zeroes Command:        Supported
00:21:31.657  Set Features Save Field:     Not Supported
00:21:31.657  Reservations:                Supported
00:21:31.657  Timestamp:                   Not Supported
00:21:31.657  Copy:                        Supported
00:21:31.657  Volatile Write Cache:        Present
00:21:31.657  Atomic Write Unit (Normal):  1
00:21:31.657  Atomic Write Unit (PFail):   1
00:21:31.657  Atomic Compare & Write Unit: 1
00:21:31.657  Fused Compare & Write:       Supported
00:21:31.657  Scatter-Gather List
00:21:31.657    SGL Command Set:           Supported
00:21:31.657    SGL Keyed:                 Supported
00:21:31.657    SGL Bit Bucket Descriptor: Not Supported
00:21:31.657    SGL Metadata Pointer:      Not Supported
00:21:31.657    Oversized SGL:             Not Supported
00:21:31.657    SGL Metadata Address:      Not Supported
00:21:31.657    SGL Offset:                Supported
00:21:31.657    Transport SGL Data Block:  Not Supported
00:21:31.657  Replay Protected Memory Block:  Not Supported
00:21:31.657  
00:21:31.657  Firmware Slot Information
00:21:31.657  =========================
00:21:31.657  Active slot:                 1
00:21:31.657  Slot 1 Firmware Revision:    25.01
00:21:31.657  
00:21:31.657  
00:21:31.657  Commands Supported and Effects
00:21:31.657  ==============================
00:21:31.657  Admin Commands
00:21:31.657  --------------
00:21:31.657                    Get Log Page (02h): Supported 
00:21:31.657                        Identify (06h): Supported 
00:21:31.657                           Abort (08h): Supported 
00:21:31.657                    Set Features (09h): Supported 
00:21:31.657                    Get Features (0Ah): Supported 
00:21:31.657      Asynchronous Event Request (0Ch): Supported 
00:21:31.657                      Keep Alive (18h): Supported 
00:21:31.657  I/O Commands
00:21:31.657  ------------
00:21:31.657                           Flush (00h): Supported LBA-Change 
00:21:31.657                           Write (01h): Supported LBA-Change 
00:21:31.657                            Read (02h): Supported 
00:21:31.657                         Compare (05h): Supported 
00:21:31.657                    Write Zeroes (08h): Supported LBA-Change 
00:21:31.657              Dataset Management (09h): Supported LBA-Change 
00:21:31.657                            Copy (19h): Supported LBA-Change 
00:21:31.657  
00:21:31.657  Error Log
00:21:31.657  =========
00:21:31.657  
00:21:31.657  Arbitration
00:21:31.657  ===========
00:21:31.657  Arbitration Burst:           1
00:21:31.657  
00:21:31.657  Power Management
00:21:31.657  ================
00:21:31.657  Number of Power States:          1
00:21:31.657  Current Power State:             Power State #0
00:21:31.657  Power State #0:
00:21:31.657    Max Power:                      0.00 W
00:21:31.657    Non-Operational State:         Operational
00:21:31.657    Entry Latency:                 Not Reported
00:21:31.657    Exit Latency:                  Not Reported
00:21:31.657    Relative Read Throughput:      0
00:21:31.657    Relative Read Latency:         0
00:21:31.657    Relative Write Throughput:     0
00:21:31.657    Relative Write Latency:        0
00:21:31.657    Idle Power:                     Not Reported
00:21:31.657    Active Power:                   Not Reported
00:21:31.657  Non-Operational Permissive Mode: Not Supported
00:21:31.657  
00:21:31.657  Health Information
00:21:31.657  ==================
00:21:31.657  Critical Warnings:
00:21:31.657    Available Spare Space:     OK
00:21:31.657    Temperature:               OK
00:21:31.657    Device Reliability:        OK
00:21:31.657    Read Only:                 No
00:21:31.657    Volatile Memory Backup:    OK
00:21:31.657  Current Temperature:         0 Kelvin (-273 Celsius)
00:21:31.657  Temperature Threshold:       0 Kelvin (-273 Celsius)
00:21:31.657  Available Spare:             0%
00:21:31.657  Available Spare Threshold:   0%
00:21:31.657  [2024-12-09 04:12:00.139158] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.657  [2024-12-09 04:12:00.139170] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1287690)
00:21:31.657  [2024-12-09 04:12:00.139180] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.657  [2024-12-09 04:12:00.139202] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9b80, cid 7, qid 0
00:21:31.657  [2024-12-09 04:12:00.139333] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.657  [2024-12-09 04:12:00.139348] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.657  [2024-12-09 04:12:00.139356] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.657  [2024-12-09 04:12:00.139363] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9b80) on tqpair=0x1287690
00:21:31.657  [2024-12-09 04:12:00.139411] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD
00:21:31.657  [2024-12-09 04:12:00.139431] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9100) on tqpair=0x1287690
00:21:31.657  [2024-12-09 04:12:00.139443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.657  [2024-12-09 04:12:00.139452] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9280) on tqpair=0x1287690
00:21:31.657  [2024-12-09 04:12:00.139460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.657  [2024-12-09 04:12:00.139468] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9400) on tqpair=0x1287690
00:21:31.657  [2024-12-09 04:12:00.139476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.657  [2024-12-09 04:12:00.139484] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690
00:21:31.657  [2024-12-09 04:12:00.139492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.657  [2024-12-09 04:12:00.139504] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.657  [2024-12-09 04:12:00.139512] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.657  [2024-12-09 04:12:00.139519] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690)
00:21:31.657  [2024-12-09 04:12:00.139533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.657  [2024-12-09 04:12:00.139557] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0
00:21:31.657  [2024-12-09 04:12:00.139636] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.657  [2024-12-09 04:12:00.139650] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.657  [2024-12-09 04:12:00.139657] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.657  [2024-12-09 04:12:00.139664] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690
00:21:31.657  [2024-12-09 04:12:00.139676] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.657  [2024-12-09 04:12:00.139684] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.657  [2024-12-09 04:12:00.139690] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690)
00:21:31.657  [2024-12-09 04:12:00.139701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.657  [2024-12-09 04:12:00.139727] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0
00:21:31.657  [2024-12-09 04:12:00.139836] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.657  [2024-12-09 04:12:00.139850] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.657  [2024-12-09 04:12:00.139857] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.657  [2024-12-09 04:12:00.139864] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690
00:21:31.657  [2024-12-09 04:12:00.139872] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us
00:21:31.658  [2024-12-09 04:12:00.139880] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms
00:21:31.658  [2024-12-09 04:12:00.139896] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.658  [2024-12-09 04:12:00.139905] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.658  [2024-12-09 04:12:00.139912] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690)
00:21:31.658  [2024-12-09 04:12:00.139923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.658  [2024-12-09 04:12:00.139944] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0
00:21:31.658  [2024-12-09 04:12:00.140038] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.658  [2024-12-09 04:12:00.140050] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.658  [2024-12-09 04:12:00.140057] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.658  [2024-12-09 04:12:00.140064] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690
00:21:31.658  [2024-12-09 04:12:00.140080] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.658  [2024-12-09 04:12:00.140089] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.658  [2024-12-09 04:12:00.140096] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690)
00:21:31.658  [2024-12-09 04:12:00.140106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.658  [2024-12-09 04:12:00.140127] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0
00:21:31.658  [2024-12-09 04:12:00.140201] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.658  [2024-12-09 04:12:00.140214] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.658  [2024-12-09 04:12:00.140222] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.658  [2024-12-09 04:12:00.140228] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690
00:21:31.658  [2024-12-09 04:12:00.140245] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.658  [2024-12-09 04:12:00.140258] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.658  [2024-12-09 04:12:00.140265] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690)
00:21:31.658  [2024-12-09 04:12:00.140284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.658  [2024-12-09 04:12:00.140307] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0
00:21:31.658  [2024-12-09 04:12:00.140386] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.658  [2024-12-09 04:12:00.140400] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.658  [2024-12-09 04:12:00.140407] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.658  [2024-12-09 04:12:00.140414] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690
00:21:31.658  [2024-12-09 04:12:00.140430] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.658  [2024-12-09 04:12:00.140440] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.658  [2024-12-09 04:12:00.140447] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690)
00:21:31.658  [2024-12-09 04:12:00.140457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.658  [2024-12-09 04:12:00.140478] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0
00:21:31.658  [2024-12-09 04:12:00.140584] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.658  [2024-12-09 04:12:00.140596] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.658  [2024-12-09 04:12:00.140603] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.658  [2024-12-09 04:12:00.140610] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690
00:21:31.658  [2024-12-09 04:12:00.140626] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.658  [2024-12-09 04:12:00.140636] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.658  [2024-12-09 04:12:00.140642] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690)
00:21:31.658  [2024-12-09 04:12:00.140652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.658  [2024-12-09 04:12:00.140673] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0
00:21:31.658  [2024-12-09 04:12:00.140747] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.658  [2024-12-09 04:12:00.140759] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.658  [2024-12-09 04:12:00.140766] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.658  [2024-12-09 04:12:00.140773] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690
00:21:31.658  [2024-12-09 04:12:00.140788] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.658  [2024-12-09 04:12:00.140798] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.658  [2024-12-09 04:12:00.140804] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690)
00:21:31.658  [2024-12-09 04:12:00.140814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.658  [2024-12-09 04:12:00.140835] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0
00:21:31.658  [2024-12-09 04:12:00.140909] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.658  [2024-12-09 04:12:00.140923] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.658  [2024-12-09 04:12:00.140930] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.658  [2024-12-09 04:12:00.140937] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690
00:21:31.658  [2024-12-09 04:12:00.140953] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.658  [2024-12-09 04:12:00.140963] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.658  [2024-12-09 04:12:00.140974] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690)
00:21:31.658  [2024-12-09 04:12:00.140985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.658  [2024-12-09 04:12:00.141006] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0
00:21:31.658  [2024-12-09 04:12:00.141079] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.658  [2024-12-09 04:12:00.141092] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.658  [2024-12-09 04:12:00.141099] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.658  [2024-12-09 04:12:00.141105] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690
00:21:31.658  [2024-12-09 04:12:00.141121] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.658  [2024-12-09 04:12:00.141130] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.658  [2024-12-09 04:12:00.141137] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690)
00:21:31.658  [2024-12-09 04:12:00.141147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.658  [2024-12-09 04:12:00.141168] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0
00:21:31.658  [2024-12-09 04:12:00.141240] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.658  [2024-12-09 04:12:00.141252] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.658  [2024-12-09 04:12:00.141259] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.658  [2024-12-09 04:12:00.141266] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690
00:21:31.658  [2024-12-09 04:12:00.145295] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.658  [2024-12-09 04:12:00.145308] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.658  [2024-12-09 04:12:00.145315] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690)
00:21:31.658  [2024-12-09 04:12:00.145341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.658  [2024-12-09 04:12:00.145364] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0
00:21:31.658  [2024-12-09 04:12:00.145492] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.658  [2024-12-09 04:12:00.145504] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.658  [2024-12-09 04:12:00.145512] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.658  [2024-12-09 04:12:00.145519] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690
00:21:31.658  [2024-12-09 04:12:00.145532] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds
00:21:31.658  Life Percentage Used:        0%
00:21:31.658  Data Units Read:             0
00:21:31.658  Data Units Written:          0
00:21:31.658  Host Read Commands:          0
00:21:31.658  Host Write Commands:         0
00:21:31.658  Controller Busy Time:        0 minutes
00:21:31.658  Power Cycles:                0
00:21:31.658  Power On Hours:              0 hours
00:21:31.658  Unsafe Shutdowns:            0
00:21:31.658  Unrecoverable Media Errors:  0
00:21:31.658  Lifetime Error Log Entries:  0
00:21:31.658  Warning Temperature Time:    0 minutes
00:21:31.658  Critical Temperature Time:   0 minutes
00:21:31.658  
00:21:31.658  Number of Queues
00:21:31.658  ================
00:21:31.658  Number of I/O Submission Queues:      127
00:21:31.658  Number of I/O Completion Queues:      127
00:21:31.658  
00:21:31.658  Active Namespaces
00:21:31.658  =================
00:21:31.658  Namespace ID:1
00:21:31.658  Error Recovery Timeout:                Unlimited
00:21:31.658  Command Set Identifier:                NVM (00h)
00:21:31.658  Deallocate:                            Supported
00:21:31.658  Deallocated/Unwritten Error:           Not Supported
00:21:31.658  Deallocated Read Value:                Unknown
00:21:31.658  Deallocate in Write Zeroes:            Not Supported
00:21:31.658  Deallocated Guard Field:               0xFFFF
00:21:31.658  Flush:                                 Supported
00:21:31.658  Reservation:                           Supported
00:21:31.658  Namespace Sharing Capabilities:        Multiple Controllers
00:21:31.658  Size (in LBAs):                        131072 (0GiB)
00:21:31.658  Capacity (in LBAs):                    131072 (0GiB)
00:21:31.658  Utilization (in LBAs):                 131072 (0GiB)
00:21:31.658  NGUID:                                 ABCDEF0123456789ABCDEF0123456789
00:21:31.658  EUI64:                                 ABCDEF0123456789
00:21:31.658  UUID:                                  c1f55b3a-c777-461f-8bb4-17aef1175c5a
00:21:31.658  Thin Provisioning:                     Not Supported
00:21:31.658  Per-NS Atomic Units:                   Yes
00:21:31.658    Atomic Boundary Size (Normal):       0
00:21:31.658    Atomic Boundary Size (PFail):        0
00:21:31.658    Atomic Boundary Offset:              0
00:21:31.658  Maximum Single Source Range Length:    65535
00:21:31.658  Maximum Copy Length:                   65535
00:21:31.658  Maximum Source Range Count:            1
00:21:31.658  NGUID/EUI64 Never Reused:              No
00:21:31.659  Namespace Write Protected:             No
00:21:31.659  Number of LBA Formats:                 1
00:21:31.659  Current LBA Format:                    LBA Format #00
00:21:31.659  LBA Format #00: Data Size:   512  Metadata Size:     0
00:21:31.659  
00:21:31.659   04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:21:31.659   04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:21:31.659   04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:31.659   04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:31.659   04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:31.659   04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:21:31.659   04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:21:31.659   04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:31.659   04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync
00:21:31.659   04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:31.659   04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e
00:21:31.659   04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:31.659   04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:31.659  rmmod nvme_tcp
00:21:31.659  rmmod nvme_fabrics
00:21:31.659  rmmod nvme_keyring
00:21:31.916   04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:31.917   04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e
00:21:31.917   04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0
00:21:31.917   04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 292082 ']'
00:21:31.917   04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 292082
00:21:31.917   04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 292082 ']'
00:21:31.917   04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 292082
00:21:31.917    04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname
00:21:31.917   04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:31.917    04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 292082
00:21:31.917   04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:21:31.917   04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:21:31.917   04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 292082'
00:21:31.917  killing process with pid 292082
00:21:31.917   04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 292082
00:21:31.917   04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 292082
00:21:32.174   04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:21:32.174   04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:21:32.174   04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:21:32.174   04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr
00:21:32.174   04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save
00:21:32.174   04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:21:32.174   04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore
00:21:32.174   04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:21:32.174   04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns
00:21:32.174   04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:32.174   04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:32.174    04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:34.098   04:12:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:21:34.098  
00:21:34.098  real	0m5.677s
00:21:34.098  user	0m4.715s
00:21:34.098  sys	0m2.051s
00:21:34.098   04:12:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:34.098   04:12:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:34.098  ************************************
00:21:34.098  END TEST nvmf_identify
00:21:34.098  ************************************
00:21:34.098   04:12:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:21:34.098   04:12:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:21:34.098   04:12:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:34.098   04:12:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:21:34.098  ************************************
00:21:34.098  START TEST nvmf_perf
00:21:34.098  ************************************
00:21:34.098   04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:21:34.383  * Looking for test storage...
00:21:34.383  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:21:34.383     04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version
00:21:34.383     04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-:
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-:
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<'
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 ))
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:34.383     04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1
00:21:34.383     04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1
00:21:34.383     04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:21:34.383     04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1
00:21:34.383     04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2
00:21:34.383     04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2
00:21:34.383     04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:21:34.383     04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:21:34.383  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:34.383  		--rc genhtml_branch_coverage=1
00:21:34.383  		--rc genhtml_function_coverage=1
00:21:34.383  		--rc genhtml_legend=1
00:21:34.383  		--rc geninfo_all_blocks=1
00:21:34.383  		--rc geninfo_unexecuted_blocks=1
00:21:34.383  		
00:21:34.383  		'
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:21:34.383  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:34.383  		--rc genhtml_branch_coverage=1
00:21:34.383  		--rc genhtml_function_coverage=1
00:21:34.383  		--rc genhtml_legend=1
00:21:34.383  		--rc geninfo_all_blocks=1
00:21:34.383  		--rc geninfo_unexecuted_blocks=1
00:21:34.383  		
00:21:34.383  		'
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:21:34.383  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:34.383  		--rc genhtml_branch_coverage=1
00:21:34.383  		--rc genhtml_function_coverage=1
00:21:34.383  		--rc genhtml_legend=1
00:21:34.383  		--rc geninfo_all_blocks=1
00:21:34.383  		--rc geninfo_unexecuted_blocks=1
00:21:34.383  		
00:21:34.383  		'
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:21:34.383  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:34.383  		--rc genhtml_branch_coverage=1
00:21:34.383  		--rc genhtml_function_coverage=1
00:21:34.383  		--rc genhtml_legend=1
00:21:34.383  		--rc geninfo_all_blocks=1
00:21:34.383  		--rc geninfo_unexecuted_blocks=1
00:21:34.383  		
00:21:34.383  		'
00:21:34.383   04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:21:34.383     04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:21:34.383     04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:21:34.383     04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob
00:21:34.383     04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:21:34.383     04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:34.383     04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:34.383      04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:34.383      04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:34.383      04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:34.383      04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH
00:21:34.383      04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:21:34.383  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:21:34.383    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:21:34.384    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0
00:21:34.384   04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64
00:21:34.384   04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:21:34.384   04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:21:34.384   04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit
00:21:34.384   04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:21:34.384   04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:21:34.384   04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs
00:21:34.384   04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no
00:21:34.384   04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns
00:21:34.384   04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:34.384   04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:34.384    04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:34.384   04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:21:34.384   04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:21:34.384   04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable
00:21:34.384   04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=()
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=()
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=()
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=()
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=()
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=()
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=()
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:21:36.639  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:21:36.639  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]]
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:21:36.639  Found net devices under 0000:0a:00.0: cvl_0_0
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]]
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:21:36.639  Found net devices under 0000:0a:00.1: cvl_0_1
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:21:36.639   04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:21:36.639   04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:21:36.639   04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:21:36.639   04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:21:36.640   04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:21:36.640   04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:21:36.640   04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:36.640   04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:21:36.640   04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:21:36.640  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:36.640  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms
00:21:36.640  
00:21:36.640  --- 10.0.0.2 ping statistics ---
00:21:36.640  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:36.640  rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms
00:21:36.640   04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:36.640  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:36.640  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms
00:21:36.640  
00:21:36.640  --- 10.0.0.1 ping statistics ---
00:21:36.640  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:36.640  rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms
00:21:36.640   04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:36.640   04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0
00:21:36.640   04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:21:36.640   04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:36.640   04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:21:36.640   04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:21:36.640   04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:36.640   04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:21:36.640   04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:21:36.640   04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:21:36.640   04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:21:36.640   04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable
00:21:36.640   04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:21:36.640   04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=294298
00:21:36.640   04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:21:36.640   04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 294298
00:21:36.640   04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 294298 ']'
00:21:36.640   04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:36.640   04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:36.640   04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:36.640  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:36.640   04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:36.640   04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:21:36.896  [2024-12-09 04:12:05.262597] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:21:36.896  [2024-12-09 04:12:05.262666] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:36.896  [2024-12-09 04:12:05.334949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:21:36.896  [2024-12-09 04:12:05.398066] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:36.896  [2024-12-09 04:12:05.398147] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:36.896  [2024-12-09 04:12:05.398162] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:36.896  [2024-12-09 04:12:05.398173] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:36.896  [2024-12-09 04:12:05.398183] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:36.896  [2024-12-09 04:12:05.399887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:21:36.896  [2024-12-09 04:12:05.399917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:21:36.896  [2024-12-09 04:12:05.399945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:21:36.896  [2024-12-09 04:12:05.399948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:37.154   04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:37.154   04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0
00:21:37.154   04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:21:37.154   04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:37.154   04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:21:37.154   04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:37.154   04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:21:37.154   04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config
00:21:40.430    04:12:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev
00:21:40.430    04:12:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr'
00:21:40.430   04:12:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0
00:21:40.430    04:12:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:21:40.995   04:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0'
00:21:40.995   04:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']'
00:21:40.995   04:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:21:40.995   04:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:21:40.995   04:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:21:40.995  [2024-12-09 04:12:09.538652] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:40.995   04:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:21:41.253   04:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:21:41.253   04:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:21:41.819   04:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:21:41.819   04:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:21:41.819   04:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:42.076  [2024-12-09 04:12:10.626722] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:42.077   04:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:21:42.335   04:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']'
00:21:42.335   04:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0'
00:21:42.335   04:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:21:42.335   04:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0'
00:21:43.707  Initializing NVMe Controllers
00:21:43.707  Attached to NVMe Controller at 0000:88:00.0 [8086:0a54]
00:21:43.707  Associating PCIE (0000:88:00.0) NSID 1 with lcore 0
00:21:43.707  Initialization complete. Launching workers.
00:21:43.707  ========================================================
00:21:43.707                                                                             Latency(us)
00:21:43.707  Device Information                     :       IOPS      MiB/s    Average        min        max
00:21:43.707  PCIE (0000:88:00.0) NSID 1 from core  0:   85336.30     333.34     374.41      38.59    5291.06
00:21:43.707  ========================================================
00:21:43.707  Total                                  :   85336.30     333.34     374.41      38.59    5291.06
00:21:43.707  
00:21:43.707   04:12:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:21:45.080  Initializing NVMe Controllers
00:21:45.080  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:45.080  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:45.080  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:21:45.080  Initialization complete. Launching workers.
00:21:45.080  ========================================================
00:21:45.080                                                                                                               Latency(us)
00:21:45.080  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:21:45.080  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:     102.00       0.40   10161.17     149.18   46025.81
00:21:45.080  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core  0:      40.00       0.16   25910.70    7948.17   47908.22
00:21:45.080  ========================================================
00:21:45.080  Total                                                                    :     142.00       0.55   14597.66     149.18   47908.22
00:21:45.080  
00:21:45.080   04:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:21:46.453  Initializing NVMe Controllers
00:21:46.453  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:46.453  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:46.453  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:21:46.453  Initialization complete. Launching workers.
00:21:46.453  ========================================================
00:21:46.453                                                                                                               Latency(us)
00:21:46.453  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:21:46.453  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:    7944.00      31.03    4030.29     662.58   10744.78
00:21:46.453  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core  0:    3711.00      14.50    8668.30    5154.15   18983.63
00:21:46.453  ========================================================
00:21:46.453  Total                                                                    :   11655.00      45.53    5507.05     662.58   18983.63
00:21:46.453  
00:21:46.453   04:12:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:21:46.453   04:12:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:21:46.453   04:12:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:21:48.981  Initializing NVMe Controllers
00:21:48.981  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:48.981  Controller IO queue size 128, less than required.
00:21:48.981  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:48.981  Controller IO queue size 128, less than required.
00:21:48.981  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:48.981  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:48.981  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:21:48.981  Initialization complete. Launching workers.
00:21:48.981  ========================================================
00:21:48.981                                                                                                               Latency(us)
00:21:48.981  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:21:48.981  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:    1720.41     430.10   75282.08   53986.19  127395.59
00:21:48.981  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core  0:     543.47     135.87  245374.39   97098.29  390437.69
00:21:48.981  ========================================================
00:21:48.981  Total                                                                    :    2263.88     565.97  116114.75   53986.19  390437.69
00:21:48.981  
00:21:48.981   04:12:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:21:49.238  No valid NVMe controllers or AIO or URING devices found
00:21:49.238  Initializing NVMe Controllers
00:21:49.238  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:49.238  Controller IO queue size 128, less than required.
00:21:49.238  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:49.238  WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:21:49.238  Controller IO queue size 128, less than required.
00:21:49.238  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:49.238  WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:21:49.238  WARNING: Some requested NVMe devices were skipped
00:21:49.238   04:12:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:21:51.779  Initializing NVMe Controllers
00:21:51.780  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:51.780  Controller IO queue size 128, less than required.
00:21:51.780  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:51.780  Controller IO queue size 128, less than required.
00:21:51.780  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:51.780  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:51.780  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:21:51.780  Initialization complete. Launching workers.
00:21:51.780  
00:21:51.780  ====================
00:21:51.780  lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:21:51.780  TCP transport:
00:21:51.780  	polls:              10029
00:21:51.780  	idle_polls:         6645
00:21:51.780  	sock_completions:   3384
00:21:51.780  	nvme_completions:   6125
00:21:51.780  	submitted_requests: 9114
00:21:51.780  	queued_requests:    1
00:21:51.780  
00:21:51.780  ====================
00:21:51.780  lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:21:51.780  TCP transport:
00:21:51.780  	polls:              10131
00:21:51.780  	idle_polls:         6828
00:21:51.780  	sock_completions:   3303
00:21:51.780  	nvme_completions:   5943
00:21:51.780  	submitted_requests: 8932
00:21:51.780  	queued_requests:    1
00:21:51.780  ========================================================
00:21:51.780                                                                                                               Latency(us)
00:21:51.780  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:21:51.780  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:    1529.39     382.35   85812.46   57883.57  150710.40
00:21:51.780  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core  0:    1483.94     370.98   87339.88   41997.74  137577.14
00:21:51.780  ========================================================
00:21:51.780  Total                                                                    :    3013.32     753.33   86564.65   41997.74  150710.40
00:21:51.780  
00:21:52.037   04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:21:52.037   04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:21:52.295   04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:21:52.295   04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:21:52.295   04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:21:52.295   04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:52.295   04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:21:52.295   04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:52.295   04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:21:52.295   04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:52.295   04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:52.295  rmmod nvme_tcp
00:21:52.295  rmmod nvme_fabrics
00:21:52.295  rmmod nvme_keyring
00:21:52.295   04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:52.295   04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:21:52.295   04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:21:52.295   04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 294298 ']'
00:21:52.295   04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 294298
00:21:52.295   04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 294298 ']'
00:21:52.295   04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 294298
00:21:52.295    04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname
00:21:52.295   04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:52.295    04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 294298
00:21:52.295   04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:21:52.295   04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:21:52.295   04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 294298'
00:21:52.295  killing process with pid 294298
00:21:52.295   04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 294298
00:21:52.295   04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 294298
00:21:54.196   04:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:21:54.196   04:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:21:54.196   04:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:21:54.196   04:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr
00:21:54.196   04:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save
00:21:54.196   04:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:21:54.196   04:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore
00:21:54.196   04:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:21:54.196   04:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:21:54.196   04:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:54.196   04:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:54.196    04:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:56.104   04:12:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:21:56.104  
00:21:56.104  real	0m21.783s
00:21:56.104  user	1m6.398s
00:21:56.104  sys	0m5.816s
00:21:56.104   04:12:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:56.104   04:12:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:21:56.104  ************************************
00:21:56.104  END TEST nvmf_perf
00:21:56.104  ************************************
00:21:56.104   04:12:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:21:56.104   04:12:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:21:56.104   04:12:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:56.104   04:12:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:21:56.104  ************************************
00:21:56.104  START TEST nvmf_fio_host
00:21:56.104  ************************************
00:21:56.104   04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:21:56.104  * Looking for test storage...
00:21:56.104  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:21:56.104    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:21:56.104     04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version
00:21:56.104     04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:21:56.104    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:21:56.104    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:21:56.104    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l
00:21:56.104    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l
00:21:56.104    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-:
00:21:56.104    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1
00:21:56.104    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-:
00:21:56.104    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2
00:21:56.104    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<'
00:21:56.104    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2
00:21:56.104    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1
00:21:56.104    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:21:56.104    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in
00:21:56.104    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1
00:21:56.104    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 ))
00:21:56.104    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:56.104     04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1
00:21:56.104     04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1
00:21:56.104     04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:21:56.104     04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1
00:21:56.104    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1
00:21:56.104     04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2
00:21:56.104     04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2
00:21:56.104     04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:21:56.104     04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2
00:21:56.104    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2
00:21:56.104    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:21:56.104    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:21:56.104    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0
00:21:56.104    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:21:56.104    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:21:56.104  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:56.104  		--rc genhtml_branch_coverage=1
00:21:56.104  		--rc genhtml_function_coverage=1
00:21:56.104  		--rc genhtml_legend=1
00:21:56.104  		--rc geninfo_all_blocks=1
00:21:56.104  		--rc geninfo_unexecuted_blocks=1
00:21:56.104  		
00:21:56.104  		'
00:21:56.104    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:21:56.104  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:56.104  		--rc genhtml_branch_coverage=1
00:21:56.104  		--rc genhtml_function_coverage=1
00:21:56.104  		--rc genhtml_legend=1
00:21:56.104  		--rc geninfo_all_blocks=1
00:21:56.104  		--rc geninfo_unexecuted_blocks=1
00:21:56.104  		
00:21:56.104  		'
00:21:56.104    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:21:56.104  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:56.104  		--rc genhtml_branch_coverage=1
00:21:56.104  		--rc genhtml_function_coverage=1
00:21:56.104  		--rc genhtml_legend=1
00:21:56.104  		--rc geninfo_all_blocks=1
00:21:56.104  		--rc geninfo_unexecuted_blocks=1
00:21:56.104  		
00:21:56.104  		'
00:21:56.104    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:21:56.104  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:56.104  		--rc genhtml_branch_coverage=1
00:21:56.104  		--rc genhtml_function_coverage=1
00:21:56.104  		--rc genhtml_legend=1
00:21:56.104  		--rc geninfo_all_blocks=1
00:21:56.104  		--rc geninfo_unexecuted_blocks=1
00:21:56.104  		
00:21:56.104  		'
00:21:56.105   04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:21:56.105    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob
00:21:56.105    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:21:56.105    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:56.105    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:56.105     04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:56.105     04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:56.105     04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:56.105     04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH
00:21:56.105     04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:56.105   04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:21:56.105     04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s
00:21:56.105    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:21:56.105    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:21:56.105    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:21:56.105    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:21:56.105    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:21:56.105    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:21:56.105    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:21:56.105    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:21:56.105    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:21:56.105     04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:21:56.105    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:56.105    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:21:56.105    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:21:56.105    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:21:56.105    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:21:56.105    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:21:56.105    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:21:56.105     04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob
00:21:56.105     04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:21:56.105     04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:56.105     04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:56.105      04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:56.105      04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:56.105      04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:56.105      04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH
00:21:56.105      04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:56.105    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0
00:21:56.105    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:21:56.105    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:21:56.105    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:21:56.105    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:21:56.105    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:21:56.105    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:21:56.105  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:21:56.105    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:21:56.105    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:21:56.105    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0
00:21:56.105   04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:21:56.105   04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit
00:21:56.105   04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:21:56.105   04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:21:56.105   04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs
00:21:56.105   04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no
00:21:56.105   04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns
00:21:56.105   04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:56.105   04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:56.105    04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:56.105   04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:21:56.105   04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:21:56.105   04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable
00:21:56.105   04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=()
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=()
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=()
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=()
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=()
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=()
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=()
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:21:58.664  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:21:58.664  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:21:58.664   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]]
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:21:58.665  Found net devices under 0000:0a:00.0: cvl_0_0
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]]
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:21:58.665  Found net devices under 0000:0a:00.1: cvl_0_1
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:21:58.665  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:58.665  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms
00:21:58.665  
00:21:58.665  --- 10.0.0.2 ping statistics ---
00:21:58.665  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:58.665  rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:58.665  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:58.665  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms
00:21:58.665  
00:21:58.665  --- 10.0.0.1 ping statistics ---
00:21:58.665  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:58.665  rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]]
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=298779
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 298779
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 298779 ']'
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:58.665  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:58.665   04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:21:58.665  [2024-12-09 04:12:26.988758] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:21:58.665  [2024-12-09 04:12:26.988852] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:58.665  [2024-12-09 04:12:27.060991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:21:58.665  [2024-12-09 04:12:27.118674] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:58.665  [2024-12-09 04:12:27.118727] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:58.665  [2024-12-09 04:12:27.118755] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:58.665  [2024-12-09 04:12:27.118766] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:58.665  [2024-12-09 04:12:27.118775] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:58.665  [2024-12-09 04:12:27.120454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:21:58.665  [2024-12-09 04:12:27.120511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:21:58.665  [2024-12-09 04:12:27.120583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:21:58.665  [2024-12-09 04:12:27.120587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:58.665   04:12:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:58.665   04:12:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0
00:21:58.665   04:12:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:21:58.923  [2024-12-09 04:12:27.499490] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:59.180   04:12:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt
00:21:59.180   04:12:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:59.180   04:12:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:21:59.180   04:12:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:21:59.437  Malloc1
00:21:59.437   04:12:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:21:59.695   04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:21:59.952   04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:00.208  [2024-12-09 04:12:28.636517] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:00.208   04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:22:00.465   04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
00:22:00.465   04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:22:00.465   04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:22:00.465   04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:22:00.465   04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:22:00.465   04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers
00:22:00.465   04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:22:00.465   04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift
00:22:00.465   04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib=
00:22:00.465   04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:22:00.465    04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:22:00.465    04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan
00:22:00.465    04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:22:00.465   04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:22:00.466   04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:22:00.466   04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:22:00.466    04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:22:00.466    04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:22:00.466    04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:22:00.466   04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:22:00.466   04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:22:00.466   04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:22:00.466   04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:22:00.722  test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:22:00.722  fio-3.35
00:22:00.722  Starting 1 thread
00:22:03.249  
00:22:03.249  test: (groupid=0, jobs=1): err= 0: pid=299141: Mon Dec  9 04:12:31 2024
00:22:03.249    read: IOPS=8763, BW=34.2MiB/s (35.9MB/s)(68.7MiB/2006msec)
00:22:03.249      slat (nsec): min=1948, max=223784, avg=2542.58, stdev=2248.04
00:22:03.249      clat (usec): min=2759, max=14211, avg=7965.60, stdev=681.63
00:22:03.249       lat (usec): min=2790, max=14213, avg=7968.14, stdev=681.49
00:22:03.249      clat percentiles (usec):
00:22:03.249       |  1.00th=[ 6456],  5.00th=[ 6915], 10.00th=[ 7177], 20.00th=[ 7439],
00:22:03.249       | 30.00th=[ 7635], 40.00th=[ 7832], 50.00th=[ 7963], 60.00th=[ 8160],
00:22:03.249       | 70.00th=[ 8291], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8979],
00:22:03.249       | 99.00th=[ 9372], 99.50th=[ 9634], 99.90th=[13042], 99.95th=[13960],
00:22:03.249       | 99.99th=[14222]
00:22:03.249     bw (  KiB/s): min=33824, max=35672, per=99.94%, avg=35032.00, stdev=830.54, samples=4
00:22:03.249     iops        : min= 8456, max= 8918, avg=8758.00, stdev=207.63, samples=4
00:22:03.249    write: IOPS=8770, BW=34.3MiB/s (35.9MB/s)(68.7MiB/2006msec); 0 zone resets
00:22:03.249      slat (usec): min=2, max=200, avg= 2.71, stdev= 1.89
00:22:03.249      clat (usec): min=1782, max=12422, avg=6571.45, stdev=559.19
00:22:03.249       lat (usec): min=1791, max=12424, avg=6574.16, stdev=559.11
00:22:03.249      clat percentiles (usec):
00:22:03.249       |  1.00th=[ 5342],  5.00th=[ 5735], 10.00th=[ 5932], 20.00th=[ 6128],
00:22:03.249       | 30.00th=[ 6325], 40.00th=[ 6456], 50.00th=[ 6587], 60.00th=[ 6718],
00:22:03.249       | 70.00th=[ 6849], 80.00th=[ 6980], 90.00th=[ 7177], 95.00th=[ 7373],
00:22:03.249       | 99.00th=[ 7767], 99.50th=[ 7963], 99.90th=[10814], 99.95th=[11600],
00:22:03.249       | 99.99th=[12387]
00:22:03.249     bw (  KiB/s): min=34688, max=35456, per=99.94%, avg=35060.00, stdev=360.00, samples=4
00:22:03.249     iops        : min= 8672, max= 8864, avg=8765.00, stdev=90.00, samples=4
00:22:03.249    lat (msec)   : 2=0.01%, 4=0.13%, 10=99.63%, 20=0.23%
00:22:03.249    cpu          : usr=63.69%, sys=34.76%, ctx=82, majf=0, minf=35
00:22:03.249    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
00:22:03.249       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:22:03.249       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:22:03.249       issued rwts: total=17579,17594,0,0 short=0,0,0,0 dropped=0,0,0,0
00:22:03.249       latency   : target=0, window=0, percentile=100.00%, depth=128
00:22:03.249  
00:22:03.249  Run status group 0 (all jobs):
00:22:03.249     READ: bw=34.2MiB/s (35.9MB/s), 34.2MiB/s-34.2MiB/s (35.9MB/s-35.9MB/s), io=68.7MiB (72.0MB), run=2006-2006msec
00:22:03.249    WRITE: bw=34.3MiB/s (35.9MB/s), 34.3MiB/s-34.3MiB/s (35.9MB/s-35.9MB/s), io=68.7MiB (72.1MB), run=2006-2006msec
00:22:03.249   04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:22:03.249   04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:22:03.249   04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:22:03.249   04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:22:03.249   04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers
00:22:03.249   04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:22:03.249   04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift
00:22:03.249   04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib=
00:22:03.249   04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:22:03.249    04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:22:03.249    04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan
00:22:03.249    04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:22:03.249   04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:22:03.249   04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:22:03.249   04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:22:03.249    04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:22:03.249    04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:22:03.249    04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:22:03.249   04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:22:03.249   04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:22:03.249   04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:22:03.249   04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:22:03.249  test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128
00:22:03.249  fio-3.35
00:22:03.249  Starting 1 thread
00:22:05.779  
00:22:05.779  test: (groupid=0, jobs=1): err= 0: pid=299593: Mon Dec  9 04:12:34 2024
00:22:05.779    read: IOPS=7817, BW=122MiB/s (128MB/s)(246MiB/2010msec)
00:22:05.779      slat (usec): min=2, max=106, avg= 3.78, stdev= 2.00
00:22:05.779      clat (usec): min=2589, max=17475, avg=9239.99, stdev=2301.00
00:22:05.779       lat (usec): min=2594, max=17478, avg=9243.76, stdev=2301.03
00:22:05.779      clat percentiles (usec):
00:22:05.779       |  1.00th=[ 4883],  5.00th=[ 5800], 10.00th=[ 6456], 20.00th=[ 7308],
00:22:05.779       | 30.00th=[ 7832], 40.00th=[ 8455], 50.00th=[ 9110], 60.00th=[ 9634],
00:22:05.779       | 70.00th=[10290], 80.00th=[11076], 90.00th=[12256], 95.00th=[13566],
00:22:05.779       | 99.00th=[15533], 99.50th=[16188], 99.90th=[16909], 99.95th=[17171],
00:22:05.779       | 99.99th=[17433]
00:22:05.779     bw (  KiB/s): min=62208, max=70400, per=52.74%, avg=65968.00, stdev=3869.53, samples=4
00:22:05.779     iops        : min= 3888, max= 4400, avg=4123.00, stdev=241.85, samples=4
00:22:05.779    write: IOPS=4569, BW=71.4MiB/s (74.9MB/s)(134MiB/1879msec); 0 zone resets
00:22:05.779      slat (usec): min=30, max=145, avg=33.86, stdev= 5.63
00:22:05.779      clat (usec): min=5836, max=20594, avg=12278.94, stdev=2237.77
00:22:05.779       lat (usec): min=5872, max=20625, avg=12312.80, stdev=2237.85
00:22:05.779      clat percentiles (usec):
00:22:05.779       |  1.00th=[ 7963],  5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[10290],
00:22:05.779       | 30.00th=[10945], 40.00th=[11469], 50.00th=[12125], 60.00th=[12780],
00:22:05.779       | 70.00th=[13566], 80.00th=[14222], 90.00th=[15270], 95.00th=[16188],
00:22:05.779       | 99.00th=[17957], 99.50th=[18482], 99.90th=[19792], 99.95th=[20055],
00:22:05.779       | 99.99th=[20579]
00:22:05.779     bw (  KiB/s): min=64896, max=71296, per=93.18%, avg=68136.00, stdev=3404.19, samples=4
00:22:05.779     iops        : min= 4056, max= 4456, avg=4258.50, stdev=212.76, samples=4
00:22:05.779    lat (msec)   : 4=0.16%, 10=47.98%, 20=51.83%, 50=0.02%
00:22:05.779    cpu          : usr=75.16%, sys=23.49%, ctx=45, majf=0, minf=58
00:22:05.779    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7%
00:22:05.779       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:22:05.779       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:22:05.779       issued rwts: total=15713,8587,0,0 short=0,0,0,0 dropped=0,0,0,0
00:22:05.779       latency   : target=0, window=0, percentile=100.00%, depth=128
00:22:05.779  
00:22:05.779  Run status group 0 (all jobs):
00:22:05.779     READ: bw=122MiB/s (128MB/s), 122MiB/s-122MiB/s (128MB/s-128MB/s), io=246MiB (257MB), run=2010-2010msec
00:22:05.779    WRITE: bw=71.4MiB/s (74.9MB/s), 71.4MiB/s-71.4MiB/s (74.9MB/s-74.9MB/s), io=134MiB (141MB), run=1879-1879msec
00:22:05.779   04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:06.037   04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']'
00:22:06.037   04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:22:06.037   04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state
00:22:06.037   04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini
00:22:06.037   04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:06.037   04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync
00:22:06.037   04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:06.037   04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e
00:22:06.037   04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:06.037   04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:06.037  rmmod nvme_tcp
00:22:06.037  rmmod nvme_fabrics
00:22:06.037  rmmod nvme_keyring
00:22:06.037   04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:06.037   04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e
00:22:06.037   04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0
00:22:06.037   04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 298779 ']'
00:22:06.037   04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 298779
00:22:06.037   04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 298779 ']'
00:22:06.037   04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 298779
00:22:06.037    04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname
00:22:06.037   04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:06.038    04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 298779
00:22:06.038   04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:06.038   04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:06.038   04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 298779'
00:22:06.038  killing process with pid 298779
00:22:06.038   04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 298779
00:22:06.038   04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 298779
00:22:06.297   04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:06.297   04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:22:06.297   04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:22:06.297   04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr
00:22:06.297   04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save
00:22:06.297   04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:22:06.297   04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore
00:22:06.297   04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:06.297   04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns
00:22:06.297   04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:06.297   04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:06.297    04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:08.824   04:12:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:22:08.824  
00:22:08.824  real	0m12.400s
00:22:08.824  user	0m36.415s
00:22:08.824  sys	0m4.166s
00:22:08.824   04:12:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:08.824   04:12:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:22:08.824  ************************************
00:22:08.824  END TEST nvmf_fio_host
00:22:08.824  ************************************
00:22:08.824   04:12:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp
00:22:08.824   04:12:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:22:08.824   04:12:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:08.824   04:12:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:22:08.824  ************************************
00:22:08.824  START TEST nvmf_failover
00:22:08.824  ************************************
00:22:08.824   04:12:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp
00:22:08.824  * Looking for test storage...
00:22:08.824  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:22:08.824    04:12:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:22:08.824     04:12:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version
00:22:08.824     04:12:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:22:08.824    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:22:08.824    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:22:08.824    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l
00:22:08.824    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l
00:22:08.824    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-:
00:22:08.824    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1
00:22:08.824    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-:
00:22:08.824    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2
00:22:08.824    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<'
00:22:08.824    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2
00:22:08.824    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1
00:22:08.824    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:22:08.824    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in
00:22:08.824    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1
00:22:08.824    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 ))
00:22:08.824    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:22:08.824     04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1
00:22:08.824     04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1
00:22:08.824     04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:22:08.824     04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1
00:22:08.824    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1
00:22:08.824     04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2
00:22:08.824     04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2
00:22:08.824     04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:22:08.824     04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2
00:22:08.824    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2
00:22:08.824    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:22:08.824    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:22:08.824    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0
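The trace above walks `scripts/common.sh`'s `cmp_versions` component-by-component comparison of `lcov` 1.15 against 2 (split on `.`, compare each numeric field, pad the shorter version with zeros). A minimal standalone sketch of that logic, with an illustrative name rather than the in-tree helper:

```shell
# Sketch of a dotted-version "less than" test in the style of
# scripts/common.sh cmp_versions. version_lt is an illustrative name;
# it splits on dots only (the real helper also splits on '-' and ':').
version_lt() {
    local IFS=.
    local -a v1=($1) v2=($2)
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        # Missing components default to 0, so 1.15 is padded against 2.0.
        local a=${v1[i]:-0} b=${v2[i]:-0}
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1  # equal versions are not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

This mirrors why the log takes the `return 0` branch at `scripts/common.sh@368`: the first components already decide 1 < 2.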
00:22:08.824    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:22:08.824    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:22:08.824  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:08.824  		--rc genhtml_branch_coverage=1
00:22:08.824  		--rc genhtml_function_coverage=1
00:22:08.824  		--rc genhtml_legend=1
00:22:08.824  		--rc geninfo_all_blocks=1
00:22:08.824  		--rc geninfo_unexecuted_blocks=1
00:22:08.824  		
00:22:08.824  		'
00:22:08.824    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:22:08.824  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:08.824  		--rc genhtml_branch_coverage=1
00:22:08.824  		--rc genhtml_function_coverage=1
00:22:08.824  		--rc genhtml_legend=1
00:22:08.824  		--rc geninfo_all_blocks=1
00:22:08.824  		--rc geninfo_unexecuted_blocks=1
00:22:08.824  		
00:22:08.824  		'
00:22:08.824    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:22:08.824  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:08.824  		--rc genhtml_branch_coverage=1
00:22:08.824  		--rc genhtml_function_coverage=1
00:22:08.824  		--rc genhtml_legend=1
00:22:08.824  		--rc geninfo_all_blocks=1
00:22:08.824  		--rc geninfo_unexecuted_blocks=1
00:22:08.824  		
00:22:08.824  		'
00:22:08.824    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:22:08.824  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:08.824  		--rc genhtml_branch_coverage=1
00:22:08.824  		--rc genhtml_function_coverage=1
00:22:08.824  		--rc genhtml_legend=1
00:22:08.824  		--rc geninfo_all_blocks=1
00:22:08.824  		--rc geninfo_unexecuted_blocks=1
00:22:08.824  		
00:22:08.824  		'
00:22:08.824   04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:22:08.824     04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s
00:22:08.824    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:22:08.825    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:22:08.825    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:22:08.825    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:22:08.825    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:22:08.825    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:22:08.825    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:22:08.825    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:22:08.825    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:22:08.825     04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:22:08.825    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:22:08.825    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:22:08.825    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:22:08.825    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:22:08.825    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:22:08.825    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:22:08.825    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:22:08.825     04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob
00:22:08.825     04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:22:08.825     04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:22:08.825     04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:22:08.825      04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:08.825      04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:08.825      04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:08.825      04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH
00:22:08.825      04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:08.825    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0
00:22:08.825    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:22:08.825    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:22:08.825    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:22:08.825    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:22:08.825    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:22:08.825    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:22:08.825  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:22:08.825    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:22:08.825    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:22:08.825    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0
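The `[: : integer expression expected` message captured above comes from `nvmf/common.sh` line 33 testing an empty variable with `-eq` (`'[' '' -eq 1 ']'`); the script tolerates the noise because the test simply evaluates false. A defensive sketch of the pattern that avoids the error, with an illustrative function name:

```shell
# '[' "" -eq 1 ']' prints "integer expression expected" because ""
# is not a number. Defaulting empty/unset values to 0 keeps the
# arithmetic test valid. is_enabled is an illustrative name.
is_enabled() {
    [ "${1:-0}" -eq 1 ]
}

is_enabled "" && echo on || echo off   # empty input: prints "off", no error
is_enabled 1  && echo on || echo off   # prints "on"
```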
00:22:08.825   04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64
00:22:08.825   04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:22:08.825   04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:22:08.825   04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:22:08.825   04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit
00:22:08.825   04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:22:08.825   04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:22:08.825   04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs
00:22:08.825   04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no
00:22:08.825   04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns
00:22:08.825   04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:08.825   04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:08.825    04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:08.825   04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:22:08.825   04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:22:08.825   04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable
00:22:08.825   04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=()
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=()
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=()
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=()
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=()
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=()
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=()
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:22:10.723  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:22:10.723  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]]
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:22:10.723  Found net devices under 0000:0a:00.0: cvl_0_0
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]]
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:22:10.723  Found net devices under 0000:0a:00.1: cvl_0_1
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:22:10.723   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:22:10.982   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:22:10.982   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:10.982   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:22:10.982   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:10.982   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:10.982   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:10.982   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:22:10.982   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:22:10.982  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:10.982  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms
00:22:10.982  
00:22:10.982  --- 10.0.0.2 ping statistics ---
00:22:10.982  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:10.982  rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms
00:22:10.982   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:10.982  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:10.982  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms
00:22:10.982  
00:22:10.982  --- 10.0.0.1 ping statistics ---
00:22:10.982  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:10.982  rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms
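The `nvmf_tcp_init` sequence above moves the target NIC (`cvl_0_0`) into a network namespace, assigns 10.0.0.1/10.0.0.2 to the two ends, brings the links up, opens TCP port 4420 via iptables, and verifies both directions with ping. A hedged reconstruction of those steps as one function; the `run` dry-run wrapper is illustrative (set `DRY_RUN=1` to print the commands instead of executing them, since the real commands require root and the physical interfaces):

```shell
# Dry-run-capable sketch of the namespace plumbing nvmf_tcp_init performs.
# Interface names and addresses come from the log; run() is illustrative.
run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$*"; else "$@"; fi; }

setup_nvmf_netns() {
    local ns=$1 target_if=$2 initiator_if=$3
    run ip netns add "$ns"
    run ip link set "$target_if" netns "$ns"          # target NIC lives in the namespace
    run ip addr add 10.0.0.1/24 dev "$initiator_if"   # initiator side stays in the root ns
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    run ip link set "$initiator_if" up
    run ip netns exec "$ns" ip link set "$target_if" up
    run ip netns exec "$ns" ip link set lo up
    # Accept NVMe/TCP traffic on the discovery/IO port.
    run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    # Verification would be: ping -c 1 10.0.0.2 and the reverse from the ns.
}

DRY_RUN=1 setup_nvmf_netns cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

Isolating the target in its own namespace is what lets a single host act as both NVMe-oF target and initiator over real NICs, which is why the later `nvmf_tgt` launch is prefixed with `ip netns exec cvl_0_0_ns_spdk`.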
00:22:10.982   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:10.982   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0
00:22:10.982   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:22:10.982   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:10.982   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:22:10.982   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:22:10.982   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:10.982   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:22:10.982   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:22:10.982   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE
00:22:10.982   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:10.982   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:10.982   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:22:10.982   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=301801
00:22:10.982   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:22:10.982   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 301801
00:22:10.982   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 301801 ']'
00:22:10.982   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:10.982   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:10.982   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:10.982  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:10.982   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:10.982   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:22:10.982  [2024-12-09 04:12:39.467938] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:22:10.982  [2024-12-09 04:12:39.468015] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:10.982  [2024-12-09 04:12:39.539929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:22:11.241  [2024-12-09 04:12:39.598384] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:11.241  [2024-12-09 04:12:39.598441] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:11.241  [2024-12-09 04:12:39.598455] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:11.241  [2024-12-09 04:12:39.598466] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:11.241  [2024-12-09 04:12:39.598476] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:11.241  [2024-12-09 04:12:39.599972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:22:11.241  [2024-12-09 04:12:39.600040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:22:11.241  [2024-12-09 04:12:39.600043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:22:11.241   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:11.241   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:22:11.241   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:22:11.241   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:11.241   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:22:11.241   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:11.241   04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:22:11.499  [2024-12-09 04:12:40.046704] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:11.499   04:12:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:22:12.065  Malloc0
00:22:12.065   04:12:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:22:12.323   04:12:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:22:12.581   04:12:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:12.838  [2024-12-09 04:12:41.242433] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:12.838   04:12:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:22:13.096  [2024-12-09 04:12:41.511186] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:22:13.096   04:12:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:22:13.355  [2024-12-09 04:12:41.836108] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
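The target-side setup above (transport, malloc bdev, subsystem, namespace, then one listener per failover path) can be sketched as a minimal script. Note this is a hedged sketch: `rpc` here is a hypothetical echo stand-in for `scripts/rpc.py`, so the sketch runs without a live SPDK target; the RPC names and flags are taken directly from the log lines above.

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for scripts/rpc.py so this sketch runs anywhere;
# a real run invokes rpc.py against a live SPDK nvmf target.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1

rpc nvmf_create_transport -t tcp -o -u 8192   # TCP transport, 8192-byte in-capsule data
rpc bdev_malloc_create 64 512 -b Malloc0      # 64 MiB malloc bdev, 512-byte blocks
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns "$NQN" Malloc0
for port in 4420 4421 4422; do                # three listeners = three paths to fail over across
    rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s "$port"
done
```

The test then removes and re-adds listeners one at a time while bdevperf runs I/O, forcing the initiator to switch paths.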
00:22:13.355   04:12:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=302091
00:22:13.355   04:12:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:22:13.355   04:12:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:22:13.355   04:12:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 302091 /var/tmp/bdevperf.sock
00:22:13.355   04:12:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 302091 ']'
00:22:13.355   04:12:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:13.355   04:12:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:13.355   04:12:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:13.355  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:13.355   04:12:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:13.355   04:12:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:22:13.613   04:12:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:13.613   04:12:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:22:13.613   04:12:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:22:14.179  NVMe0n1
00:22:14.179   04:12:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:22:14.437  
00:22:14.437   04:12:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=302224
00:22:14.437   04:12:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:14.437   04:12:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:22:15.371   04:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:15.630  [2024-12-09 04:12:44.167954] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8be00 is same with the state(6) to be set
00:22:15.630   04:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:22:18.909   04:12:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:22:19.167  
00:22:19.167   04:12:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:22:19.432  [2024-12-09 04:12:47.866544] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8c8b0 is same with the state(6) to be set
00:22:19.432   04:12:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:22:22.715   04:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:22.715  [2024-12-09 04:12:51.151157] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:22.715   04:12:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:22:23.650   04:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:22:24.216   04:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 302224
00:22:29.479  {
00:22:29.479    "results": [
00:22:29.479      {
00:22:29.479        "job": "NVMe0n1",
00:22:29.479        "core_mask": "0x1",
00:22:29.479        "workload": "verify",
00:22:29.479        "status": "finished",
00:22:29.479        "verify_range": {
00:22:29.479          "start": 0,
00:22:29.479          "length": 16384
00:22:29.479        },
00:22:29.479        "queue_depth": 128,
00:22:29.479        "io_size": 4096,
00:22:29.479        "runtime": 15.042254,
00:22:29.479        "iops": 8276.818088565717,
00:22:29.479        "mibps": 32.33132065845983,
00:22:29.479        "io_failed": 10301,
00:22:29.479        "io_timeout": 0,
00:22:29.479        "avg_latency_us": 14218.11726537573,
00:22:29.479        "min_latency_us": 570.4059259259259,
00:22:29.479        "max_latency_us": 43690.666666666664
00:22:29.479      }
00:22:29.479    ],
00:22:29.479    "core_count": 1
00:22:29.479  }
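The reported `mibps` in the results block above follows directly from the reported IOPS and the 4096-byte I/O size (`-o 4096` in the bdevperf invocation). A small arithmetic sanity check, not part of the test itself:

```python
# Cross-check bdevperf's reported throughput: mibps = iops * io_size / 2**20
iops = 8276.818088565717   # "iops" from the results JSON
io_size = 4096             # bytes per I/O, from the job config (-o 4096)

mibps = iops * io_size / (1024 * 1024)
print(round(mibps, 2))     # -> 32.33, matching the reported "mibps" field
```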
00:22:29.479   04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 302091
00:22:29.479   04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 302091 ']'
00:22:29.479   04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 302091
00:22:29.479    04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:22:29.736   04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:29.736    04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 302091
00:22:29.736   04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:29.736   04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:29.736   04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 302091'
00:22:29.736  killing process with pid 302091
00:22:29.736   04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 302091
00:22:29.737   04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 302091
00:22:30.006   04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:30.006  [2024-12-09 04:12:41.903939] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:22:30.006  [2024-12-09 04:12:41.904040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid302091 ]
00:22:30.006  [2024-12-09 04:12:41.977639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:30.006  [2024-12-09 04:12:42.036205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:30.006  Running I/O for 15 seconds...
00:22:30.006       8590.00 IOPS,    33.55 MiB/s
00:22:30.006  [2024-12-09 04:12:44.169770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:83144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.006  [2024-12-09 04:12:44.169809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.006  [2024-12-09 04:12:44.169835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:83152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.006  [2024-12-09 04:12:44.169850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.006  [2024-12-09 04:12:44.169867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:83160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.006  [2024-12-09 04:12:44.169881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.006  [2024-12-09 04:12:44.169897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:83168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.006  [2024-12-09 04:12:44.169910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.006  [2024-12-09 04:12:44.169925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:83176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.006  [2024-12-09 04:12:44.169938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.006  [2024-12-09 04:12:44.169953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:83184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.006  [2024-12-09 04:12:44.169967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.006  [2024-12-09 04:12:44.169982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:83192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.006  [2024-12-09 04:12:44.169996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.006  [2024-12-09 04:12:44.170011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:83200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.006  [2024-12-09 04:12:44.170025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.006  [2024-12-09 04:12:44.170039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:83208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.006  [2024-12-09 04:12:44.170053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.006  [2024-12-09 04:12:44.170067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:83216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.006  [2024-12-09 04:12:44.170081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.006  [2024-12-09 04:12:44.170096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.006  [2024-12-09 04:12:44.170110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.006  [2024-12-09 04:12:44.170131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:83232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.006  [2024-12-09 04:12:44.170145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.006  [2024-12-09 04:12:44.170160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.006  [2024-12-09 04:12:44.170173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.006  [2024-12-09 04:12:44.170187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:83248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.006  [2024-12-09 04:12:44.170200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.006  [2024-12-09 04:12:44.170215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:83256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.006  [2024-12-09 04:12:44.170228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.006  [2024-12-09 04:12:44.170242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:83264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.006  [2024-12-09 04:12:44.170278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.006  [2024-12-09 04:12:44.170297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:83272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.006  [2024-12-09 04:12:44.170312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.006  [2024-12-09 04:12:44.170328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:83280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.006  [2024-12-09 04:12:44.170344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.006  [2024-12-09 04:12:44.170360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:83288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.006  [2024-12-09 04:12:44.170373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.006  [2024-12-09 04:12:44.170390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:83296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.006  [2024-12-09 04:12:44.170403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.006  [2024-12-09 04:12:44.170418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:83304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.006  [2024-12-09 04:12:44.170432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.006  [2024-12-09 04:12:44.170446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:83312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.006  [2024-12-09 04:12:44.170459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.006  [2024-12-09 04:12:44.170474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:83320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.006  [2024-12-09 04:12:44.170487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.006  [2024-12-09 04:12:44.170502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:83328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.006  [2024-12-09 04:12:44.170530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.006  [2024-12-09 04:12:44.170547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.006  [2024-12-09 04:12:44.170561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.006  [2024-12-09 04:12:44.170592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:83344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.006  [2024-12-09 04:12:44.170605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.006  [2024-12-09 04:12:44.170619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:83352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.006  [2024-12-09 04:12:44.170632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.006  [2024-12-09 04:12:44.170646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:83360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.006  [2024-12-09 04:12:44.170660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.006  [2024-12-09 04:12:44.170674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:83368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.006  [2024-12-09 04:12:44.170688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.006  [2024-12-09 04:12:44.170703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:83376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.006  [2024-12-09 04:12:44.170716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.006  [2024-12-09 04:12:44.170730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:83384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.006  [2024-12-09 04:12:44.170744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.006  [2024-12-09 04:12:44.170758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:83392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.006  [2024-12-09 04:12:44.170772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.006  [2024-12-09 04:12:44.170786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:83400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.006  [2024-12-09 04:12:44.170799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.006  [2024-12-09 04:12:44.170820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:83408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.007  [2024-12-09 04:12:44.170834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.170848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:83416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.007  [2024-12-09 04:12:44.170861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.170875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:83424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.007  [2024-12-09 04:12:44.170888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.170907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:83432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.007  [2024-12-09 04:12:44.170921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.170935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:83440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.007  [2024-12-09 04:12:44.170948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.170962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:83448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.007  [2024-12-09 04:12:44.170975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.170990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:83456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.007  [2024-12-09 04:12:44.171007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.171022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:83464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.007  [2024-12-09 04:12:44.171035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.171050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:83472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.007  [2024-12-09 04:12:44.171064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.171078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:83480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.007  [2024-12-09 04:12:44.171091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.171105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.007  [2024-12-09 04:12:44.171118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.171132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:83496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.007  [2024-12-09 04:12:44.171145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.171160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:83504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.007  [2024-12-09 04:12:44.171173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.171187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.007  [2024-12-09 04:12:44.171200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.171215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:83520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.007  [2024-12-09 04:12:44.171228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.171242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:83528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.007  [2024-12-09 04:12:44.171296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.171323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:83536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.007  [2024-12-09 04:12:44.171338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.171353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:83544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.007  [2024-12-09 04:12:44.171366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.171382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:83552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.007  [2024-12-09 04:12:44.171396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.171411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.007  [2024-12-09 04:12:44.171424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.171439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:83568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.007  [2024-12-09 04:12:44.171453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.171468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:83576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.007  [2024-12-09 04:12:44.171481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.171496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:83584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.007  [2024-12-09 04:12:44.171511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.171527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:83592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.007  [2024-12-09 04:12:44.171541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.171556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:83600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.007  [2024-12-09 04:12:44.171593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.171607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:83608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.007  [2024-12-09 04:12:44.171621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.171635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:83616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.007  [2024-12-09 04:12:44.171648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.171663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:83632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.007  [2024-12-09 04:12:44.171675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.171690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:83640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.007  [2024-12-09 04:12:44.171707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.171722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:83648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.007  [2024-12-09 04:12:44.171735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.171749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:83656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.007  [2024-12-09 04:12:44.171762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.171776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:83664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.007  [2024-12-09 04:12:44.171789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.171808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:83672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.007  [2024-12-09 04:12:44.171821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.171835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:83680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.007  [2024-12-09 04:12:44.171848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.171863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:83688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.007  [2024-12-09 04:12:44.171877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.171891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:83696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.007  [2024-12-09 04:12:44.171904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.171918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.007  [2024-12-09 04:12:44.171932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.171946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:83712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.007  [2024-12-09 04:12:44.171959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.171988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:83720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.007  [2024-12-09 04:12:44.172002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.172017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:83728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.007  [2024-12-09 04:12:44.172031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.172046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:83736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.007  [2024-12-09 04:12:44.172059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.172078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:83744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.007  [2024-12-09 04:12:44.172104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.172119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:83752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.007  [2024-12-09 04:12:44.172132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.172147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:83624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.007  [2024-12-09 04:12:44.172167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.172182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:83760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.007  [2024-12-09 04:12:44.172196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.172211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.007  [2024-12-09 04:12:44.172225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.172240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.007  [2024-12-09 04:12:44.172253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.172299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:83784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.007  [2024-12-09 04:12:44.172316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.172336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:83792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.007  [2024-12-09 04:12:44.172350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.172365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:83800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.007  [2024-12-09 04:12:44.172379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.172395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:83808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.007  [2024-12-09 04:12:44.172408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.172423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:83816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.007  [2024-12-09 04:12:44.172437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.172452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.007  [2024-12-09 04:12:44.172466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.172482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:83832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.007  [2024-12-09 04:12:44.172499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.172515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:83840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.007  [2024-12-09 04:12:44.172529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.172544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:83848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.007  [2024-12-09 04:12:44.172558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.172573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:83856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.007  [2024-12-09 04:12:44.172607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.172622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:83864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.007  [2024-12-09 04:12:44.172635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.172651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:83872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.007  [2024-12-09 04:12:44.172664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.172679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:83880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.007  [2024-12-09 04:12:44.172692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.172708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.007  [2024-12-09 04:12:44.172721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.172745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:83896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.007  [2024-12-09 04:12:44.172758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.172773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:83904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.007  [2024-12-09 04:12:44.172786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.172801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.007  [2024-12-09 04:12:44.172815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.172834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:83920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.007  [2024-12-09 04:12:44.172848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.172868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:83928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.007  [2024-12-09 04:12:44.172882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.172900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:83936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.007  [2024-12-09 04:12:44.172915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.172930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:83944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.007  [2024-12-09 04:12:44.172953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.007  [2024-12-09 04:12:44.172968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.008  [2024-12-09 04:12:44.172981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:44.172996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.008  [2024-12-09 04:12:44.173016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:44.173031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.008  [2024-12-09 04:12:44.173044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:44.173059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:83976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.008  [2024-12-09 04:12:44.173072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:44.173087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.008  [2024-12-09 04:12:44.173101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:44.173116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.008  [2024-12-09 04:12:44.173129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:44.173144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.008  [2024-12-09 04:12:44.173157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:44.173172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.008  [2024-12-09 04:12:44.173186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:44.173217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:30.008  [2024-12-09 04:12:44.173233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84016 len:8 PRP1 0x0 PRP2 0x0
00:22:30.008  [2024-12-09 04:12:44.173246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:44.173298] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:30.008  [2024-12-09 04:12:44.173313] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:30.008  [2024-12-09 04:12:44.173324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84024 len:8 PRP1 0x0 PRP2 0x0
00:22:30.008  [2024-12-09 04:12:44.173337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:44.173355] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:30.008  [2024-12-09 04:12:44.173369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:30.008  [2024-12-09 04:12:44.173381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84032 len:8 PRP1 0x0 PRP2 0x0
00:22:30.008  [2024-12-09 04:12:44.173399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:44.173413] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:30.008  [2024-12-09 04:12:44.173424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:30.008  [2024-12-09 04:12:44.173436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84040 len:8 PRP1 0x0 PRP2 0x0
00:22:30.008  [2024-12-09 04:12:44.173450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:44.173463] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:30.008  [2024-12-09 04:12:44.173473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:30.008  [2024-12-09 04:12:44.173485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84048 len:8 PRP1 0x0 PRP2 0x0
00:22:30.008  [2024-12-09 04:12:44.173498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:44.173511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:30.008  [2024-12-09 04:12:44.173522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:30.008  [2024-12-09 04:12:44.173533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84056 len:8 PRP1 0x0 PRP2 0x0
00:22:30.008  [2024-12-09 04:12:44.173546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:44.173558] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:30.008  [2024-12-09 04:12:44.173571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:30.008  [2024-12-09 04:12:44.173597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84064 len:8 PRP1 0x0 PRP2 0x0
00:22:30.008  [2024-12-09 04:12:44.173610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:44.173623] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:30.008  [2024-12-09 04:12:44.173636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:30.008  [2024-12-09 04:12:44.173647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84072 len:8 PRP1 0x0 PRP2 0x0
00:22:30.008  [2024-12-09 04:12:44.173660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:44.173672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:30.008  [2024-12-09 04:12:44.173683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:30.008  [2024-12-09 04:12:44.173693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84080 len:8 PRP1 0x0 PRP2 0x0
00:22:30.008  [2024-12-09 04:12:44.173706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:44.173719] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:30.008  [2024-12-09 04:12:44.173729] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:30.008  [2024-12-09 04:12:44.173740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84088 len:8 PRP1 0x0 PRP2 0x0
00:22:30.008  [2024-12-09 04:12:44.173756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:44.173770] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:30.008  [2024-12-09 04:12:44.173781] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:30.008  [2024-12-09 04:12:44.173792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84096 len:8 PRP1 0x0 PRP2 0x0
00:22:30.008  [2024-12-09 04:12:44.173810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:44.173823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:30.008  [2024-12-09 04:12:44.173834] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:30.008  [2024-12-09 04:12:44.173845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84104 len:8 PRP1 0x0 PRP2 0x0
00:22:30.008  [2024-12-09 04:12:44.173858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:44.173871] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:30.008  [2024-12-09 04:12:44.173882] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:30.008  [2024-12-09 04:12:44.173892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84112 len:8 PRP1 0x0 PRP2 0x0
00:22:30.008  [2024-12-09 04:12:44.173905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:44.173918] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:30.008  [2024-12-09 04:12:44.173929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:30.008  [2024-12-09 04:12:44.173940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84120 len:8 PRP1 0x0 PRP2 0x0
00:22:30.008  [2024-12-09 04:12:44.173952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:44.173966] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:30.008  [2024-12-09 04:12:44.173989] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:30.008  [2024-12-09 04:12:44.174000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84128 len:8 PRP1 0x0 PRP2 0x0
00:22:30.008  [2024-12-09 04:12:44.174012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:44.174025] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:30.008  [2024-12-09 04:12:44.174036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:30.008  [2024-12-09 04:12:44.174055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84136 len:8 PRP1 0x0 PRP2 0x0
00:22:30.008  [2024-12-09 04:12:44.174068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:44.174081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:30.008  [2024-12-09 04:12:44.174091] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:30.008  [2024-12-09 04:12:44.174103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84144 len:8 PRP1 0x0 PRP2 0x0
00:22:30.008  [2024-12-09 04:12:44.174116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:44.174128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:30.008  [2024-12-09 04:12:44.174143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:30.008  [2024-12-09 04:12:44.174154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84152 len:8 PRP1 0x0 PRP2 0x0
00:22:30.008  [2024-12-09 04:12:44.174167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:44.174180] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:30.008  [2024-12-09 04:12:44.174191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:30.008  [2024-12-09 04:12:44.174202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84160 len:8 PRP1 0x0 PRP2 0x0
00:22:30.008  [2024-12-09 04:12:44.174219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:44.174315] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:22:30.008  [2024-12-09 04:12:44.174356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:30.008  [2024-12-09 04:12:44.174375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:44.174390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:30.008  [2024-12-09 04:12:44.174404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:44.174418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:30.008  [2024-12-09 04:12:44.174431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:44.174446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:30.008  [2024-12-09 04:12:44.174459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:44.174472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:22:30.008  [2024-12-09 04:12:44.174523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf9180 (9): Bad file descriptor
00:22:30.008  [2024-12-09 04:12:44.177935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:30.008  [2024-12-09 04:12:44.327739] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:22:30.008       7898.50 IOPS,    30.85 MiB/s
[2024-12-09T03:12:58.584Z]      8184.33 IOPS,    31.97 MiB/s
[2024-12-09T03:12:58.584Z]      8283.25 IOPS,    32.36 MiB/s
[2024-12-09T03:12:58.584Z] [2024-12-09 04:12:47.867462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:112616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.008  [2024-12-09 04:12:47.867505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:47.867532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:112624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.008  [2024-12-09 04:12:47.867548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:47.867565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:112632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.008  [2024-12-09 04:12:47.867580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:47.867597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:112640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.008  [2024-12-09 04:12:47.867617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:47.867649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:112648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.008  [2024-12-09 04:12:47.867663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:47.867677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.008  [2024-12-09 04:12:47.867706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:47.867721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:112664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.008  [2024-12-09 04:12:47.867734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:47.867749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.008  [2024-12-09 04:12:47.867762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:47.867776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:112680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.008  [2024-12-09 04:12:47.867789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:47.867803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.008  [2024-12-09 04:12:47.867816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:47.867830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:112696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.008  [2024-12-09 04:12:47.867843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:47.867857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:112704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.008  [2024-12-09 04:12:47.867870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:47.867885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:112712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.008  [2024-12-09 04:12:47.867899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:47.867914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:112720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.008  [2024-12-09 04:12:47.867927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:47.867943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:112728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.008  [2024-12-09 04:12:47.867957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:47.867973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:112736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.008  [2024-12-09 04:12:47.867987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:47.868006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:112744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.008  [2024-12-09 04:12:47.868019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:47.868034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:112752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.008  [2024-12-09 04:12:47.868048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:47.868063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:112760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.008  [2024-12-09 04:12:47.868077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:47.868091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:112768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.008  [2024-12-09 04:12:47.868104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:47.868119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:112776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.008  [2024-12-09 04:12:47.868132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.008  [2024-12-09 04:12:47.868146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:112784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.868159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.868173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:112792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.868186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.868201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:112800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.868213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.868228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:112808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.868241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.868255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:112816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.868269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.868310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:112824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.868324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.868339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:112832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.868352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.868367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:112840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.868385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.868400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:112848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.868413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.868428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:112856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.868443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.868458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:112864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.868471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.868486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:112872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.868500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.868516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:112880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.868530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.868545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:112888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.868558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.868573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:112896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.868601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.868616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:112904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.868629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.868643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:112912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.868656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.868670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:112920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.868683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.868698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.868711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.868724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:112936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.868737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.868756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:112944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.868769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.868784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:112952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.868797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.868811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:112960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.868824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.868838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:112968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.868851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.868865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:112976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.868878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.868893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:112984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.868906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.868920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.868933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.868947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:113000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.868960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.868975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:113008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.868988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.869003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:113016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.869016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.869030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:113024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.869043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.869057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:113032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.869070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.869084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:113040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.869101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.869116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:113048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.869129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.869143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:113056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.869156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.869170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:113064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.869183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.869198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:113072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.869211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.869241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:113080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.869255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.869270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:113088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.869307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.869324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:113096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.869337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.869353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:113104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.869366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.869382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:113112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.869396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.869411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:113120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.869425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.869440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:113128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.869454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.869469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:113136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.869483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.869503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:113144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.869518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.869533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:113152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.869547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.869563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:113160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.869576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.869607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:113168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.869620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.869636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:113176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.869649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.869664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:113184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.869677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.869691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:113192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.869705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.869720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:113200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.869733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.869748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:113208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.869761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.869777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:113216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.869790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.869805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:113224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.009  [2024-12-09 04:12:47.869818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.869833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:113232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.009  [2024-12-09 04:12:47.869847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.869873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:113240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.009  [2024-12-09 04:12:47.869890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.869905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:113248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.009  [2024-12-09 04:12:47.869919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.869934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.009  [2024-12-09 04:12:47.869949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.009  [2024-12-09 04:12:47.869964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:113264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.869978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.869993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:113272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.870007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.870028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:113280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.870041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.870057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:113288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.870071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.870086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:113296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.870099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.870114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:113304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.870128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.870143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:113312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.870156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.870171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:113320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.870186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.870200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:113328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.870214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.870229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:113336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.870243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.870280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:113344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.870301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.870317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.870332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.870348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:113360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.870363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.870379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:113368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.870393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.870409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:113376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.870423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.870439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:113384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.870453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.870475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:113392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.870490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.870505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:113400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.870520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.870535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:113408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.870549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.870565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:113416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.870593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.870609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:113424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.870623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.870638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:113432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.870658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.870673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:113440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.870686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.870706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:113448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.870720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.870735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:113456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.870748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.870764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.870777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.870792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:113472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.870806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.870821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:113480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.870834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.870849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.870863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.870877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:113496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.870891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.870906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:113504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.870920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.870935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:113512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.870949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.870979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:113520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.870993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.871008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:113528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.871022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.871037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:113536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.871050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.871065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:113544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.871082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.871098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:113552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.871117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.871133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:113560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.871146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.871161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:113568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.871175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.871189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:113576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.871203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.871218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.871232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.871247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:113592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.871261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.871297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.871313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.871329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:113608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.871344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.871359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:113616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.871374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.871389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:113624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:47.871403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.871449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:30.010  [2024-12-09 04:12:47.871465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:30.010  [2024-12-09 04:12:47.871477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113632 len:8 PRP1 0x0 PRP2 0x0
00:22:30.010  [2024-12-09 04:12:47.871491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.871556] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:22:30.010  [2024-12-09 04:12:47.871617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:30.010  [2024-12-09 04:12:47.871636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.871666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:30.010  [2024-12-09 04:12:47.871680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.871694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:30.010  [2024-12-09 04:12:47.871708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.871722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:30.010  [2024-12-09 04:12:47.871736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:47.871754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:22:30.010  [2024-12-09 04:12:47.871818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf9180 (9): Bad file descriptor
00:22:30.010  [2024-12-09 04:12:47.875188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:22:30.010  [2024-12-09 04:12:47.906417] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:22:30.010       8252.00 IOPS,    32.23 MiB/s
[2024-12-09T03:12:58.586Z]      8270.00 IOPS,    32.30 MiB/s
[2024-12-09T03:12:58.586Z]      8307.71 IOPS,    32.45 MiB/s
[2024-12-09T03:12:58.586Z]      8345.88 IOPS,    32.60 MiB/s
[2024-12-09T03:12:58.586Z]      8376.00 IOPS,    32.72 MiB/s
[2024-12-09T03:12:58.586Z] [2024-12-09 04:12:52.477937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:52.478001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:52.478030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:52.478048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:52.478065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:41552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:52.478080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:52.478096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:52.478111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:52.478128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:41568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:52.478144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:52.478160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:52.478194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:52.478211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:41584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:52.478251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:52.478268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:41592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:52.478291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:52.478322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:41600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:52.478337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:52.478352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:41608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:52.478367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:52.478383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:41616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:52.478397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:52.478413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:41624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:52.478427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:52.478443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:41632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:52.478458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.010  [2024-12-09 04:12:52.478474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:41640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.010  [2024-12-09 04:12:52.478488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.478503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:41648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.478518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.478534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:41656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.478548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.478564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:41664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.478578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.478594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:41672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.478626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.478642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:41680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.478656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.478690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:41688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.478705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.478720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:41696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.478734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.478749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:41704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.478763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.478778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.478792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.478807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:41720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.478821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.478836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:41728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.478850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.478865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:41736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.478879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.478894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:41744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.478907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.478921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:41752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.478935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.478950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.478963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.478978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:41768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.478991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.479006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:41776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.479020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.479035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:41784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.479052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.479068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:41792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.479081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.479096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:41800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.479110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.479125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:41808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.479138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.479153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:41816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.479166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.479181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:41824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.479195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.479209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:41832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.479222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.479237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:41840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.479251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.479266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:41848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.479303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.479320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:41856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.479334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.479366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:41864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.479381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.479396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:41872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.479410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.479425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:41880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.479439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.479455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:41888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.479474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.479490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:41896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.479504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.479520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:41904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.479535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.479550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:41912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.479565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.479580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:41920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.479595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.479611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:41928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.479625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.479640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.479655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.479671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:41944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.479685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.479700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:41952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.479714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.479730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:41960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.479744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.479759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:41968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.479774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.479789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:41976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.479803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.479818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:41984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.011  [2024-12-09 04:12:52.479833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.479853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:41032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.011  [2024-12-09 04:12:52.479883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.479899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:41040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.011  [2024-12-09 04:12:52.479912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.479928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:41048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.011  [2024-12-09 04:12:52.479941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.479956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:41056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.011  [2024-12-09 04:12:52.479969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.479984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:41064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.011  [2024-12-09 04:12:52.479998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.480013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:41072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.011  [2024-12-09 04:12:52.480027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.480042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:41080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.011  [2024-12-09 04:12:52.480055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.480070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:41088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.011  [2024-12-09 04:12:52.480084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.480099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:41096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.011  [2024-12-09 04:12:52.480113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.480143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:41104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.011  [2024-12-09 04:12:52.480157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.480173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:41112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.011  [2024-12-09 04:12:52.480187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.480202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:41120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.011  [2024-12-09 04:12:52.480215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.480231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:41128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.011  [2024-12-09 04:12:52.480249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.480265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:41136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.011  [2024-12-09 04:12:52.480287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.480303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:41144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.011  [2024-12-09 04:12:52.480317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.480334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:41152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.011  [2024-12-09 04:12:52.480348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.480365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:41160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.011  [2024-12-09 04:12:52.480378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.480394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:41168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.011  [2024-12-09 04:12:52.480408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.480424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:41176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.011  [2024-12-09 04:12:52.480438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.011  [2024-12-09 04:12:52.480453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:41184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.011  [2024-12-09 04:12:52.480467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.480482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:41192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.480496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.480511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:41200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.480532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.480548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:41208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.480562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.480577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.012  [2024-12-09 04:12:52.480592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.480607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:41216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.480621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.480640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:41224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.480655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.480671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.480685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.480717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:41240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.480731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.480746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:41248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.480760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.480775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:41256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.480788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.480803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:41264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.480817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.480833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:41272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.480847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.480862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:41280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.480876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.480891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:42000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.012  [2024-12-09 04:12:52.480905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.480921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:42008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.012  [2024-12-09 04:12:52.480934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.480949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:42016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.012  [2024-12-09 04:12:52.480963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.480978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:42024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.012  [2024-12-09 04:12:52.480992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.481007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:42032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.012  [2024-12-09 04:12:52.481021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.481040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:42040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.012  [2024-12-09 04:12:52.481055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.481070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:41288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.481083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.481098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:41296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.481111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.481126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:41304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.481140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.481155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:41312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.481169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.481184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:41320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.481197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.481212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:41328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.481226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.481241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:41336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.481254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.481269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:41344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.481306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.481322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:41352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.481337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.481353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:41360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.481367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.481383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:41368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.481408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.481424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:41376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.481442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.481458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:41384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.481473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.481489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:41392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.481503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.481519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:41400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.481535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.481551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:42048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.012  [2024-12-09 04:12:52.481565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.481581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:41408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.481595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.481610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:41416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.481625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.481641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:41424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.481655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.481670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:41432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.481684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.481701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:41440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.481715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.481731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:41448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.481745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.481761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:41456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.481775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.481791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:41464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.481805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.481825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:41472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.481839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.481855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.481870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.481886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:41488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.481900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.481916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:41496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.481930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.481945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:41504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.481960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.481975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:41512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.481989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.482005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:41520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.012  [2024-12-09 04:12:52.482019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.482034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd1c010 is same with the state(6) to be set
00:22:30.012  [2024-12-09 04:12:52.482051] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:30.012  [2024-12-09 04:12:52.482063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:30.012  [2024-12-09 04:12:52.482075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41528 len:8 PRP1 0x0 PRP2 0x0
00:22:30.012  [2024-12-09 04:12:52.482088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.482151] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:22:30.012  [2024-12-09 04:12:52.482191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:30.012  [2024-12-09 04:12:52.482209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.482225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:30.012  [2024-12-09 04:12:52.482250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.482285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:30.012  [2024-12-09 04:12:52.482301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.482320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:30.012  [2024-12-09 04:12:52.482334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.012  [2024-12-09 04:12:52.482348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:22:30.012  [2024-12-09 04:12:52.482405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf9180 (9): Bad file descriptor
00:22:30.012  [2024-12-09 04:12:52.485733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:22:30.012  [2024-12-09 04:12:52.555053] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:22:30.012       8317.30 IOPS,    32.49 MiB/s
[2024-12-09T03:12:58.588Z]      8311.18 IOPS,    32.47 MiB/s
[2024-12-09T03:12:58.588Z]      8310.83 IOPS,    32.46 MiB/s
[2024-12-09T03:12:58.588Z]      8305.85 IOPS,    32.44 MiB/s
[2024-12-09T03:12:58.588Z]      8302.29 IOPS,    32.43 MiB/s
[2024-12-09T03:12:58.588Z]      8299.93 IOPS,    32.42 MiB/s
00:22:30.012                                                                                                  Latency(us)
00:22:30.012  
[2024-12-09T03:12:58.588Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:30.012  Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:30.012  	 Verification LBA range: start 0x0 length 0x4000
00:22:30.012  	 NVMe0n1             :      15.04    8276.82      32.33     684.80     0.00   14218.12     570.41   43690.67
00:22:30.012  
[2024-12-09T03:12:58.588Z]  ===================================================================================================================
00:22:30.012  
[2024-12-09T03:12:58.588Z]  Total                       :               8276.82      32.33     684.80     0.00   14218.12     570.41   43690.67
00:22:30.012  Received shutdown signal, test time was about 15.000000 seconds
00:22:30.012  
00:22:30.012                                                                                                  Latency(us)
00:22:30.012  
[2024-12-09T03:12:58.588Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:30.012  
[2024-12-09T03:12:58.588Z]  ===================================================================================================================
00:22:30.012  
[2024-12-09T03:12:58.588Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:22:30.012    04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:22:30.012   04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:22:30.012   04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:22:30.012   04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=304067
00:22:30.012   04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:22:30.012   04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 304067 /var/tmp/bdevperf.sock
00:22:30.012   04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 304067 ']'
00:22:30.012   04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:30.012   04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:30.012   04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:30.012  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:30.012   04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:30.012   04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:22:30.270   04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:30.270   04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:22:30.270   04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:22:30.526  [2024-12-09 04:12:58.857038] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:22:30.526   04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:22:30.783  [2024-12-09 04:12:59.121745] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:22:30.783   04:12:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:22:31.347  NVMe0n1
00:22:31.347   04:12:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:22:31.604  
00:22:31.605   04:13:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:22:32.168  
00:22:32.169   04:13:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:32.169   04:13:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:22:32.426   04:13:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:32.686   04:13:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:22:35.966   04:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:35.966   04:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:22:35.966   04:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=304737
00:22:35.966   04:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:35.966   04:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 304737
00:22:36.900  {
00:22:36.900    "results": [
00:22:36.900      {
00:22:36.900        "job": "NVMe0n1",
00:22:36.900        "core_mask": "0x1",
00:22:36.900        "workload": "verify",
00:22:36.900        "status": "finished",
00:22:36.900        "verify_range": {
00:22:36.900          "start": 0,
00:22:36.900          "length": 16384
00:22:36.900        },
00:22:36.900        "queue_depth": 128,
00:22:36.900        "io_size": 4096,
00:22:36.900        "runtime": 1.012367,
00:22:36.900        "iops": 8487.040766836533,
00:22:36.900        "mibps": 33.15250299545521,
00:22:36.900        "io_failed": 0,
00:22:36.900        "io_timeout": 0,
00:22:36.900        "avg_latency_us": 14983.00993999586,
00:22:36.900        "min_latency_us": 1953.9437037037037,
00:22:36.900        "max_latency_us": 14854.826666666666
00:22:36.900      }
00:22:36.900    ],
00:22:36.900    "core_count": 1
00:22:36.900  }
00:22:36.900   04:13:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:36.900  [2024-12-09 04:12:58.365050] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:22:36.900  [2024-12-09 04:12:58.365147] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid304067 ]
00:22:36.900  [2024-12-09 04:12:58.433880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:36.900  [2024-12-09 04:12:58.490322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:36.900  [2024-12-09 04:13:01.014247] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:22:36.900  [2024-12-09 04:13:01.014374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:36.900  [2024-12-09 04:13:01.014398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.900  [2024-12-09 04:13:01.014417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:36.900  [2024-12-09 04:13:01.014430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.900  [2024-12-09 04:13:01.014444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:36.900  [2024-12-09 04:13:01.014458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.900  [2024-12-09 04:13:01.014473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:36.900  [2024-12-09 04:13:01.014487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.900  [2024-12-09 04:13:01.014501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
00:22:36.900  [2024-12-09 04:13:01.014551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
00:22:36.900  [2024-12-09 04:13:01.014585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15aa180 (9): Bad file descriptor
00:22:36.900  [2024-12-09 04:13:01.060540] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:22:36.900  Running I/O for 1 seconds...
00:22:36.900       8400.00 IOPS,    32.81 MiB/s
00:22:36.900                                                                                                  Latency(us)
00:22:36.900  
[2024-12-09T03:13:05.476Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:36.900  Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:36.900  	 Verification LBA range: start 0x0 length 0x4000
00:22:36.900  	 NVMe0n1             :       1.01    8487.04      33.15       0.00     0.00   14983.01    1953.94   14854.83
00:22:36.900  
[2024-12-09T03:13:05.476Z]  ===================================================================================================================
00:22:36.900  
[2024-12-09T03:13:05.476Z]  Total                       :               8487.04      33.15       0.00     0.00   14983.01    1953.94   14854.83
00:22:36.900   04:13:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:36.900   04:13:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:22:37.157   04:13:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:37.415   04:13:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:37.415   04:13:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:22:37.672   04:13:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:38.236   04:13:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:22:41.511   04:13:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:41.511   04:13:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:22:41.511   04:13:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 304067
00:22:41.511   04:13:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 304067 ']'
00:22:41.511   04:13:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 304067
00:22:41.511    04:13:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:22:41.511   04:13:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:41.511    04:13:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 304067
00:22:41.511   04:13:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:41.511   04:13:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:41.511   04:13:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 304067'
00:22:41.511  killing process with pid 304067
00:22:41.511   04:13:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 304067
00:22:41.511   04:13:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 304067
00:22:41.511   04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:22:41.511   04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:42.073   04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:22:42.073   04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:42.073   04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:22:42.073   04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:42.073   04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync
00:22:42.073   04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:42.073   04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e
00:22:42.073   04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:42.074   04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:42.074  rmmod nvme_tcp
00:22:42.074  rmmod nvme_fabrics
00:22:42.074  rmmod nvme_keyring
00:22:42.074   04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:42.074   04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e
00:22:42.074   04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0
00:22:42.074   04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 301801 ']'
00:22:42.074   04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 301801
00:22:42.074   04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 301801 ']'
00:22:42.074   04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 301801
00:22:42.074    04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:22:42.074   04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:42.074    04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 301801
00:22:42.074   04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:22:42.074   04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:22:42.074   04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 301801'
00:22:42.074  killing process with pid 301801
00:22:42.074   04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 301801
00:22:42.074   04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 301801
00:22:42.332   04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:42.332   04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:22:42.332   04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:22:42.332   04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr
00:22:42.332   04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save
00:22:42.332   04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:22:42.332   04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore
00:22:42.332   04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:42.332   04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns
00:22:42.332   04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:42.332   04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:42.332    04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:44.237   04:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:22:44.237  
00:22:44.237  real	0m35.880s
00:22:44.237  user	2m6.032s
00:22:44.237  sys	0m6.176s
00:22:44.237   04:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:44.237   04:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:22:44.237  ************************************
00:22:44.237  END TEST nvmf_failover
00:22:44.237  ************************************
00:22:44.496   04:13:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:22:44.496   04:13:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:22:44.496   04:13:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:44.496   04:13:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:22:44.496  ************************************
00:22:44.496  START TEST nvmf_host_discovery
00:22:44.496  ************************************
00:22:44.496   04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:22:44.496  * Looking for test storage...
00:22:44.496  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:22:44.496    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:22:44.496     04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version
00:22:44.496     04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:22:44.496    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:22:44.496    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:22:44.496    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l
00:22:44.496    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l
00:22:44.496    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-:
00:22:44.496    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1
00:22:44.496    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-:
00:22:44.496    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2
00:22:44.496    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<'
00:22:44.496    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2
00:22:44.496    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1
00:22:44.496    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:22:44.496    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in
00:22:44.496    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1
00:22:44.496    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 ))
00:22:44.496    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:22:44.496     04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1
00:22:44.496     04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1
00:22:44.496     04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:22:44.496     04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1
00:22:44.496    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1
00:22:44.496     04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2
00:22:44.496     04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2
00:22:44.496     04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:22:44.496     04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2
00:22:44.496    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2
00:22:44.496    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:22:44.496    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:22:44.496    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0
00:22:44.496    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:22:44.496    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:22:44.496  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:44.496  		--rc genhtml_branch_coverage=1
00:22:44.496  		--rc genhtml_function_coverage=1
00:22:44.496  		--rc genhtml_legend=1
00:22:44.496  		--rc geninfo_all_blocks=1
00:22:44.496  		--rc geninfo_unexecuted_blocks=1
00:22:44.496  		
00:22:44.496  		'
00:22:44.496    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:22:44.496  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:44.496  		--rc genhtml_branch_coverage=1
00:22:44.496  		--rc genhtml_function_coverage=1
00:22:44.496  		--rc genhtml_legend=1
00:22:44.496  		--rc geninfo_all_blocks=1
00:22:44.496  		--rc geninfo_unexecuted_blocks=1
00:22:44.496  		
00:22:44.496  		'
00:22:44.496    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:22:44.496  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:44.496  		--rc genhtml_branch_coverage=1
00:22:44.496  		--rc genhtml_function_coverage=1
00:22:44.496  		--rc genhtml_legend=1
00:22:44.496  		--rc geninfo_all_blocks=1
00:22:44.496  		--rc geninfo_unexecuted_blocks=1
00:22:44.496  		
00:22:44.496  		'
00:22:44.496    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:22:44.496  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:44.496  		--rc genhtml_branch_coverage=1
00:22:44.496  		--rc genhtml_function_coverage=1
00:22:44.496  		--rc genhtml_legend=1
00:22:44.496  		--rc geninfo_all_blocks=1
00:22:44.496  		--rc geninfo_unexecuted_blocks=1
00:22:44.496  		
00:22:44.496  		'
00:22:44.496   04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:22:44.496     04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s
00:22:44.496    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:22:44.496    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:22:44.497    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:22:44.497    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:22:44.497    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:22:44.497    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:22:44.497    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:22:44.497    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:22:44.497    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:22:44.497     04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:22:44.497    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:22:44.497    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:22:44.497    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:22:44.497    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:22:44.497    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:22:44.497    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:22:44.497    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:22:44.497     04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob
00:22:44.497     04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:22:44.497     04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:22:44.497     04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:22:44.497      04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:44.497      04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:44.497      04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:44.497      04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH
00:22:44.497      04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:44.497    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0
00:22:44.497    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:22:44.497    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:22:44.497    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:22:44.497    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:22:44.497    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:22:44.497    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:22:44.497  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:22:44.497    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:22:44.497    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:22:44.497    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0
00:22:44.497   04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
00:22:44.497   04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009
00:22:44.497   04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:22:44.497   04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode
00:22:44.497   04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test
00:22:44.497   04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock
00:22:44.497   04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit
00:22:44.497   04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:22:44.497   04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:22:44.497   04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs
00:22:44.497   04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no
00:22:44.497   04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns
00:22:44.497   04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:44.497   04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:44.497    04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:44.497   04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:22:44.497   04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:22:44.497   04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable
00:22:44.497   04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:46.403   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:22:46.403   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=()
00:22:46.403   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs
00:22:46.403   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=()
00:22:46.403   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:22:46.403   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=()
00:22:46.403   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=()
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=()
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=()
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=()
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:22:46.404  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:22:46.404  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]]
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:22:46.404  Found net devices under 0000:0a:00.0: cvl_0_0
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]]
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:22:46.404  Found net devices under 0000:0a:00.1: cvl_0_1
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:22:46.404   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:22:46.662   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:22:46.662   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:22:46.662   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:22:46.662   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:22:46.662   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:22:46.662   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:22:46.662   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:22:46.662   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:22:46.662   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:22:46.662   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:22:46.662   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:22:46.662   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:22:46.662   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:22:46.662   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:22:46.662   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:22:46.662   04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:22:46.662   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:22:46.662   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:46.662   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:22:46.663   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:46.663   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:46.663   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:46.663   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:22:46.663   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:22:46.663  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:46.663  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms
00:22:46.663  
00:22:46.663  --- 10.0.0.2 ping statistics ---
00:22:46.663  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:46.663  rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms
00:22:46.663   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:46.663  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:46.663  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms
00:22:46.663  
00:22:46.663  --- 10.0.0.1 ping statistics ---
00:22:46.663  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:46.663  rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms
00:22:46.663   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:46.663   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0
00:22:46.663   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:22:46.663   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:46.663   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:22:46.663   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:22:46.663   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:46.663   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:22:46.663   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:22:46.663   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2
00:22:46.663   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:46.663   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:46.663   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:46.663   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=307466
00:22:46.663   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:22:46.663   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 307466
00:22:46.663   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 307466 ']'
00:22:46.663   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:46.663   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:46.663   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:46.663  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:46.663   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:46.663   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:46.921  [2024-12-09 04:13:15.263974] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:22:46.921  [2024-12-09 04:13:15.264065] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:46.921  [2024-12-09 04:13:15.338208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:46.921  [2024-12-09 04:13:15.397098] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:46.921  [2024-12-09 04:13:15.397170] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:46.921  [2024-12-09 04:13:15.397184] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:46.921  [2024-12-09 04:13:15.397196] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:46.921  [2024-12-09 04:13:15.397205] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:46.921  [2024-12-09 04:13:15.397798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:22:47.178   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:47.178   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0
00:22:47.178   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:22:47.178   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:47.178   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:47.178   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:47.178   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:22:47.178   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:47.178   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:47.178  [2024-12-09 04:13:15.546189] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:47.178   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:47.178   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
00:22:47.178   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:47.178   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:47.178  [2024-12-09 04:13:15.554446] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:22:47.178   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:47.178   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512
00:22:47.178   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:47.178   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:47.178  null0
00:22:47.178   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:47.178   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512
00:22:47.178   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:47.178   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:47.178  null1
00:22:47.178   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:47.178   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine
00:22:47.178   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:47.178   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:47.178   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:47.178   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=307491
00:22:47.178   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock
00:22:47.178   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 307491 /tmp/host.sock
00:22:47.178   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 307491 ']'
00:22:47.178   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock
00:22:47.178   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:47.178   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:22:47.178  Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:22:47.178   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:47.178   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:47.178  [2024-12-09 04:13:15.628481] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:22:47.178  [2024-12-09 04:13:15.628572] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid307491 ]
00:22:47.178  [2024-12-09 04:13:15.693793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:47.178  [2024-12-09 04:13:15.750540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:47.436   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:47.436   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0
00:22:47.436   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:22:47.436   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
00:22:47.436   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:47.436   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:47.436   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:47.436   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
00:22:47.436   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:47.436   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:47.436   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:47.436   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0
00:22:47.436    04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names
00:22:47.436    04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:22:47.436    04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:47.436    04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:22:47.436    04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:47.436    04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:22:47.436    04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:22:47.436    04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:47.436   04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]]
00:22:47.436    04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list
00:22:47.436    04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:47.436    04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:22:47.436    04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:47.437    04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:47.437    04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:22:47.437    04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:22:47.437    04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:47.437   04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]]
00:22:47.437   04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:22:47.437   04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:47.437   04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:47.437   04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:47.695    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names
00:22:47.695    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:22:47.695    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:22:47.695    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:47.695    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:47.695    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:22:47.695    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:22:47.695    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:47.695   04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]]
00:22:47.695    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list
00:22:47.695    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:47.695    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:47.695    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:22:47.695    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:47.695    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:22:47.695    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:22:47.695    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:47.695   04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]]
00:22:47.695   04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:22:47.695   04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:47.695   04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:47.696   04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:47.696    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names
00:22:47.696    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:22:47.696    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:22:47.696    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:47.696    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:47.696    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:22:47.696    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:22:47.696    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:47.696   04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]]
00:22:47.696    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list
00:22:47.696    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:47.696    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:22:47.696    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:47.696    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:47.696    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:22:47.696    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:22:47.696    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:47.696   04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]]
00:22:47.696   04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:22:47.696   04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:47.696   04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:47.696  [2024-12-09 04:13:16.172011] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:47.696   04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:47.696    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names
00:22:47.696    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:22:47.696    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:47.696    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:22:47.696    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:47.696    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:22:47.696    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:22:47.696    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:47.696   04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]]
00:22:47.696    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list
00:22:47.696    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:47.696    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:22:47.696    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:47.696    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:47.696    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:22:47.696    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:22:47.696    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:47.696   04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:22:47.696   04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:22:47.696   04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:22:47.696   04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:22:47.696   04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:22:47.696   04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:22:47.696   04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:47.696   04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:22:47.696    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:22:47.696     04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:22:47.696     04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:22:47.696     04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:47.696     04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:47.696     04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:47.955    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:22:47.955    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0
00:22:47.955    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:22:47.955   04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:22:47.955   04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:22:47.955   04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:47.955   04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:47.955   04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:47.955   04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:22:47.955   04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:22:47.955   04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:22:47.955   04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:47.955   04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:22:47.955     04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:22:47.955     04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:22:47.955     04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:22:47.955     04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:47.955     04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:47.955     04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:22:47.955     04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:22:47.955     04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:47.955    04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]]
00:22:47.955   04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
00:22:48.520  [2024-12-09 04:13:16.973386] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:22:48.520  [2024-12-09 04:13:16.973412] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:22:48.520  [2024-12-09 04:13:16.973436] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:22:48.520  [2024-12-09 04:13:17.060730] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:22:48.777  [2024-12-09 04:13:17.243805] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420
00:22:48.777  [2024-12-09 04:13:17.244748] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1573aa0:1 started.
00:22:48.777  [2024-12-09 04:13:17.246525] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:22:48.777  [2024-12-09 04:13:17.246548] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:22:48.777  [2024-12-09 04:13:17.292989] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1573aa0 was disconnected and freed. delete nvme_qpair.
00:22:48.777   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:48.777   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:22:48.777     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:22:48.777     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:22:48.777     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:22:48.777     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:48.777     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:22:48.777     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:48.777     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:22:48.777     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:49.035    04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:49.035   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:22:49.035   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:22:49.035   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:22:49.035   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:22:49.035   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:49.035   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:22:49.035     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:22:49.035     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:49.035     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:22:49.035     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:49.035     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:22:49.035     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:49.035     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:22:49.035     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:49.035    04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
00:22:49.035   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:22:49.035   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:22:49.035   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:22:49.035   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:22:49.035   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:49.035   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]'
00:22:49.035     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:22:49.035     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:22:49.035     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:22:49.035     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:49.035     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:22:49.035     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:49.035     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:22:49.035     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:49.035    04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]]
00:22:49.035   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:22:49.035   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1
00:22:49.035   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:22:49.035   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:22:49.035   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:22:49.035   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:22:49.035   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:49.035   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:22:49.035    04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:22:49.035     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:22:49.035     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:22:49.036     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:49.036     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:49.036     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:49.036    04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:22:49.036    04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1
00:22:49.036    04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:22:49.036   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:22:49.036   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:22:49.036   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:49.036   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:49.036  [2024-12-09 04:13:17.516359] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1542230:1 started.
00:22:49.036   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:49.036   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:22:49.036   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:22:49.036   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:22:49.036   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:49.036   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:22:49.036     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:22:49.036     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:49.036     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:22:49.036     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:49.036     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:49.036     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:22:49.036     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:22:49.036  [2024-12-09 04:13:17.523431] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1542230 was disconnected and freed. delete nvme_qpair.
00:22:49.036     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:49.036    04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:22:49.036   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:22:49.036   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1
00:22:49.036   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:22:49.036   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:22:49.036   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:22:49.036   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:22:49.036   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:49.036   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:22:49.036    04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:22:49.036     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:22:49.036     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:22:49.036     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:49.036     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:49.036     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:49.036    04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:22:49.036    04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:22:49.036    04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:22:49.036   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:22:49.036   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
00:22:49.036   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:49.036   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:49.036  [2024-12-09 04:13:17.600428] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:22:49.036  [2024-12-09 04:13:17.600647] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:22:49.036  [2024-12-09 04:13:17.600678] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:22:49.036   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:49.036   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:22:49.036   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:22:49.036   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:22:49.036   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:49.036   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:22:49.036     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:22:49.036     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:22:49.036     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:22:49.036     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:49.036     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:49.036     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:22:49.036     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:22:49.293     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:49.293    04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:49.293   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:22:49.293   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:22:49.293   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:22:49.293   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:22:49.293   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:49.293   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:22:49.293     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:22:49.293     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:49.293     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:22:49.293     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:49.293     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:49.293     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:22:49.293     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:22:49.293     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:49.293    04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:22:49.293   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:22:49.293   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:22:49.293   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:22:49.293   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:22:49.293   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:49.293   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:22:49.293     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:22:49.293     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:22:49.293     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:49.293     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:22:49.293     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:49.293     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:22:49.293     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:22:49.293  [2024-12-09 04:13:17.686929] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0
00:22:49.293     04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:49.293    04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]]
00:22:49.293   04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
00:22:49.293  [2024-12-09 04:13:17.787875] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421
00:22:49.293  [2024-12-09 04:13:17.787930] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:22:49.293  [2024-12-09 04:13:17.787945] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:22:49.293  [2024-12-09 04:13:17.787953] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:22:50.223   04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:50.223   04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:22:50.223     04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:22:50.223     04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:22:50.223     04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:50.223     04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:22:50.223     04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:50.223     04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:22:50.223     04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:22:50.223     04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:50.223    04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:22:50.223   04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:22:50.223   04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0
00:22:50.223   04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:22:50.223   04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:22:50.223   04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:22:50.223   04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:22:50.223   04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:50.223   04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:22:50.223    04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:22:50.223     04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:22:50.223     04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:22:50.223     04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:50.223     04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:50.223     04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:50.482    04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:22:50.482    04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:22:50.482    04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:22:50.482   04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:22:50.482   04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:22:50.482   04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:50.482   04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:50.482  [2024-12-09 04:13:18.812822] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:22:50.482  [2024-12-09 04:13:18.812861] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:22:50.482   04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:50.482   04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:22:50.482   04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:22:50.482   04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:22:50.482   04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:50.482   04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:22:50.482  [2024-12-09 04:13:18.817862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:50.482  [2024-12-09 04:13:18.817896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:50.482     04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:22:50.482  [2024-12-09 04:13:18.817929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:50.482  [2024-12-09 04:13:18.817945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:50.482  [2024-12-09 04:13:18.817967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:50.482  [2024-12-09 04:13:18.817981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:50.482  [2024-12-09 04:13:18.817995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:50.482  [2024-12-09 04:13:18.818008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:50.482  [2024-12-09 04:13:18.818022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1544050 is same with the state(6) to be set
00:22:50.482     04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:22:50.482     04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:22:50.482     04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:50.482     04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:22:50.482     04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:50.482     04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:22:50.482  [2024-12-09 04:13:18.827854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1544050 (9): Bad file descriptor
00:22:50.482     04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:50.482  [2024-12-09 04:13:18.837894] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:22:50.482  [2024-12-09 04:13:18.837915] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:22:50.482  [2024-12-09 04:13:18.837928] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:22:50.482  [2024-12-09 04:13:18.837937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:22:50.482  [2024-12-09 04:13:18.837969] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:22:50.482  [2024-12-09 04:13:18.838149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:50.482  [2024-12-09 04:13:18.838179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1544050 with addr=10.0.0.2, port=4420
00:22:50.482  [2024-12-09 04:13:18.838196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1544050 is same with the state(6) to be set
00:22:50.482  [2024-12-09 04:13:18.838218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1544050 (9): Bad file descriptor
00:22:50.482  [2024-12-09 04:13:18.838239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:22:50.482  [2024-12-09 04:13:18.838287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:22:50.482  [2024-12-09 04:13:18.838306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:22:50.482  [2024-12-09 04:13:18.838320] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:22:50.482  [2024-12-09 04:13:18.838330] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:22:50.482  [2024-12-09 04:13:18.838338] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:22:50.482  [2024-12-09 04:13:18.848001] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:22:50.482  [2024-12-09 04:13:18.848021] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:22:50.482  [2024-12-09 04:13:18.848035] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:22:50.482  [2024-12-09 04:13:18.848042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:22:50.482  [2024-12-09 04:13:18.848067] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:22:50.482  [2024-12-09 04:13:18.848295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:50.482  [2024-12-09 04:13:18.848323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1544050 with addr=10.0.0.2, port=4420
00:22:50.482  [2024-12-09 04:13:18.848340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1544050 is same with the state(6) to be set
00:22:50.482  [2024-12-09 04:13:18.848363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1544050 (9): Bad file descriptor
00:22:50.482  [2024-12-09 04:13:18.848383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:22:50.482  [2024-12-09 04:13:18.848397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:22:50.482  [2024-12-09 04:13:18.848412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:22:50.482  [2024-12-09 04:13:18.848425] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:22:50.482  [2024-12-09 04:13:18.848434] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:22:50.482  [2024-12-09 04:13:18.848441] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:22:50.482    04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:50.482   04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:22:50.483   04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:22:50.483   04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:22:50.483   04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:22:50.483   04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:50.483   04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:22:50.483     04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:22:50.483     04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:50.483     04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:50.483     04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:22:50.483     04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:50.483     04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:22:50.483     04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:22:50.483  [2024-12-09 04:13:18.858100] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:22:50.483  [2024-12-09 04:13:18.858123] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:22:50.483  [2024-12-09 04:13:18.858132] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:22:50.483  [2024-12-09 04:13:18.858139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:22:50.483  [2024-12-09 04:13:18.858165] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:22:50.483  [2024-12-09 04:13:18.858351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:50.483  [2024-12-09 04:13:18.858390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1544050 with addr=10.0.0.2, port=4420
00:22:50.483  [2024-12-09 04:13:18.858409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1544050 is same with the state(6) to be set
00:22:50.483  [2024-12-09 04:13:18.858432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1544050 (9): Bad file descriptor
00:22:50.483  [2024-12-09 04:13:18.858453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:22:50.483  [2024-12-09 04:13:18.858467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:22:50.483  [2024-12-09 04:13:18.858481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:22:50.483  [2024-12-09 04:13:18.858494] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:22:50.483  [2024-12-09 04:13:18.858503] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:22:50.483  [2024-12-09 04:13:18.858511] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:22:50.483  [2024-12-09 04:13:18.868199] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:22:50.483  [2024-12-09 04:13:18.868222] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:22:50.483  [2024-12-09 04:13:18.868230] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:22:50.483  [2024-12-09 04:13:18.868237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:22:50.483  [2024-12-09 04:13:18.868284] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:22:50.483  [2024-12-09 04:13:18.868426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:50.483  [2024-12-09 04:13:18.868455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1544050 with addr=10.0.0.2, port=4420
00:22:50.483  [2024-12-09 04:13:18.868472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1544050 is same with the state(6) to be set
00:22:50.483  [2024-12-09 04:13:18.868496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1544050 (9): Bad file descriptor
00:22:50.483  [2024-12-09 04:13:18.868517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:22:50.483  [2024-12-09 04:13:18.868531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:22:50.483  [2024-12-09 04:13:18.868545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:22:50.483  [2024-12-09 04:13:18.868568] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:22:50.483  [2024-12-09 04:13:18.868577] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:22:50.483  [2024-12-09 04:13:18.868584] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:22:50.483  [2024-12-09 04:13:18.878318] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:22:50.483  [2024-12-09 04:13:18.878339] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:22:50.483  [2024-12-09 04:13:18.878347] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:22:50.483  [2024-12-09 04:13:18.878355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:22:50.483  [2024-12-09 04:13:18.878379] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:22:50.483  [2024-12-09 04:13:18.878551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:50.483  [2024-12-09 04:13:18.878579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1544050 with addr=10.0.0.2, port=4420
00:22:50.483  [2024-12-09 04:13:18.878596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1544050 is same with the state(6) to be set
00:22:50.483  [2024-12-09 04:13:18.878618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1544050 (9): Bad file descriptor
00:22:50.483  [2024-12-09 04:13:18.878639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:22:50.483  [2024-12-09 04:13:18.878652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:22:50.483  [2024-12-09 04:13:18.878666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:22:50.483  [2024-12-09 04:13:18.878678] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:22:50.483  [2024-12-09 04:13:18.878687] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:22:50.483  [2024-12-09 04:13:18.878695] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:22:50.483     04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:50.483  [2024-12-09 04:13:18.888414] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:22:50.483  [2024-12-09 04:13:18.888434] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:22:50.483  [2024-12-09 04:13:18.888443] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:22:50.483  [2024-12-09 04:13:18.888450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:22:50.483  [2024-12-09 04:13:18.888474] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:22:50.483  [2024-12-09 04:13:18.888714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:50.483  [2024-12-09 04:13:18.888741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1544050 with addr=10.0.0.2, port=4420
00:22:50.483  [2024-12-09 04:13:18.888758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1544050 is same with the state(6) to be set
00:22:50.483  [2024-12-09 04:13:18.888779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1544050 (9): Bad file descriptor
00:22:50.483  [2024-12-09 04:13:18.888799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:22:50.483  [2024-12-09 04:13:18.888813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:22:50.483  [2024-12-09 04:13:18.888827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:22:50.483  [2024-12-09 04:13:18.888839] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:22:50.483  [2024-12-09 04:13:18.888863] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:22:50.483  [2024-12-09 04:13:18.888871] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:22:50.483    04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:22:50.483   04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:22:50.483   04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:22:50.483   04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:22:50.483   04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:22:50.483   04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:50.483   04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]'
00:22:50.483     04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:22:50.483     04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:22:50.483     04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:22:50.483     04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:50.483     04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:50.483     04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:22:50.483     04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:22:50.483  [2024-12-09 04:13:18.898509] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:22:50.483  [2024-12-09 04:13:18.898533] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:22:50.483  [2024-12-09 04:13:18.898542] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:22:50.483  [2024-12-09 04:13:18.898575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:22:50.483  [2024-12-09 04:13:18.898601] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:22:50.483  [2024-12-09 04:13:18.898790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:50.484  [2024-12-09 04:13:18.898818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1544050 with addr=10.0.0.2, port=4420
00:22:50.484  [2024-12-09 04:13:18.898836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1544050 is same with the state(6) to be set
00:22:50.484  [2024-12-09 04:13:18.898858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1544050 (9): Bad file descriptor
00:22:50.484  [2024-12-09 04:13:18.898879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:22:50.484  [2024-12-09 04:13:18.898893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:22:50.484  [2024-12-09 04:13:18.898908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:22:50.484  [2024-12-09 04:13:18.898922] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:22:50.484  [2024-12-09 04:13:18.898933] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:22:50.484  [2024-12-09 04:13:18.898943] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:22:50.484  [2024-12-09 04:13:18.908636] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:22:50.484   04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:50.484  [2024-12-09 04:13:18.908678] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:22:50.484  [2024-12-09 04:13:18.908687] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:22:50.484  [2024-12-09 04:13:18.908694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:22:50.484  [2024-12-09 04:13:18.908718] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:22:50.484  [2024-12-09 04:13:18.908889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:50.484  [2024-12-09 04:13:18.908917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1544050 with addr=10.0.0.2, port=4420
00:22:50.484  [2024-12-09 04:13:18.908935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1544050 is same with the state(6) to be set
00:22:50.484  [2024-12-09 04:13:18.908958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1544050 (9): Bad file descriptor
00:22:50.484  [2024-12-09 04:13:18.908979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:22:50.484  [2024-12-09 04:13:18.908993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:22:50.484  [2024-12-09 04:13:18.909008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:22:50.484  [2024-12-09 04:13:18.909021] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:22:50.484  [2024-12-09 04:13:18.909030] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:22:50.484  [2024-12-09 04:13:18.909037] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:22:50.484  [2024-12-09 04:13:18.918752] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:22:50.484  [2024-12-09 04:13:18.918771] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:22:50.484  [2024-12-09 04:13:18.918779] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:22:50.484  [2024-12-09 04:13:18.918786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:22:50.484  [2024-12-09 04:13:18.918809] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:22:50.484  [2024-12-09 04:13:18.919002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:50.484  [2024-12-09 04:13:18.919029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1544050 with addr=10.0.0.2, port=4420
00:22:50.484  [2024-12-09 04:13:18.919045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1544050 is same with the state(6) to be set
00:22:50.484  [2024-12-09 04:13:18.919067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1544050 (9): Bad file descriptor
00:22:50.484  [2024-12-09 04:13:18.919087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:22:50.484  [2024-12-09 04:13:18.919102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:22:50.484  [2024-12-09 04:13:18.919115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:22:50.484  [2024-12-09 04:13:18.919128] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:22:50.484  [2024-12-09 04:13:18.919136] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:22:50.484  [2024-12-09 04:13:18.919144] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:22:50.484  [2024-12-09 04:13:18.928842] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:22:50.484  [2024-12-09 04:13:18.928861] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:22:50.484  [2024-12-09 04:13:18.928870] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:22:50.484  [2024-12-09 04:13:18.928876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:22:50.484  [2024-12-09 04:13:18.928904] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:22:50.484  [2024-12-09 04:13:18.929096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:50.484  [2024-12-09 04:13:18.929124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1544050 with addr=10.0.0.2, port=4420
00:22:50.484  [2024-12-09 04:13:18.929140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1544050 is same with the state(6) to be set
00:22:50.484  [2024-12-09 04:13:18.929162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1544050 (9): Bad file descriptor
00:22:50.484  [2024-12-09 04:13:18.929181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:22:50.484  [2024-12-09 04:13:18.929195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:22:50.484  [2024-12-09 04:13:18.929208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:22:50.484  [2024-12-09 04:13:18.929220] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:22:50.484  [2024-12-09 04:13:18.929228] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:22:50.484  [2024-12-09 04:13:18.929236] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:22:50.484    04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]]
00:22:50.484   04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
00:22:50.484  [2024-12-09 04:13:18.938937] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:22:50.484  [2024-12-09 04:13:18.938957] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:22:50.484  [2024-12-09 04:13:18.938965] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:22:50.484  [2024-12-09 04:13:18.938972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:22:50.484  [2024-12-09 04:13:18.938997] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:22:50.484  [2024-12-09 04:13:18.939098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:50.484  [2024-12-09 04:13:18.939137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1544050 with addr=10.0.0.2, port=4420
00:22:50.484  [2024-12-09 04:13:18.939153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1544050 is same with the state(6) to be set
00:22:50.484  [2024-12-09 04:13:18.939174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1544050 (9): Bad file descriptor
00:22:50.484  [2024-12-09 04:13:18.939193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:22:50.484  [2024-12-09 04:13:18.939207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:22:50.484  [2024-12-09 04:13:18.939220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:22:50.484  [2024-12-09 04:13:18.939233] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:22:50.484  [2024-12-09 04:13:18.939241] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:22:50.484  [2024-12-09 04:13:18.939249] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:22:50.484  [2024-12-09 04:13:18.940430] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found
00:22:50.484  [2024-12-09 04:13:18.940464] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:22:51.415   04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:51.415   04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]'
00:22:51.415     04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:22:51.415     04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:22:51.415     04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:22:51.415     04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:51.415     04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:51.415     04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:22:51.415     04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:22:51.415     04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:51.415    04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]]
00:22:51.415   04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:22:51.415   04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0
00:22:51.415   04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:22:51.415   04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:22:51.415   04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:22:51.415   04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:22:51.415   04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:51.415   04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:22:51.415    04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:22:51.415     04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:22:51.415     04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:22:51.415     04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:51.415     04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:51.673     04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:51.673    04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:22:51.673    04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:22:51.673    04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:22:51.673   04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:22:51.673   04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
00:22:51.673   04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:51.673   04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:51.673   04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:51.673   04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]'
00:22:51.673   04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]'
00:22:51.673   04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:22:51.673   04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:51.673   04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]'
00:22:51.673     04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:22:51.673     04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:22:51.673     04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:22:51.673     04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:51.673     04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:22:51.673     04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:51.673     04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:22:51.673     04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:51.673    04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]]
00:22:51.673   04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:22:51.673   04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]'
00:22:51.673   04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]'
00:22:51.673   04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:22:51.673   04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:51.673   04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]'
00:22:51.673     04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:22:51.673     04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:51.673     04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:22:51.673     04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:51.673     04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:51.673     04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:22:51.673     04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:22:51.673     04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:51.673    04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]]
00:22:51.673   04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:22:51.673   04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2
00:22:51.673   04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2
00:22:51.673   04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:22:51.673   04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:22:51.673   04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:22:51.673   04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:51.673   04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:22:51.673    04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:22:51.673     04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:22:51.673     04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:51.673     04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:22:51.673     04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:51.673     04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:51.673    04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2
00:22:51.673    04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4
00:22:51.673    04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:22:51.673   04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:22:51.673   04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:22:51.673   04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:51.673   04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:53.042  [2024-12-09 04:13:21.225430] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:22:53.042  [2024-12-09 04:13:21.225463] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:22:53.042  [2024-12-09 04:13:21.225487] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:22:53.042  [2024-12-09 04:13:21.311754] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0
00:22:53.042  [2024-12-09 04:13:21.377446] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421
00:22:53.042  [2024-12-09 04:13:21.378207] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x16a5550:1 started.
00:22:53.042  [2024-12-09 04:13:21.380376] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:22:53.042  [2024-12-09 04:13:21.380423] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:22:53.042   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:53.042   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:22:53.042   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0
00:22:53.042   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:22:53.042   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:22:53.042  [2024-12-09 04:13:21.383334] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x16a5550 was disconnected and freed. delete nvme_qpair.
00:22:53.042   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:53.042    04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:22:53.042   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:53.042   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:22:53.042   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:53.042   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:53.042  request:
00:22:53.042  {
00:22:53.042  "name": "nvme",
00:22:53.042  "trtype": "tcp",
00:22:53.042  "traddr": "10.0.0.2",
00:22:53.042  "adrfam": "ipv4",
00:22:53.042  "trsvcid": "8009",
00:22:53.042  "hostnqn": "nqn.2021-12.io.spdk:test",
00:22:53.042  "wait_for_attach": true,
00:22:53.042  "method": "bdev_nvme_start_discovery",
00:22:53.042  "req_id": 1
00:22:53.042  }
00:22:53.042  Got JSON-RPC error response
00:22:53.042  response:
00:22:53.042  {
00:22:53.042  "code": -17,
00:22:53.042  "message": "File exists"
00:22:53.042  }
00:22:53.042   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:22:53.042   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1
00:22:53.042   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:53.042   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:53.042   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:53.042    04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs
00:22:53.042    04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:22:53.042    04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:22:53.042    04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:53.042    04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:22:53.042    04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:53.042    04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:22:53.042    04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:53.042   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]]
00:22:53.042    04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list
00:22:53.042    04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:53.042    04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:53.042    04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:22:53.042    04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:53.042    04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:22:53.042    04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:22:53.042    04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:53.042   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:22:53.042   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:22:53.042   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0
00:22:53.042   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:22:53.042   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:22:53.042   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:53.042    04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:22:53.042   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:53.042   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:22:53.042   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:53.042   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:53.042  request:
00:22:53.042  {
00:22:53.042  "name": "nvme_second",
00:22:53.042  "trtype": "tcp",
00:22:53.042  "traddr": "10.0.0.2",
00:22:53.042  "adrfam": "ipv4",
00:22:53.042  "trsvcid": "8009",
00:22:53.042  "hostnqn": "nqn.2021-12.io.spdk:test",
00:22:53.042  "wait_for_attach": true,
00:22:53.042  "method": "bdev_nvme_start_discovery",
00:22:53.042  "req_id": 1
00:22:53.042  }
00:22:53.042  Got JSON-RPC error response
00:22:53.042  response:
00:22:53.042  {
00:22:53.042  "code": -17,
00:22:53.042  "message": "File exists"
00:22:53.043  }
00:22:53.043   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:22:53.043   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1
00:22:53.043   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:53.043   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:53.043   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:53.043    04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs
00:22:53.043    04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:22:53.043    04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:22:53.043    04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:53.043    04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:22:53.043    04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:53.043    04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:22:53.043    04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:53.043   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]]
00:22:53.043    04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list
00:22:53.043    04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:53.043    04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:22:53.043    04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:53.043    04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:53.043    04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:22:53.043    04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:22:53.043    04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:53.043   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:22:53.043   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:22:53.043   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0
00:22:53.043   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:22:53.043   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:22:53.043   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:53.043    04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:22:53.043   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:53.043   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:22:53.043   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:53.043   04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:54.416  [2024-12-09 04:13:22.592404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:54.416  [2024-12-09 04:13:22.592451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x155a510 with addr=10.0.0.2, port=8010
00:22:54.416  [2024-12-09 04:13:22.592503] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:22:54.416  [2024-12-09 04:13:22.592527] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:22:54.416  [2024-12-09 04:13:22.592539] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:22:55.349  [2024-12-09 04:13:23.594853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:55.349  [2024-12-09 04:13:23.594914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x155a510 with addr=10.0.0.2, port=8010
00:22:55.349  [2024-12-09 04:13:23.594944] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:22:55.349  [2024-12-09 04:13:23.594960] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:22:55.349  [2024-12-09 04:13:23.594988] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:22:56.282  [2024-12-09 04:13:24.596980] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr
00:22:56.282  request:
00:22:56.282  {
00:22:56.282  "name": "nvme_second",
00:22:56.282  "trtype": "tcp",
00:22:56.282  "traddr": "10.0.0.2",
00:22:56.282  "adrfam": "ipv4",
00:22:56.282  "trsvcid": "8010",
00:22:56.282  "hostnqn": "nqn.2021-12.io.spdk:test",
00:22:56.282  "wait_for_attach": false,
00:22:56.282  "attach_timeout_ms": 3000,
00:22:56.282  "method": "bdev_nvme_start_discovery",
00:22:56.282  "req_id": 1
00:22:56.282  }
00:22:56.282  Got JSON-RPC error response
00:22:56.282  response:
00:22:56.282  {
00:22:56.282  "code": -110,
00:22:56.282  "message": "Connection timed out"
00:22:56.282  }
00:22:56.282   04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:22:56.282   04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1
00:22:56.282   04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:56.282   04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:56.282   04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:56.282    04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs
00:22:56.282    04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:22:56.282    04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:22:56.282    04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:56.282    04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:56.282    04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:22:56.282    04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:22:56.282    04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:56.282   04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]]
00:22:56.282   04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT
00:22:56.282   04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 307491
00:22:56.282   04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini
00:22:56.282   04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:56.282   04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync
00:22:56.282   04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:56.282   04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e
00:22:56.282   04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:56.282   04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:56.282  rmmod nvme_tcp
00:22:56.282  rmmod nvme_fabrics
00:22:56.282  rmmod nvme_keyring
00:22:56.282   04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:56.282   04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e
00:22:56.282   04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0
00:22:56.282   04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 307466 ']'
00:22:56.282   04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 307466
00:22:56.282   04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 307466 ']'
00:22:56.282   04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 307466
00:22:56.282    04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname
00:22:56.282   04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:56.282    04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 307466
00:22:56.282   04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:22:56.282   04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:22:56.282   04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 307466'
00:22:56.282  killing process with pid 307466
00:22:56.282   04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 307466
00:22:56.282   04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 307466
00:22:56.540   04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:56.541   04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:22:56.541   04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:22:56.541   04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr
00:22:56.541   04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save
00:22:56.541   04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:22:56.541   04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore
00:22:56.541   04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:56.541   04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns
00:22:56.541   04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:56.541   04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:56.541    04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:59.071   04:13:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:22:59.071  
00:22:59.071  real	0m14.180s
00:22:59.071  user	0m20.863s
00:22:59.071  sys	0m2.864s
00:22:59.071   04:13:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:59.071   04:13:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:59.071  ************************************
00:22:59.071  END TEST nvmf_host_discovery
00:22:59.071  ************************************
00:22:59.071   04:13:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:22:59.071   04:13:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:22:59.071   04:13:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:59.071   04:13:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:22:59.071  ************************************
00:22:59.071  START TEST nvmf_host_multipath_status
00:22:59.071  ************************************
00:22:59.071   04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:22:59.071  * Looking for test storage...
00:22:59.071  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:22:59.071    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:22:59.071     04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version
00:22:59.071     04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:22:59.071    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:22:59.071    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:22:59.071    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l
00:22:59.071    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l
00:22:59.071    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-:
00:22:59.071    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1
00:22:59.071    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-:
00:22:59.071    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2
00:22:59.071    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<'
00:22:59.071    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2
00:22:59.071    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1
00:22:59.071    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:22:59.071    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in
00:22:59.071    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1
00:22:59.071    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 ))
00:22:59.071    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:22:59.071     04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1
00:22:59.071     04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1
00:22:59.071     04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:22:59.071     04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1
00:22:59.071    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1
00:22:59.071     04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2
00:22:59.071     04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2
00:22:59.071     04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:22:59.071     04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2
00:22:59.071    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2
00:22:59.071    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:22:59.071    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:22:59.071    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0
00:22:59.071    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:22:59.071    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:22:59.071  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:59.071  		--rc genhtml_branch_coverage=1
00:22:59.071  		--rc genhtml_function_coverage=1
00:22:59.071  		--rc genhtml_legend=1
00:22:59.071  		--rc geninfo_all_blocks=1
00:22:59.071  		--rc geninfo_unexecuted_blocks=1
00:22:59.071  		
00:22:59.071  		'
00:22:59.071    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:22:59.071  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:59.071  		--rc genhtml_branch_coverage=1
00:22:59.071  		--rc genhtml_function_coverage=1
00:22:59.071  		--rc genhtml_legend=1
00:22:59.071  		--rc geninfo_all_blocks=1
00:22:59.071  		--rc geninfo_unexecuted_blocks=1
00:22:59.071  		
00:22:59.071  		'
00:22:59.071    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:22:59.071  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:59.071  		--rc genhtml_branch_coverage=1
00:22:59.071  		--rc genhtml_function_coverage=1
00:22:59.071  		--rc genhtml_legend=1
00:22:59.071  		--rc geninfo_all_blocks=1
00:22:59.071  		--rc geninfo_unexecuted_blocks=1
00:22:59.071  		
00:22:59.071  		'
00:22:59.071    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:22:59.071  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:59.071  		--rc genhtml_branch_coverage=1
00:22:59.071  		--rc genhtml_function_coverage=1
00:22:59.071  		--rc genhtml_legend=1
00:22:59.071  		--rc geninfo_all_blocks=1
00:22:59.071  		--rc geninfo_unexecuted_blocks=1
00:22:59.071  		
00:22:59.071  		'
00:22:59.072   04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:22:59.072     04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s
00:22:59.072    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:22:59.072    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:22:59.072    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:22:59.072    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:22:59.072    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:22:59.072    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:22:59.072    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:22:59.072    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:22:59.072    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:22:59.072     04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:22:59.072    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:22:59.072    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:22:59.072    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:22:59.072    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:22:59.072    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:22:59.072    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:22:59.072    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:22:59.072     04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob
00:22:59.072     04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:22:59.072     04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:22:59.072     04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:22:59.072      04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:59.072      04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:59.072      04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:59.072      04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH
00:22:59.072      04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:59.072    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0
00:22:59.072    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:22:59.072    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:22:59.072    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:22:59.072    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:22:59.072    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:22:59.072    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:22:59.072  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:22:59.072    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:22:59.072    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:22:59.072    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0
00:22:59.072   04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64
00:22:59.072   04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:22:59.072   04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:22:59.072   04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh
00:22:59.072   04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:22:59.072   04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1
00:22:59.072   04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit
00:22:59.072   04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:22:59.072   04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:22:59.072   04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs
00:22:59.072   04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no
00:22:59.072   04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns
00:22:59.072   04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:59.072   04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:59.072    04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:59.072   04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:22:59.072   04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:22:59.072   04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable
00:22:59.072   04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=()
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=()
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=()
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=()
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=()
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=()
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=()
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:23:00.974  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:23:00.974  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]]
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:23:00.974  Found net devices under 0000:0a:00.0: cvl_0_0
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]]
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:23:00.974  Found net devices under 0000:0a:00.1: cvl_0_1
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:23:00.974   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:23:00.975   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:23:00.975   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:23:00.975   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:23:00.975   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:00.975   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:00.975   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:23:00.975   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:23:00.975  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:00.975  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms
00:23:00.975  
00:23:00.975  --- 10.0.0.2 ping statistics ---
00:23:00.975  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:00.975  rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms
00:23:00.975   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:00.975  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:00.975  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms
00:23:00.975  
00:23:00.975  --- 10.0.0.1 ping statistics ---
00:23:00.975  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:00.975  rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms
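The `nvmf_tcp_init` steps traced above build a single-host dual-port topology: one NIC port is moved into a dedicated network namespace to act as the NVMe/TCP target, while its sibling port stays in the root namespace as the initiator, and both directions are verified with `ping`. A minimal sketch of that sequence, using the interface names from this run (real NIC names will differ); it requires root and the two physical ports, so it is illustration only, not a runnable test:

```shell
#!/bin/sh
# Sketch of the topology nvmf_tcp_init builds in this log: the target port is
# isolated in its own namespace so target and initiator traffic really crosses
# the wire between the two ports instead of short-circuiting over loopback.
TGT_IF=cvl_0_0            # port handed to the target namespace (from this run)
INI_IF=cvl_0_1            # port left in the root namespace for the initiator
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP listener port on the initiator-side interface
# (mirrors the iptables rule the ipts helper inserts above):
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
# Verify connectivity in both directions, as the log does:
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Because the target lives in `cvl_0_0_ns_spdk`, every target-side command in the rest of the log is wrapped in `ip netns exec` via `NVMF_TARGET_NS_CMD`.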
00:23:00.975   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:00.975   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0
00:23:00.975   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:23:00.975   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:00.975   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:23:00.975   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:23:00.975   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:00.975   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:23:00.975   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:23:00.975   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3
00:23:00.975   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:23:00.975   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable
00:23:00.975   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:23:00.975   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=310669
00:23:00.975   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:23:00.975   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 310669
00:23:00.975   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 310669 ']'
00:23:00.975   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:00.975   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:00.975   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:00.975  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:00.975   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:00.975   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:23:01.233  [2024-12-09 04:13:29.558198] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:23:01.233  [2024-12-09 04:13:29.558308] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:01.233  [2024-12-09 04:13:29.630624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:23:01.233  [2024-12-09 04:13:29.689106] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:01.233  [2024-12-09 04:13:29.689167] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:01.233  [2024-12-09 04:13:29.689189] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:01.233  [2024-12-09 04:13:29.689200] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:01.233  [2024-12-09 04:13:29.689209] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:01.233  [2024-12-09 04:13:29.690650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:23:01.233  [2024-12-09 04:13:29.690656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:01.233   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:01.233   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0
00:23:01.233   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:23:01.233   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable
00:23:01.233   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:23:01.492   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:01.492   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=310669
00:23:01.492   04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:23:01.750  [2024-12-09 04:13:30.124118] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:01.750   04:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:23:02.008  Malloc0
00:23:02.008   04:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
00:23:02.266   04:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:23:02.523   04:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:02.780  [2024-12-09 04:13:31.312900] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:02.781   04:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:23:03.037  [2024-12-09 04:13:31.581640] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:23:03.037   04:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=310953
00:23:03.037   04:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
00:23:03.037   04:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:23:03.037   04:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 310953 /var/tmp/bdevperf.sock
00:23:03.037   04:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 310953 ']'
00:23:03.037   04:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:03.037   04:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:03.037   04:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:03.037  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:03.037   04:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:03.037   04:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:23:03.601   04:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:03.601   04:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0
00:23:03.602   04:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:23:03.602   04:13:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:23:04.165  Nvme0n1
00:23:04.165   04:13:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:23:04.729  Nvme0n1
00:23:04.729   04:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2
00:23:04.729   04:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
00:23:06.627   04:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized
00:23:06.627   04:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:23:06.884   04:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:23:07.447   04:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1
00:23:08.379   04:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true
00:23:08.379   04:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:23:08.379    04:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:08.379    04:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:23:08.636   04:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:08.636   04:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:23:08.636    04:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:08.636    04:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:23:08.894   04:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:23:08.894   04:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:23:08.894    04:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:08.894    04:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:23:09.152   04:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:09.152   04:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:23:09.152    04:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:09.152    04:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:23:09.409   04:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:09.409   04:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:23:09.410    04:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:09.410    04:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:23:09.667   04:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:09.667   04:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:23:09.667    04:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:09.667    04:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:23:09.925   04:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
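Each `port_status` call above runs `bdev_nvme_get_io_paths` over the bdevperf RPC socket and extracts one field with a `jq` filter. A small standalone sketch of that filter against sample data; the JSON nesting is inferred from the filters in this log (`poll_groups`, `io_paths`, `transport.trsvcid`, `current`/`connected`/`accessible` all appear there), not taken from SPDK documentation:

```shell
#!/bin/sh
# Hypothetical miniature of bdev_nvme_get_io_paths output, shaped to match
# the jq filters used by port_status in this log.
json='{
  "poll_groups": [
    { "io_paths": [
        { "transport": { "trsvcid": "4420" },
          "current": true,  "connected": true, "accessible": true },
        { "transport": { "trsvcid": "4421" },
          "current": false, "connected": true, "accessible": true }
    ] }
  ]
}'

# Same filter shape as port_status 4420 current:
status=$(printf '%s' "$json" | jq -r \
  '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").current')
echo "$status"
```

With both listeners ANA-optimized, the 4420 path is `current` and 4421 is not, which is exactly the `check_status true false true true true true` pattern verified above.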
00:23:09.925   04:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized
00:23:09.925   04:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:23:10.183   04:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:23:10.440   04:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1
00:23:11.371   04:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true
00:23:11.371   04:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:23:11.371    04:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:11.371    04:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:23:11.935   04:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:23:11.935   04:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:23:11.935    04:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:11.935    04:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:23:11.935   04:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:11.935   04:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:23:11.935    04:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:11.935    04:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:23:12.500   04:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:12.500   04:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:23:12.500    04:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:12.500    04:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:23:12.758   04:13:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:12.758   04:13:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:23:12.758    04:13:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:12.758    04:13:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:23:13.014   04:13:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:13.014   04:13:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:23:13.014    04:13:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:13.014    04:13:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:23:13.272   04:13:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:13.272   04:13:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized
00:23:13.272   04:13:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:23:13.529   04:13:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:23:13.787   04:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1
00:23:14.720   04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true
00:23:14.720   04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:23:14.720    04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:14.720    04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:23:14.978   04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:14.978   04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:23:14.978    04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:14.978    04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:23:15.235   04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:23:15.235   04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:23:15.235    04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:15.235    04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:23:15.492   04:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:15.492   04:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:23:15.492    04:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:15.492    04:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:23:15.750   04:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:15.750   04:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:23:15.750    04:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:15.750    04:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:23:16.007   04:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:16.007   04:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:23:16.007    04:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:16.007    04:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:23:16.264   04:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:16.264   04:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible
00:23:16.264   04:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:23:16.522   04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:23:17.084   04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1
00:23:18.029   04:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false
00:23:18.029   04:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:23:18.030    04:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:18.030    04:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:23:18.287   04:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:18.287   04:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:23:18.287    04:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:18.287    04:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:23:18.544   04:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:23:18.544   04:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:23:18.544    04:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:18.544    04:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:23:18.802   04:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:18.803   04:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:23:18.803    04:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:18.803    04:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:23:19.061   04:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:19.061   04:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:23:19.061    04:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:19.061    04:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:23:19.319   04:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:19.319   04:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:23:19.319    04:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:19.319    04:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:23:19.577   04:13:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:23:19.577   04:13:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
00:23:19.577   04:13:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:23:19.835   04:13:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:23:20.093   04:13:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1
00:23:21.026   04:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
00:23:21.026   04:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:23:21.026    04:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:21.026    04:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:23:21.592   04:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:23:21.592   04:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:23:21.592    04:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:21.592    04:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:23:21.592   04:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:23:21.592   04:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:23:21.592    04:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:21.592    04:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:23:22.159   04:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:22.159   04:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:23:22.159    04:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:22.159    04:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:23:22.159   04:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:22.159   04:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:23:22.159    04:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:22.159    04:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:23:22.417   04:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:23:22.417   04:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:23:22.417    04:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:22.417    04:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:23:22.983   04:13:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:23:22.983   04:13:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:23:22.983   04:13:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:23:22.983   04:13:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:23:23.244   04:13:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
00:23:24.613   04:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:23:24.613   04:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:23:24.613    04:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:24.613    04:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:23:24.613   04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:23:24.613   04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:23:24.613    04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:24.613    04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:23:24.870   04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:24.870   04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:23:24.870    04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:24.870    04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:23:25.128   04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:25.128   04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:23:25.128    04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:25.128    04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:23:25.386   04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:25.386   04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:23:25.386    04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:25.386    04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:23:25.643   04:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:23:25.643   04:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:23:25.643    04:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:25.643    04:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:23:26.208   04:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:26.208   04:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
00:23:26.208   04:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:23:26.208   04:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:23:26.465   04:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:23:27.028   04:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:23:27.959   04:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:23:27.959   04:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:23:27.959    04:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:27.959    04:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:23:28.216   04:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:28.217   04:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:23:28.217    04:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:28.217    04:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:23:28.474   04:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:28.474   04:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:23:28.474    04:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:28.474    04:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:23:28.732   04:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:28.732   04:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:23:28.732    04:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:28.732    04:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:23:28.990   04:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:28.990   04:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:23:28.990    04:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:28.990    04:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:23:29.248   04:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:29.248   04:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:23:29.248    04:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:29.248    04:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:23:29.506   04:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:29.506   04:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
00:23:29.506   04:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:23:29.764   04:13:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:23:30.021   04:13:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
00:23:31.393   04:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true
00:23:31.393   04:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:23:31.393    04:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:31.393    04:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:23:31.393   04:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:23:31.393   04:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:23:31.393    04:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:31.393    04:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:23:31.651   04:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:31.651   04:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:23:31.651    04:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:31.651    04:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:23:31.909   04:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:31.909   04:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:23:31.909    04:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:31.909    04:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:23:32.167   04:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:32.167   04:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:23:32.167    04:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:32.167    04:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:23:32.425   04:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:32.425   04:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:23:32.425    04:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:32.425    04:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:23:32.682   04:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:32.682   04:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:23:32.682   04:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:23:32.939   04:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:23:33.196   04:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
00:23:34.585   04:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:23:34.585   04:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:23:34.585    04:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:34.585    04:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:23:34.585   04:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:34.585   04:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:23:34.585    04:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:34.585    04:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:23:34.843   04:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:34.843   04:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:23:34.843    04:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:34.843    04:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:23:35.100   04:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:35.100   04:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:23:35.100    04:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:35.101    04:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:23:35.358   04:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:35.358   04:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:23:35.358    04:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:35.358    04:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:23:35.615   04:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:35.616   04:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:23:35.616    04:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:35.616    04:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:23:35.933   04:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:35.933   04:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:23:35.933   04:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:23:36.191   04:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:23:36.448   04:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:23:37.398   04:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:23:37.398   04:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:23:37.398    04:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:37.398    04:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:23:37.963   04:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:37.963   04:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:23:37.963    04:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:37.963    04:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:23:37.963   04:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:23:37.963   04:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:23:37.963    04:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:37.963    04:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:23:38.528   04:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:38.528   04:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:23:38.528    04:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:38.529    04:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:23:38.529   04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:38.529   04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:23:38.529    04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:38.529    04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:23:38.787   04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:38.787   04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:23:38.787    04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:38.787    04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:23:39.046   04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:23:39.046   04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 310953
00:23:39.046   04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 310953 ']'
00:23:39.046   04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 310953
00:23:39.046    04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:23:39.046   04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:39.046    04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 310953
00:23:39.308   04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:23:39.308   04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:23:39.308   04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 310953'
00:23:39.308  killing process with pid 310953
00:23:39.308   04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 310953
00:23:39.308   04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 310953
00:23:39.308  {
00:23:39.308    "results": [
00:23:39.308      {
00:23:39.308        "job": "Nvme0n1",
00:23:39.308        "core_mask": "0x4",
00:23:39.308        "workload": "verify",
00:23:39.308        "status": "terminated",
00:23:39.308        "verify_range": {
00:23:39.308          "start": 0,
00:23:39.308          "length": 16384
00:23:39.308        },
00:23:39.308        "queue_depth": 128,
00:23:39.308        "io_size": 4096,
00:23:39.308        "runtime": 34.383602,
00:23:39.308        "iops": 7962.807387079457,
00:23:39.308        "mibps": 31.10471635577913,
00:23:39.308        "io_failed": 0,
00:23:39.308        "io_timeout": 0,
00:23:39.308        "avg_latency_us": 16047.606952438542,
00:23:39.308        "min_latency_us": 588.6103703703703,
00:23:39.308        "max_latency_us": 4026531.84
00:23:39.308      }
00:23:39.308    ],
00:23:39.308    "core_count": 1
00:23:39.308  }
00:23:39.308   04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 310953
00:23:39.308   04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:39.308  [2024-12-09 04:13:31.648032] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:23:39.308  [2024-12-09 04:13:31.648128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid310953 ]
00:23:39.308  [2024-12-09 04:13:31.715078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:39.308  [2024-12-09 04:13:31.772385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:23:39.308  Running I/O for 90 seconds...
00:23:39.308       8402.00 IOPS,    32.82 MiB/s
[2024-12-09T03:14:07.884Z]      8476.00 IOPS,    33.11 MiB/s
[2024-12-09T03:14:07.884Z]      8471.33 IOPS,    33.09 MiB/s
[2024-12-09T03:14:07.884Z]      8452.00 IOPS,    33.02 MiB/s
[2024-12-09T03:14:07.884Z]      8482.60 IOPS,    33.14 MiB/s
[2024-12-09T03:14:07.884Z]      8486.00 IOPS,    33.15 MiB/s
[2024-12-09T03:14:07.884Z]      8476.00 IOPS,    33.11 MiB/s
[2024-12-09T03:14:07.884Z]      8470.25 IOPS,    33.09 MiB/s
[2024-12-09T03:14:07.884Z]      8487.89 IOPS,    33.16 MiB/s
[2024-12-09T03:14:07.884Z]      8489.90 IOPS,    33.16 MiB/s
[2024-12-09T03:14:07.884Z]      8481.18 IOPS,    33.13 MiB/s
[2024-12-09T03:14:07.884Z]      8464.08 IOPS,    33.06 MiB/s
[2024-12-09T03:14:07.884Z]      8463.54 IOPS,    33.06 MiB/s
[2024-12-09T03:14:07.884Z]      8457.57 IOPS,    33.04 MiB/s
[2024-12-09T03:14:07.884Z]      8450.73 IOPS,    33.01 MiB/s
[2024-12-09T03:14:07.884Z] [2024-12-09 04:13:48.271343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.308  [2024-12-09 04:13:48.271397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:23:39.308  [2024-12-09 04:13:48.271484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.308  [2024-12-09 04:13:48.271507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:23:39.308  [2024-12-09 04:13:48.271532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.308  [2024-12-09 04:13:48.271549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:23:39.308  [2024-12-09 04:13:48.271590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.308  [2024-12-09 04:13:48.271607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:23:39.308  [2024-12-09 04:13:48.271630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.308  [2024-12-09 04:13:48.271647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:39.308  [2024-12-09 04:13:48.271670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.308  [2024-12-09 04:13:48.271687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:23:39.308  [2024-12-09 04:13:48.271709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.308  [2024-12-09 04:13:48.271740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:23:39.308  [2024-12-09 04:13:48.271763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.308  [2024-12-09 04:13:48.271779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:23:39.308  [2024-12-09 04:13:48.271818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.308  [2024-12-09 04:13:48.271834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:23:39.308  [2024-12-09 04:13:48.271868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.308  [2024-12-09 04:13:48.271885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:23:39.308  [2024-12-09 04:13:48.271907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.308  [2024-12-09 04:13:48.271923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:23:39.308  [2024-12-09 04:13:48.271944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.308  [2024-12-09 04:13:48.271961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:23:39.308  [2024-12-09 04:13:48.271982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.308  [2024-12-09 04:13:48.271998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:23:39.308  [2024-12-09 04:13:48.272020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.309  [2024-12-09 04:13:48.272050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:23:39.309  [2024-12-09 04:13:48.272072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.309  [2024-12-09 04:13:48.272102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:23:39.309  [2024-12-09 04:13:48.272125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.309  [2024-12-09 04:13:48.272141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:23:39.309  [2024-12-09 04:13:48.272163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.309  [2024-12-09 04:13:48.272179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:23:39.309  [2024-12-09 04:13:48.272200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.309  [2024-12-09 04:13:48.272216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:23:39.309  [2024-12-09 04:13:48.272238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.309  [2024-12-09 04:13:48.272270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:23:39.309  [2024-12-09 04:13:48.272305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:99152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.309  [2024-12-09 04:13:48.272339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:23:39.309  [2024-12-09 04:13:48.272362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.309  [2024-12-09 04:13:48.272378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:23:39.309  [2024-12-09 04:13:48.272417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.309  [2024-12-09 04:13:48.272439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:23:39.309  [2024-12-09 04:13:48.272462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:99176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.309  [2024-12-09 04:13:48.272479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:23:39.309  [2024-12-09 04:13:48.272503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.309  [2024-12-09 04:13:48.272520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:23:39.309  [2024-12-09 04:13:48.272886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:99192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.309  [2024-12-09 04:13:48.272910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:23:39.309  [2024-12-09 04:13:48.272939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.309  [2024-12-09 04:13:48.272957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:23:39.309  [2024-12-09 04:13:48.272983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.309  [2024-12-09 04:13:48.273000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:23:39.309  [2024-12-09 04:13:48.273025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.309  [2024-12-09 04:13:48.273042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:23:39.309  [2024-12-09 04:13:48.273066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.309  [2024-12-09 04:13:48.273083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:23:39.309  [2024-12-09 04:13:48.273108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:98648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.309  [2024-12-09 04:13:48.273125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:23:39.309  [2024-12-09 04:13:48.273149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.309  [2024-12-09 04:13:48.273166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:23:39.309  [2024-12-09 04:13:48.273191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:98664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.309  [2024-12-09 04:13:48.273207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:23:39.309  [2024-12-09 04:13:48.273231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:98672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.309  [2024-12-09 04:13:48.273248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:23:39.309  [2024-12-09 04:13:48.273281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:98680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.309  [2024-12-09 04:13:48.273319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:23:39.309  [2024-12-09 04:13:48.273346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.309  [2024-12-09 04:13:48.273363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:23:39.309  [2024-12-09 04:13:48.273386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.309  [2024-12-09 04:13:48.273403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:23:39.309  [2024-12-09 04:13:48.273427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:99232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.309  [2024-12-09 04:13:48.273443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:39.309  [2024-12-09 04:13:48.273483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.309  [2024-12-09 04:13:48.273500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:23:39.309  [2024-12-09 04:13:48.273524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.309  [2024-12-09 04:13:48.273541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:23:39.309  [2024-12-09 04:13:48.273565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.309  [2024-12-09 04:13:48.273581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:23:39.309  [2024-12-09 04:13:48.273605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:99264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.309  [2024-12-09 04:13:48.273622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:23:39.309  [2024-12-09 04:13:48.273645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.309  [2024-12-09 04:13:48.273663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:23:39.309  [2024-12-09 04:13:48.273687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:99280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.309  [2024-12-09 04:13:48.273704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:23:39.309  [2024-12-09 04:13:48.273727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.309  [2024-12-09 04:13:48.273744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:23:39.309  [2024-12-09 04:13:48.273785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.309  [2024-12-09 04:13:48.273801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:23:39.309  [2024-12-09 04:13:48.273839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.309  [2024-12-09 04:13:48.273855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:23:39.309  [2024-12-09 04:13:48.273882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.309  [2024-12-09 04:13:48.273899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:23:39.309  [2024-12-09 04:13:48.273922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.309  [2024-12-09 04:13:48.273937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:23:39.309  [2024-12-09 04:13:48.273960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.309  [2024-12-09 04:13:48.273976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:23:39.309  [2024-12-09 04:13:48.273998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.309  [2024-12-09 04:13:48.274014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:23:39.309  [2024-12-09 04:13:48.274037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.309  [2024-12-09 04:13:48.274052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:23:39.309  [2024-12-09 04:13:48.274075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.309  [2024-12-09 04:13:48.274090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:23:39.310  [2024-12-09 04:13:48.274113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.310  [2024-12-09 04:13:48.274129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:23:39.310  [2024-12-09 04:13:48.274152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.310  [2024-12-09 04:13:48.274168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:23:39.310  [2024-12-09 04:13:48.274190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.310  [2024-12-09 04:13:48.274206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:23:39.310  [2024-12-09 04:13:48.274229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:99384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.310  [2024-12-09 04:13:48.274244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:23:39.310  [2024-12-09 04:13:48.274290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:99392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.310  [2024-12-09 04:13:48.274309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:23:39.310  [2024-12-09 04:13:48.274333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.310  [2024-12-09 04:13:48.274350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:23:39.310  [2024-12-09 04:13:48.274381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:99408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.310  [2024-12-09 04:13:48.274398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:23:39.310  [2024-12-09 04:13:48.274422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:99416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.310  [2024-12-09 04:13:48.274438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:23:39.310  [2024-12-09 04:13:48.274462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:99424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.310  [2024-12-09 04:13:48.274478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:23:39.310  [2024-12-09 04:13:48.274502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.310  [2024-12-09 04:13:48.274518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:23:39.310  [2024-12-09 04:13:48.274556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.310  [2024-12-09 04:13:48.274574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:23:39.310  [2024-12-09 04:13:48.274599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.310  [2024-12-09 04:13:48.274616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:23:39.310  [2024-12-09 04:13:48.274640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.310  [2024-12-09 04:13:48.274656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:23:39.310  [2024-12-09 04:13:48.274681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:99464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.310  [2024-12-09 04:13:48.274698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:23:39.310  [2024-12-09 04:13:48.274722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.310  [2024-12-09 04:13:48.274739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:23:39.310  [2024-12-09 04:13:48.274763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:99480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.310  [2024-12-09 04:13:48.274780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.310  [2024-12-09 04:13:48.274805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:99488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.310  [2024-12-09 04:13:48.274823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:39.310  [2024-12-09 04:13:48.274847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:99496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.310  [2024-12-09 04:13:48.274863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:23:39.310  [2024-12-09 04:13:48.274887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.310  [2024-12-09 04:13:48.274907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:23:39.310  [2024-12-09 04:13:48.275038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.310  [2024-12-09 04:13:48.275075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:23:39.310  [2024-12-09 04:13:48.275105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.310  [2024-12-09 04:13:48.275122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:23:39.310  [2024-12-09 04:13:48.275149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:99528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.310  [2024-12-09 04:13:48.275165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:23:39.310  [2024-12-09 04:13:48.275191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.310  [2024-12-09 04:13:48.275207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:23:39.310  [2024-12-09 04:13:48.275233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:99544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.310  [2024-12-09 04:13:48.275264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:23:39.310  [2024-12-09 04:13:48.275299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.310  [2024-12-09 04:13:48.275332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:23:39.310  [2024-12-09 04:13:48.275359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:99560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.310  [2024-12-09 04:13:48.275375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:23:39.310  [2024-12-09 04:13:48.275401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.310  [2024-12-09 04:13:48.275417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:23:39.310  [2024-12-09 04:13:48.275442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.310  [2024-12-09 04:13:48.275458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:23:39.310  [2024-12-09 04:13:48.275484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:99584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.310  [2024-12-09 04:13:48.275501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:23:39.310  [2024-12-09 04:13:48.275526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.310  [2024-12-09 04:13:48.275542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:23:39.310  [2024-12-09 04:13:48.275570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.310  [2024-12-09 04:13:48.275590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:23:39.310  [2024-12-09 04:13:48.275631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.310  [2024-12-09 04:13:48.275649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:23:39.310  [2024-12-09 04:13:48.275674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.310  [2024-12-09 04:13:48.275690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:23:39.310  [2024-12-09 04:13:48.275715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:99624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.310  [2024-12-09 04:13:48.275731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:23:39.310  [2024-12-09 04:13:48.275757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.310  [2024-12-09 04:13:48.275773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:23:39.310  [2024-12-09 04:13:48.275798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:99640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.310  [2024-12-09 04:13:48.275813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:23:39.310  [2024-12-09 04:13:48.275838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.310  [2024-12-09 04:13:48.275854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:23:39.310  [2024-12-09 04:13:48.275879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:98696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.310  [2024-12-09 04:13:48.275894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:23:39.310  [2024-12-09 04:13:48.275919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.310  [2024-12-09 04:13:48.275935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:23:39.311  [2024-12-09 04:13:48.275960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:98712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.311  [2024-12-09 04:13:48.275975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:23:39.311  [2024-12-09 04:13:48.276000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.311  [2024-12-09 04:13:48.276016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:23:39.311  [2024-12-09 04:13:48.276041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:98728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.311  [2024-12-09 04:13:48.276056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:23:39.311  [2024-12-09 04:13:48.276082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:98736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.311  [2024-12-09 04:13:48.276097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:23:39.311  [2024-12-09 04:13:48.276126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:98744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.311  [2024-12-09 04:13:48.276142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:23:39.311  [2024-12-09 04:13:48.276167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:98752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.311  [2024-12-09 04:13:48.276183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:23:39.311  [2024-12-09 04:13:48.276208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:98760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.311  [2024-12-09 04:13:48.276224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:23:39.311  [2024-12-09 04:13:48.276249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.311  [2024-12-09 04:13:48.276292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:23:39.311  [2024-12-09 04:13:48.276322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:98776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.311  [2024-12-09 04:13:48.276339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:23:39.311  [2024-12-09 04:13:48.276365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:98784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.311  [2024-12-09 04:13:48.276381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:39.311  [2024-12-09 04:13:48.276407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:98792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.311  [2024-12-09 04:13:48.276423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:23:39.311  [2024-12-09 04:13:48.276449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:98800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.311  [2024-12-09 04:13:48.276465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:23:39.311  [2024-12-09 04:13:48.276490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:99648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.311  [2024-12-09 04:13:48.276506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:23:39.311  [2024-12-09 04:13:48.276533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:98808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.311  [2024-12-09 04:13:48.276549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:23:39.311  [2024-12-09 04:13:48.276590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:98816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.311  [2024-12-09 04:13:48.276606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:23:39.311  [2024-12-09 04:13:48.276632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:98824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.311  [2024-12-09 04:13:48.276647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:23:39.311  [2024-12-09 04:13:48.276677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:98832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.311  [2024-12-09 04:13:48.276693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:23:39.311  [2024-12-09 04:13:48.276718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.311  [2024-12-09 04:13:48.276733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:23:39.311  [2024-12-09 04:13:48.276759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:98848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.311  [2024-12-09 04:13:48.276774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:23:39.311  [2024-12-09 04:13:48.276799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:98856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.311  [2024-12-09 04:13:48.276814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:23:39.311  [2024-12-09 04:13:48.276840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:98864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.311  [2024-12-09 04:13:48.276855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:23:39.311  [2024-12-09 04:13:48.276880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:98872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.311  [2024-12-09 04:13:48.276896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:23:39.311  [2024-12-09 04:13:48.276921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:98880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.311  [2024-12-09 04:13:48.276936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:23:39.311  [2024-12-09 04:13:48.276978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:98888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.311  [2024-12-09 04:13:48.276995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:23:39.311  [2024-12-09 04:13:48.277021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:98896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.311  [2024-12-09 04:13:48.277037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:23:39.311  [2024-12-09 04:13:48.277064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:98904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.311  [2024-12-09 04:13:48.277080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:23:39.311  [2024-12-09 04:13:48.277106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:98912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.311  [2024-12-09 04:13:48.277122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:23:39.311  [2024-12-09 04:13:48.277148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:98920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.311  [2024-12-09 04:13:48.277164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:23:39.311  [2024-12-09 04:13:48.277191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:98928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.311  [2024-12-09 04:13:48.277211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:23:39.311  [2024-12-09 04:13:48.277238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.311  [2024-12-09 04:13:48.277254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:23:39.311  [2024-12-09 04:13:48.277287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:98944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.311  [2024-12-09 04:13:48.277305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:23:39.311  [2024-12-09 04:13:48.277332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:98952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.311  [2024-12-09 04:13:48.277349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:23:39.311  [2024-12-09 04:13:48.277375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:98960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.311  [2024-12-09 04:13:48.277391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:23:39.311  [2024-12-09 04:13:48.277417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:98968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.311  [2024-12-09 04:13:48.277433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:23:39.311  [2024-12-09 04:13:48.277459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:98976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.311  [2024-12-09 04:13:48.277474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:23:39.311  [2024-12-09 04:13:48.277500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:98984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.311  [2024-12-09 04:13:48.277516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:23:39.311  [2024-12-09 04:13:48.277542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:98992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.311  [2024-12-09 04:13:48.277559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:23:39.311       7938.56 IOPS,    31.01 MiB/s
[2024-12-09T03:14:07.887Z]      7471.59 IOPS,    29.19 MiB/s
[2024-12-09T03:14:07.887Z]      7056.50 IOPS,    27.56 MiB/s
[2024-12-09T03:14:07.887Z]      6685.11 IOPS,    26.11 MiB/s
[2024-12-09T03:14:07.887Z]      6764.75 IOPS,    26.42 MiB/s
[2024-12-09T03:14:07.887Z]      6842.43 IOPS,    26.73 MiB/s
[2024-12-09T03:14:07.887Z]      6943.73 IOPS,    27.12 MiB/s
[2024-12-09T03:14:07.887Z]      7128.57 IOPS,    27.85 MiB/s
[2024-12-09T03:14:07.887Z]      7292.46 IOPS,    28.49 MiB/s
[2024-12-09T03:14:07.887Z]      7438.92 IOPS,    29.06 MiB/s
[2024-12-09T03:14:07.888Z]      7484.54 IOPS,    29.24 MiB/s
[2024-12-09T03:14:07.888Z]      7522.67 IOPS,    29.39 MiB/s
[2024-12-09T03:14:07.888Z]      7557.79 IOPS,    29.52 MiB/s
[2024-12-09T03:14:07.888Z]      7630.93 IOPS,    29.81 MiB/s
[2024-12-09T03:14:07.888Z]      7753.23 IOPS,    30.29 MiB/s
[2024-12-09T03:14:07.888Z]      7857.32 IOPS,    30.69 MiB/s
[2024-12-09T03:14:07.888Z] [2024-12-09 04:14:04.949405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:42048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.312  [2024-12-09 04:14:04.949476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:23:39.312  [2024-12-09 04:14:04.949549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:42064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.312  [2024-12-09 04:14:04.949571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:23:39.312  [2024-12-09 04:14:04.949620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:42080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.312  [2024-12-09 04:14:04.949638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:23:39.312  [2024-12-09 04:14:04.949662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:42096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.312  [2024-12-09 04:14:04.949680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:23:39.312  [2024-12-09 04:14:04.949704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:42112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.312  [2024-12-09 04:14:04.949736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:23:39.312  [2024-12-09 04:14:04.949761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:42128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.312  [2024-12-09 04:14:04.949792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:23:39.312  [2024-12-09 04:14:04.949815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.312  [2024-12-09 04:14:04.949830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:23:39.312  [2024-12-09 04:14:04.949851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.312  [2024-12-09 04:14:04.949867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:23:39.312  [2024-12-09 04:14:04.949887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:42176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.312  [2024-12-09 04:14:04.949902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:23:39.312  [2024-12-09 04:14:04.949923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.312  [2024-12-09 04:14:04.949938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:23:39.312  [2024-12-09 04:14:04.949960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:42208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.312  [2024-12-09 04:14:04.949975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:23:39.312  [2024-12-09 04:14:04.949996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:42224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.312  [2024-12-09 04:14:04.950012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:23:39.312  [2024-12-09 04:14:04.950033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:42240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.312  [2024-12-09 04:14:04.950049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:23:39.312  [2024-12-09 04:14:04.950070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:42256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.312  [2024-12-09 04:14:04.950085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.312  [2024-12-09 04:14:04.950111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:42272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.312  [2024-12-09 04:14:04.950127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:39.312  [2024-12-09 04:14:04.950149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:42288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.312  [2024-12-09 04:14:04.950165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:23:39.312  [2024-12-09 04:14:04.950186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:42304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.312  [2024-12-09 04:14:04.950202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:23:39.312  [2024-12-09 04:14:04.950461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:42320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.312  [2024-12-09 04:14:04.950484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:23:39.312  [2024-12-09 04:14:04.950513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:42336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.312  [2024-12-09 04:14:04.950531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:23:39.312  [2024-12-09 04:14:04.950555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:42352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.312  [2024-12-09 04:14:04.950575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:23:39.312  [2024-12-09 04:14:04.950598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:41416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.312  [2024-12-09 04:14:04.950615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:23:39.312  [2024-12-09 04:14:04.950637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:41448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.312  [2024-12-09 04:14:04.950654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:23:39.312  [2024-12-09 04:14:04.950677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:41480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.312  [2024-12-09 04:14:04.950693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:23:39.312  [2024-12-09 04:14:04.950716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:41512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.312  [2024-12-09 04:14:04.950733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:23:39.312  [2024-12-09 04:14:04.950755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:41544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.312  [2024-12-09 04:14:04.950772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:23:39.312  [2024-12-09 04:14:04.950795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:41576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.312  [2024-12-09 04:14:04.950827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:23:39.312  [2024-12-09 04:14:04.950851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:41600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.312  [2024-12-09 04:14:04.950872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:23:39.312  [2024-12-09 04:14:04.950895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:41632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.312  [2024-12-09 04:14:04.950911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:23:39.312  [2024-12-09 04:14:04.950933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:41664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.312  [2024-12-09 04:14:04.950950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:23:39.312  [2024-12-09 04:14:04.950972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:41696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.312  [2024-12-09 04:14:04.950988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:23:39.312  [2024-12-09 04:14:04.951010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:42360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.312  [2024-12-09 04:14:04.951026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:23:39.313  [2024-12-09 04:14:04.951048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:41728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.313  [2024-12-09 04:14:04.951064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:23:39.313  [2024-12-09 04:14:04.951086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:41760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.313  [2024-12-09 04:14:04.951102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:23:39.313  [2024-12-09 04:14:04.951124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:41792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.313  [2024-12-09 04:14:04.951140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:23:39.313  [2024-12-09 04:14:04.951162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:41824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.313  [2024-12-09 04:14:04.951178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:23:39.313  [2024-12-09 04:14:04.951200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:41856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.313  [2024-12-09 04:14:04.951216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:23:39.313  [2024-12-09 04:14:04.951238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:41888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.313  [2024-12-09 04:14:04.951264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:23:39.313  [2024-12-09 04:14:04.951311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:41920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.313  [2024-12-09 04:14:04.951329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:23:39.313  [2024-12-09 04:14:04.951352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:41952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.313  [2024-12-09 04:14:04.951373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:23:39.313  [2024-12-09 04:14:04.951398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:41984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.313  [2024-12-09 04:14:04.951415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:23:39.313  [2024-12-09 04:14:04.951438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:42016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.313  [2024-12-09 04:14:04.951454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:23:39.313  [2024-12-09 04:14:04.951477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:41608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.313  [2024-12-09 04:14:04.951494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:23:39.313  [2024-12-09 04:14:04.951517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:41640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.313  [2024-12-09 04:14:04.951533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:23:39.313  [2024-12-09 04:14:04.951557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:41672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.313  [2024-12-09 04:14:04.951573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:23:39.313  [2024-12-09 04:14:04.951596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:41704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.313  [2024-12-09 04:14:04.951614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:23:39.313  [2024-12-09 04:14:04.951636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:42376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.313  [2024-12-09 04:14:04.951653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:23:39.313  [2024-12-09 04:14:04.951676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:41736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.313  [2024-12-09 04:14:04.951693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:39.313  [2024-12-09 04:14:04.951716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.313  [2024-12-09 04:14:04.951733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:23:39.313  [2024-12-09 04:14:04.951756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:41800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.313  [2024-12-09 04:14:04.951772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:23:39.313  [2024-12-09 04:14:04.951796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:41832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.313  [2024-12-09 04:14:04.951813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:23:39.313  [2024-12-09 04:14:04.951836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:41864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.313  [2024-12-09 04:14:04.951853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:23:39.313  [2024-12-09 04:14:04.951881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:41896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.313  [2024-12-09 04:14:04.951898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:23:39.313  [2024-12-09 04:14:04.951921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:41928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.313  [2024-12-09 04:14:04.951938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:23:39.313  [2024-12-09 04:14:04.951960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:41960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.313  [2024-12-09 04:14:04.951976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:23:39.313  [2024-12-09 04:14:04.952014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:42400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.313  [2024-12-09 04:14:04.952030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:23:39.313  [2024-12-09 04:14:04.952068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:41992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.313  [2024-12-09 04:14:04.952085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:23:39.313  [2024-12-09 04:14:04.952108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:42024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.313  [2024-12-09 04:14:04.952124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:23:39.313  [2024-12-09 04:14:04.952147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:42408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.313  [2024-12-09 04:14:04.952163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:23:39.313  [2024-12-09 04:14:04.952185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:42424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.313  [2024-12-09 04:14:04.952202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:23:39.313  [2024-12-09 04:14:04.952224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:42440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.313  [2024-12-09 04:14:04.952240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:23:39.313  [2024-12-09 04:14:04.952281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:42456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.313  [2024-12-09 04:14:04.952300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:23:39.313  [2024-12-09 04:14:04.952322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:42472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.313  [2024-12-09 04:14:04.952339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:23:39.313  [2024-12-09 04:14:04.952362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:42488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.313  [2024-12-09 04:14:04.952378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:23:39.313  [2024-12-09 04:14:04.952406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:42504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:39.313  [2024-12-09 04:14:04.952423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:23:39.313       7933.78 IOPS,    30.99 MiB/s
[2024-12-09T03:14:07.889Z]      7948.09 IOPS,    31.05 MiB/s
[2024-12-09T03:14:07.889Z]      7961.38 IOPS,    31.10 MiB/s
[2024-12-09T03:14:07.889Z] Received shutdown signal, test time was about 34.384464 seconds
00:23:39.313  
00:23:39.313                                                                                                  Latency(us)
00:23:39.313  
[2024-12-09T03:14:07.889Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:39.313  Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:39.313  	 Verification LBA range: start 0x0 length 0x4000
00:23:39.313  	 Nvme0n1             :      34.38    7962.81      31.10       0.00     0.00   16047.61     588.61 4026531.84
00:23:39.313  
[2024-12-09T03:14:07.889Z]  ===================================================================================================================
00:23:39.313  
[2024-12-09T03:14:07.889Z]  Total                       :               7962.81      31.10       0.00     0.00   16047.61     588.61 4026531.84
00:23:39.313   04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:39.880   04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:23:39.880   04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:39.880   04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:23:39.880   04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:39.880   04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:23:39.880   04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:39.880   04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:23:39.880   04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:39.880   04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:23:39.880  rmmod nvme_tcp
00:23:39.880  rmmod nvme_fabrics
00:23:39.880  rmmod nvme_keyring
00:23:39.880   04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:39.880   04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:23:39.880   04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:23:39.880   04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 310669 ']'
00:23:39.880   04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 310669
00:23:39.880   04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 310669 ']'
00:23:39.880   04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 310669
00:23:39.880    04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:23:39.880   04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:39.880    04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 310669
00:23:39.880   04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:23:39.880   04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:23:39.880   04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 310669'
00:23:39.880  killing process with pid 310669
00:23:39.880   04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 310669
00:23:39.880   04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 310669
00:23:40.140   04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:23:40.140   04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:23:40.140   04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:23:40.140   04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:23:40.140   04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:23:40.140   04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:23:40.140   04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:23:40.140   04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:23:40.140   04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:23:40.140   04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:40.140   04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:40.140    04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:42.046   04:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:23:42.046  
00:23:42.046  real	0m43.463s
00:23:42.046  user	2m12.626s
00:23:42.046  sys	0m10.599s
00:23:42.046   04:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:42.046   04:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:23:42.046  ************************************
00:23:42.046  END TEST nvmf_host_multipath_status
00:23:42.046  ************************************
00:23:42.046   04:14:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:23:42.046   04:14:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:23:42.046   04:14:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:42.046   04:14:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:23:42.046  ************************************
00:23:42.046  START TEST nvmf_discovery_remove_ifc
00:23:42.046  ************************************
00:23:42.046   04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:23:42.306  * Looking for test storage...
00:23:42.306  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:23:42.306    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:23:42.306     04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version
00:23:42.306     04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:23:42.306    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:23:42.306    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:23:42.306    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:23:42.306    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:23:42.306    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-:
00:23:42.306    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1
00:23:42.306    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-:
00:23:42.306    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2
00:23:42.306    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<'
00:23:42.306    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2
00:23:42.306    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1
00:23:42.306    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:23:42.306    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in
00:23:42.306    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1
00:23:42.306    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 ))
00:23:42.306    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:23:42.306     04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1
00:23:42.306     04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1
00:23:42.306     04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:23:42.306     04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1
00:23:42.306    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1
00:23:42.306     04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2
00:23:42.306     04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2
00:23:42.306     04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:23:42.306     04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2
00:23:42.306    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2
00:23:42.306    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:23:42.306    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:23:42.306    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0
00:23:42.306    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:23:42.306    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:23:42.306  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:42.306  		--rc genhtml_branch_coverage=1
00:23:42.306  		--rc genhtml_function_coverage=1
00:23:42.306  		--rc genhtml_legend=1
00:23:42.306  		--rc geninfo_all_blocks=1
00:23:42.306  		--rc geninfo_unexecuted_blocks=1
00:23:42.306  		
00:23:42.306  		'
00:23:42.306    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:23:42.306  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:42.306  		--rc genhtml_branch_coverage=1
00:23:42.306  		--rc genhtml_function_coverage=1
00:23:42.306  		--rc genhtml_legend=1
00:23:42.306  		--rc geninfo_all_blocks=1
00:23:42.306  		--rc geninfo_unexecuted_blocks=1
00:23:42.306  		
00:23:42.306  		'
00:23:42.306    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:23:42.306  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:42.306  		--rc genhtml_branch_coverage=1
00:23:42.306  		--rc genhtml_function_coverage=1
00:23:42.306  		--rc genhtml_legend=1
00:23:42.306  		--rc geninfo_all_blocks=1
00:23:42.306  		--rc geninfo_unexecuted_blocks=1
00:23:42.306  		
00:23:42.306  		'
00:23:42.306    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:23:42.307  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:42.307  		--rc genhtml_branch_coverage=1
00:23:42.307  		--rc genhtml_function_coverage=1
00:23:42.307  		--rc genhtml_legend=1
00:23:42.307  		--rc geninfo_all_blocks=1
00:23:42.307  		--rc geninfo_unexecuted_blocks=1
00:23:42.307  		
00:23:42.307  		'
00:23:42.307   04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:23:42.307     04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:23:42.307    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:23:42.307    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:23:42.307    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:23:42.307    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:23:42.307    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:23:42.307    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:23:42.307    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:23:42.307    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:23:42.307    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:23:42.307     04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:23:42.307    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:23:42.307    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:23:42.307    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:23:42.307    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:23:42.307    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:23:42.307    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:23:42.307    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:23:42.307     04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob
00:23:42.307     04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:23:42.307     04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:23:42.307     04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:23:42.307      04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:42.307      04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:42.307      04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:42.307      04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH
00:23:42.307      04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:42.307    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0
00:23:42.307    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:23:42.307    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:23:42.307    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:23:42.307    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:23:42.307    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:23:42.307    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:23:42.307  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:23:42.307    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:23:42.307    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:23:42.307    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0
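The "integer expression expected" error above (common.sh line 33) comes from feeding an empty variable to `[ ... -eq 1 ]`. A minimal standalone reproduction of that pitfall, with one hedged fix (defaulting the operand; this is an illustration, not SPDK's actual remediation):

```shell
#!/usr/bin/env bash
VAR=""
# Broken form (as traced at common.sh@33): an empty operand is not an
# integer, so [ prints "integer expression expected" and exits non-zero.
[ "$VAR" -eq 1 ] 2>/dev/null && echo yes || echo no
# Safer form: supply a numeric default so the test never sees an empty operand.
[ "${VAR:-0}" -eq 1 ] && echo yes || echo no
```

Both lines print `no` here; the difference is that the first form also emits the error the log shows, while the second stays quiet.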
00:23:42.307   04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
00:23:42.307   04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
00:23:42.307   04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
00:23:42.307   04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
00:23:42.307   04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
00:23:42.307   04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock
00:23:42.307   04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit
00:23:42.307   04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:23:42.307   04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:23:42.307   04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs
00:23:42.307   04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no
00:23:42.307   04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns
00:23:42.307   04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:42.307   04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:42.307    04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:42.307   04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:23:42.307   04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:23:42.307   04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable
00:23:42.307   04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=()
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=()
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=()
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=()
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=()
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=()
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=()
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:23:44.839  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:23:44.839  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:23:44.839   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]]
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:23:44.840  Found net devices under 0000:0a:00.0: cvl_0_0
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]]
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:23:44.840  Found net devices under 0000:0a:00.1: cvl_0_1
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:23:44.840   04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:23:44.840   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:23:44.840   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:23:44.840   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:23:44.840   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:23:44.840   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:44.840   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:44.840   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:23:44.840   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:23:44.840  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:44.840  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms
00:23:44.840  
00:23:44.840  --- 10.0.0.2 ping statistics ---
00:23:44.840  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:44.840  rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms
00:23:44.840   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:44.840  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:44.840  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms
00:23:44.840  
00:23:44.840  --- 10.0.0.1 ping statistics ---
00:23:44.840  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:44.840  rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms
00:23:44.840   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:44.840   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0
00:23:44.840   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:23:44.840   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:44.840   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:23:44.840   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:23:44.840   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:44.840   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:23:44.840   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:23:44.840   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2
00:23:44.840   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:23:44.840   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable
00:23:44.840   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:23:44.840   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=317420
00:23:44.840   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:23:44.840   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 317420
00:23:44.840   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 317420 ']'
00:23:44.840   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:44.840   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:44.840   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:44.840  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:44.840   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:44.840   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
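The `waitforlisten` helper above polls (with `max_retries=100`, per the trace) until the target's RPC socket appears at /var/tmp/spdk.sock. A rough Python equivalent of that polling loop — a sketch only, since the real bash helper also verifies the PID is still alive between retries:

```python
import os
import time


def wait_for_listen(sock_path: str, max_retries: int = 100,
                    delay: float = 0.1) -> bool:
    """Poll until a UNIX domain socket path exists, or give up.

    Mirrors the retry loop of autotest_common.sh's waitforlisten;
    delay and return convention are illustrative assumptions.
    """
    for _ in range(max_retries):
        if os.path.exists(sock_path):
            return True
        time.sleep(delay)
    return False
```

Polling for the socket file (rather than connecting immediately) avoids racing the freshly forked `nvmf_tgt` before it binds its RPC listener.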
00:23:44.840  [2024-12-09 04:14:13.261179] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:23:44.840  [2024-12-09 04:14:13.261247] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:44.840  [2024-12-09 04:14:13.331914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:44.840  [2024-12-09 04:14:13.387901] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:44.840  [2024-12-09 04:14:13.387955] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:44.840  [2024-12-09 04:14:13.387977] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:44.840  [2024-12-09 04:14:13.387988] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:44.840  [2024-12-09 04:14:13.387998] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:44.840  [2024-12-09 04:14:13.388634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:23:45.099   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:45.099   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0
00:23:45.099   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:23:45.099   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable
00:23:45.099   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:23:45.099   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:45.099   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd
00:23:45.099   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:45.099   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:23:45.099  [2024-12-09 04:14:13.533903] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:45.099  [2024-12-09 04:14:13.542083] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:23:45.099  null0
00:23:45.099  [2024-12-09 04:14:13.574031] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:45.099   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:45.099   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=317449
00:23:45.099   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme
00:23:45.099   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 317449 /tmp/host.sock
00:23:45.099   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 317449 ']'
00:23:45.099   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock
00:23:45.099   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:45.099   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:23:45.099  Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:23:45.099   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:45.099   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:23:45.099  [2024-12-09 04:14:13.642165] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:23:45.099  [2024-12-09 04:14:13.642257] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid317449 ]
00:23:45.357  [2024-12-09 04:14:13.714312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:45.357  [2024-12-09 04:14:13.771314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:45.357   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:45.357   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0
00:23:45.357   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:23:45.357   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
00:23:45.357   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:45.357   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:23:45.357   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:45.357   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init
00:23:45.357   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:45.357   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:23:45.636   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:45.636   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach
00:23:45.636   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:45.637   04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:23:46.569  [2024-12-09 04:14:15.041388] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:23:46.569  [2024-12-09 04:14:15.041421] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:23:46.569  [2024-12-09 04:14:15.041445] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:23:46.569  [2024-12-09 04:14:15.127757] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:23:46.826  [2024-12-09 04:14:15.342949] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420
00:23:46.826  [2024-12-09 04:14:15.344097] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xb4f650:1 started.
00:23:46.826  [2024-12-09 04:14:15.345817] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0
00:23:46.826  [2024-12-09 04:14:15.345877] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0
00:23:46.826  [2024-12-09 04:14:15.345914] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0
00:23:46.826  [2024-12-09 04:14:15.345938] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:23:46.826  [2024-12-09 04:14:15.345978] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:23:46.826   04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:46.826   04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1
00:23:46.826    04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:23:46.826    04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:46.826  [2024-12-09 04:14:15.350375] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xb4f650 was disconnected and freed. delete nvme_qpair.
00:23:46.826    04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:23:46.826    04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:46.826    04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:23:46.826    04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:23:46.826    04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:23:46.826    04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:46.826   04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]]
00:23:46.826   04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
00:23:46.826   04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
00:23:47.083   04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev ''
00:23:47.083    04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:23:47.083    04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:47.083    04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:23:47.083    04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:47.083    04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:23:47.083    04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:23:47.083    04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:23:47.083    04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:47.083   04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:23:47.083   04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:23:48.011    04:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:23:48.011    04:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:48.011    04:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:23:48.011    04:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:48.011    04:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:23:48.011    04:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:23:48.011    04:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:23:48.011    04:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:48.011   04:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:23:48.011   04:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:23:49.386    04:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:23:49.386    04:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:49.386    04:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:49.386    04:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:23:49.386    04:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:23:49.386    04:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:23:49.386    04:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:23:49.386    04:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:49.386   04:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:23:49.386   04:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:23:50.468    04:14:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:23:50.468    04:14:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:50.468    04:14:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:23:50.468    04:14:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:50.468    04:14:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:23:50.468    04:14:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:23:50.468    04:14:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:23:50.468    04:14:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:50.468   04:14:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:23:50.468   04:14:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:23:51.157    04:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:23:51.157    04:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:51.157    04:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:23:51.157    04:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:51.157    04:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:23:51.157    04:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:23:51.157    04:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:23:51.157    04:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:51.157   04:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:23:51.157   04:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:23:52.176    04:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:23:52.176    04:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:52.176    04:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:23:52.176    04:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:52.176    04:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:23:52.176    04:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:23:52.176    04:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:23:52.176    04:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:52.176   04:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:23:52.176   04:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:23:52.435  [2024-12-09 04:14:20.787192] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out
00:23:52.435  [2024-12-09 04:14:20.787288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:52.435  [2024-12-09 04:14:20.787320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:52.435  [2024-12-09 04:14:20.787338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:52.435  [2024-12-09 04:14:20.787351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:52.435  [2024-12-09 04:14:20.787364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:52.435  [2024-12-09 04:14:20.787378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:52.435  [2024-12-09 04:14:20.787392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:52.435  [2024-12-09 04:14:20.787405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:52.435  [2024-12-09 04:14:20.787419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:52.435  [2024-12-09 04:14:20.787432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:52.435  [2024-12-09 04:14:20.787456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2be90 is same with the state(6) to be set
00:23:52.435  [2024-12-09 04:14:20.797212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2be90 (9): Bad file descriptor
00:23:52.435  [2024-12-09 04:14:20.807267] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:23:52.435  [2024-12-09 04:14:20.807297] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:23:52.435  [2024-12-09 04:14:20.807311] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:23:52.435  [2024-12-09 04:14:20.807326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:23:52.435  [2024-12-09 04:14:20.807376] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:23:53.367    04:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:23:53.367    04:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:53.367    04:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:23:53.367    04:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:53.367    04:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:23:53.367    04:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:23:53.367    04:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:23:53.367  [2024-12-09 04:14:21.817299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110
00:23:53.367  [2024-12-09 04:14:21.817344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2be90 with addr=10.0.0.2, port=4420
00:23:53.367  [2024-12-09 04:14:21.817363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2be90 is same with the state(6) to be set
00:23:53.367  [2024-12-09 04:14:21.817391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2be90 (9): Bad file descriptor
00:23:53.367  [2024-12-09 04:14:21.817818] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress.
00:23:53.367  [2024-12-09 04:14:21.817855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:23:53.367  [2024-12-09 04:14:21.817871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:23:53.367  [2024-12-09 04:14:21.817887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:23:53.367  [2024-12-09 04:14:21.817901] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:23:53.367  [2024-12-09 04:14:21.817911] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:23:53.367  [2024-12-09 04:14:21.817919] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:23:53.367  [2024-12-09 04:14:21.817932] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:23:53.367  [2024-12-09 04:14:21.817941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:23:53.367    04:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:53.367   04:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:23:53.367   04:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:23:54.301  [2024-12-09 04:14:22.820438] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:23:54.301  [2024-12-09 04:14:22.820503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:23:54.301  [2024-12-09 04:14:22.820538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:23:54.301  [2024-12-09 04:14:22.820567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:23:54.301  [2024-12-09 04:14:22.820581] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state
00:23:54.301  [2024-12-09 04:14:22.820595] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:23:54.301  [2024-12-09 04:14:22.820632] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:23:54.301  [2024-12-09 04:14:22.820640] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:23:54.301  [2024-12-09 04:14:22.820702] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420
00:23:54.301  [2024-12-09 04:14:22.820781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:54.301  [2024-12-09 04:14:22.820804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:54.301  [2024-12-09 04:14:22.820823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:54.301  [2024-12-09 04:14:22.820837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:54.301  [2024-12-09 04:14:22.820853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:54.301  [2024-12-09 04:14:22.820867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:54.301  [2024-12-09 04:14:22.820881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:54.301  [2024-12-09 04:14:22.820894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:54.301  [2024-12-09 04:14:22.820908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:54.301  [2024-12-09 04:14:22.820920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:54.301  [2024-12-09 04:14:22.820933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state.
00:23:54.301  [2024-12-09 04:14:22.820990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb1b5e0 (9): Bad file descriptor
00:23:54.301  [2024-12-09 04:14:22.821975] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command
00:23:54.301  [2024-12-09 04:14:22.821996] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register
00:23:54.301    04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:23:54.301    04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:54.301    04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:23:54.301    04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:54.301    04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:23:54.301    04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:23:54.301    04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:23:54.301    04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:54.560   04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]]
00:23:54.560   04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:23:54.560   04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:23:54.560   04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1
00:23:54.560    04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:23:54.560    04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:54.560    04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:23:54.560    04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:54.560    04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:23:54.560    04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:23:54.560    04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:23:54.560    04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:54.560   04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]]
00:23:54.560   04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:23:55.494    04:14:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:23:55.494    04:14:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:55.494    04:14:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:23:55.494    04:14:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:55.494    04:14:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:23:55.494    04:14:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:23:55.494    04:14:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:23:55.494    04:14:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:55.494   04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]]
00:23:55.494   04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:23:56.429  [2024-12-09 04:14:24.877914] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:23:56.429  [2024-12-09 04:14:24.877949] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:23:56.429  [2024-12-09 04:14:24.877973] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:23:56.685  [2024-12-09 04:14:25.007368] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1
00:23:56.685    04:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:23:56.685    04:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:56.685    04:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:23:56.685    04:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:56.685    04:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:23:56.685    04:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:23:56.685    04:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:23:56.685    04:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:56.685   04:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]]
00:23:56.685   04:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:23:56.685  [2024-12-09 04:14:25.229585] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420
00:23:56.686  [2024-12-09 04:14:25.230429] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xb58f60:1 started.
00:23:56.686  [2024-12-09 04:14:25.231791] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0
00:23:56.686  [2024-12-09 04:14:25.231834] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0
00:23:56.686  [2024-12-09 04:14:25.231864] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0
00:23:56.686  [2024-12-09 04:14:25.231887] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done
00:23:56.686  [2024-12-09 04:14:25.231901] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:23:56.686  [2024-12-09 04:14:25.236483] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xb58f60 was disconnected and freed. delete nvme_qpair.
00:23:57.616    04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:23:57.616    04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:57.616    04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:23:57.616    04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:57.616    04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:23:57.616    04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:23:57.616    04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:23:57.616    04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:57.616   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]]
00:23:57.616   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT
00:23:57.616   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 317449
00:23:57.616   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 317449 ']'
00:23:57.616   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 317449
00:23:57.616    04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname
00:23:57.616   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:57.616    04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 317449
00:23:57.616   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:23:57.616   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:23:57.616   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 317449'
00:23:57.616  killing process with pid 317449
00:23:57.616   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 317449
00:23:57.616   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 317449
00:23:57.873   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini
00:23:57.873   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:57.873   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync
00:23:57.873   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:57.873   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e
00:23:57.873   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:57.873   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:23:57.873  rmmod nvme_tcp
00:23:57.873  rmmod nvme_fabrics
00:23:57.873  rmmod nvme_keyring
00:23:57.873   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:57.873   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e
00:23:57.873   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0
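The cleanup trace above retries module removal with errexit suspended (`set +e` … loop … `set -e` … `return 0`), so a module that still holds references cannot abort the teardown. A minimal sketch of that pattern (generalized into a `retry` helper; the helper name is illustrative, not from the script):

```shell
#!/usr/bin/env bash
# Pattern from the nvmfcleanup trace: retry a flaky operation under
# `set +e` so a transient failure does not abort an errexit script.
retry() {
    local i
    set +e                       # suspend errexit around the flaky command
    for i in {1..20}; do
        "$@" && break            # stop as soon as the command succeeds
    done
    set -e                       # restore errexit for the rest of the script
    return 0                     # cleanup is best-effort; never fail the caller
}

# In the log this wraps the module unloads, e.g.:
#   retry modprobe -v -r nvme-tcp
#   retry modprobe -v -r nvme-fabrics
```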
00:23:57.873   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 317420 ']'
00:23:57.873   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 317420
00:23:57.873   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 317420 ']'
00:23:57.873   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 317420
00:23:57.873    04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname
00:23:57.873   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:57.873    04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 317420
00:23:57.873   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:23:57.873   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:23:57.873   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 317420'
00:23:57.873  killing process with pid 317420
00:23:57.873   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 317420
00:23:57.873   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 317420
00:23:58.130   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:23:58.130   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:23:58.130   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:23:58.130   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr
00:23:58.130   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save
00:23:58.130   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:23:58.130   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore
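The `iptr` step above removes only the firewall rules the test suite added: every rule is inserted (via the `ipts` wrapper, seen later in this log) with an `SPDK_NVMF` comment, and teardown round-trips the ruleset through `iptables-save | grep -v SPDK_NVMF | iptables-restore`. A minimal sketch of the filtering step, demonstrated on a captured ruleset string rather than live iptables (the sample rules are illustrative):

```shell
#!/usr/bin/env bash
# Tag-and-filter teardown: rules inserted with a recognizable comment
# (-m comment --comment 'SPDK_NVMF:...') can be dropped wholesale by
# filtering the saved ruleset before restoring it.
ruleset='-A INPUT -i cvl_0_1 -p tcp --dport 4420 -m comment --comment "SPDK_NVMF:accept 4420" -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT'

# Equivalent to: iptables-save | grep -v SPDK_NVMF | iptables-restore
printf '%s\n' "$ruleset" | grep -v SPDK_NVMF
```

Only the untagged rule survives the filter; the live version feeds the result straight into `iptables-restore`.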
00:23:58.130   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:23:58.130   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns
00:23:58.130   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:58.130   04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:58.130    04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:00.664   04:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:00.664  
00:24:00.664  real	0m18.141s
00:24:00.664  user	0m26.092s
00:24:00.664  sys	0m3.108s
00:24:00.664   04:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:00.664   04:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:00.664  ************************************
00:24:00.664  END TEST nvmf_discovery_remove_ifc
00:24:00.664  ************************************
00:24:00.664   04:14:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp
00:24:00.664   04:14:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:24:00.664   04:14:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:00.664   04:14:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:24:00.664  ************************************
00:24:00.664  START TEST nvmf_identify_kernel_target
00:24:00.664  ************************************
00:24:00.664   04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp
00:24:00.664  * Looking for test storage...
00:24:00.664  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:24:00.664     04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version
00:24:00.664     04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-:
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-:
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<'
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 ))
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:24:00.664     04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1
00:24:00.664     04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1
00:24:00.664     04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:24:00.664     04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1
00:24:00.664     04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2
00:24:00.664     04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2
00:24:00.664     04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:24:00.664     04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0
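The trace above walks scripts/common.sh's version comparison for `lt 1.15 2`: both versions are split into arrays with `IFS=.-:` and compared component by component as integers. A standalone sketch of that algorithm (a simplification handling the less-than case only; missing components default to 0):

```shell
#!/usr/bin/env bash
# Component-wise numeric version comparison, as in the cmp_versions trace:
# split on '.', '-', ':' and compare each field as an integer.
version_lt() {
    local IFS=.-:
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        # Missing components compare as 0, so 1.15 behaves like 1.15.0.
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1    # equal is not less-than
}
```

`version_lt 1.15 2` succeeds (15 is never reached because 1 < 2 decides at the first field), which is why the trace takes the `lt` branch.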
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:24:00.664  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:00.664  		--rc genhtml_branch_coverage=1
00:24:00.664  		--rc genhtml_function_coverage=1
00:24:00.664  		--rc genhtml_legend=1
00:24:00.664  		--rc geninfo_all_blocks=1
00:24:00.664  		--rc geninfo_unexecuted_blocks=1
00:24:00.664  		
00:24:00.664  		'
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:24:00.664  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:00.664  		--rc genhtml_branch_coverage=1
00:24:00.664  		--rc genhtml_function_coverage=1
00:24:00.664  		--rc genhtml_legend=1
00:24:00.664  		--rc geninfo_all_blocks=1
00:24:00.664  		--rc geninfo_unexecuted_blocks=1
00:24:00.664  		
00:24:00.664  		'
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:24:00.664  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:00.664  		--rc genhtml_branch_coverage=1
00:24:00.664  		--rc genhtml_function_coverage=1
00:24:00.664  		--rc genhtml_legend=1
00:24:00.664  		--rc geninfo_all_blocks=1
00:24:00.664  		--rc geninfo_unexecuted_blocks=1
00:24:00.664  		
00:24:00.664  		'
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:24:00.664  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:00.664  		--rc genhtml_branch_coverage=1
00:24:00.664  		--rc genhtml_function_coverage=1
00:24:00.664  		--rc genhtml_legend=1
00:24:00.664  		--rc geninfo_all_blocks=1
00:24:00.664  		--rc geninfo_unexecuted_blocks=1
00:24:00.664  		
00:24:00.664  		'
00:24:00.664   04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:24:00.664     04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:00.664     04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:24:00.664    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:24:00.664     04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob
00:24:00.664     04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:00.664     04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:00.664     04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:00.664      04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:00.664      04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:00.664      04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:00.664      04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH
00:24:00.665      04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
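paths/export.sh prepends its toolchain directories on every source, so the `echo` above shows /opt/protoc, /opt/go, and /opt/golangci repeated many times over in PATH. The duplicates are harmless but noisy; a minimal dedup sketch, preserving first-seen order (the helper name is illustrative):

```shell
#!/usr/bin/env bash
# Collapse repeated PATH entries, keeping the first occurrence of each.
dedup_path() {
    local IFS=: entry out=
    local -A seen=()
    for entry in $1; do                  # unquoted: split on ':'
        [[ -n ${seen[$entry]} ]] && continue
        seen[$entry]=1
        out+=${out:+:}$entry             # ':' separator after the first entry
    done
    printf '%s\n' "$out"
}

# e.g. PATH=$(dedup_path "$PATH")
```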
00:24:00.665    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0
00:24:00.665    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:24:00.665    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:24:00.665    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:24:00.665    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:24:00.665    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:24:00.665    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:24:00.665  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:24:00.665    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:24:00.665    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:24:00.665    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0
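Note the error captured just above: nvmf/common.sh line 33 runs `'[' '' -eq 1 ']'`, applying `-eq` to an empty string and producing "integer expression expected" (the script survives only because the failed test falls through as false). The usual defensive fix is to default the variable before the numeric test; a minimal sketch with a hypothetical variable name:

```shell
#!/usr/bin/env bash
# `[ "$flag" -eq 1 ]` errors when $flag is empty or unset; supplying a
# default keeps the operand numeric.
flag=''                           # hypothetical variable mirroring the log's empty value
if [ "${flag:-0}" -eq 1 ]; then   # empty expands to 0, so -eq stays valid
    echo "enabled"
else
    echo "disabled"
fi
```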
00:24:00.665   04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit
00:24:00.665   04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:24:00.665   04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:24:00.665   04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs
00:24:00.665   04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no
00:24:00.665   04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns
00:24:00.665   04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:00.665   04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:00.665    04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:00.665   04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:24:00.665   04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:24:00.665   04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable
00:24:00.665   04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=()
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=()
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=()
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=()
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=()
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=()
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=()
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:24:02.568  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:24:02.568  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:24:02.568  Found net devices under 0000:0a:00.0: cvl_0_0
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:24:02.568  Found net devices under 0000:0a:00.1: cvl_0_1
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:24:02.568   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:24:02.569   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:24:02.569   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:24:02.569   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:02.569   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:02.569   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:24:02.569   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:24:02.569   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:24:02.569   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:02.828   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:02.828   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:02.828   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:24:02.828   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:02.828   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:02.829   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:02.829   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:24:02.829   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:24:02.829  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:02.829  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.409 ms
00:24:02.829  
00:24:02.829  --- 10.0.0.2 ping statistics ---
00:24:02.829  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:02.829  rtt min/avg/max/mdev = 0.409/0.409/0.409/0.000 ms
00:24:02.829   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:02.829  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:02.829  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms
00:24:02.829  
00:24:02.829  --- 10.0.0.1 ping statistics ---
00:24:02.829  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:02.829  rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms
00:24:02.829   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:02.829   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0
00:24:02.829   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:24:02.829   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:02.829   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:24:02.829   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:24:02.829   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:02.829   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:24:02.829   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:24:02.829   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT
00:24:02.829    04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip
00:24:02.829    04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip
00:24:02.829    04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:02.829    04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:02.829    04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:02.829    04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:02.829    04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:02.829    04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:02.829    04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:02.829    04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:02.829    04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:02.829   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1
00:24:02.829   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1
00:24:02.829   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1
00:24:02.829   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet
00:24:02.829   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:24:02.829   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:24:02.829   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:24:02.829   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme
00:24:02.829   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]]
00:24:02.829   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet
00:24:02.829   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:24:02.829   04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:24:04.203  Waiting for block devices as requested
00:24:04.203  0000:88:00.0 (8086 0a54): vfio-pci -> nvme
00:24:04.203  0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:24:04.204  0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:24:04.461  0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:24:04.461  0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:24:04.461  0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:24:04.461  0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:24:04.720  0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:24:04.720  0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:24:04.720  0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:24:04.979  0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:24:04.979  0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:24:04.979  0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:24:04.979  0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:24:05.237  0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:24:05.237  0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:24:05.237  0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:24:05.496   04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:24:05.496   04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:24:05.496   04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:24:05.496   04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:24:05.496   04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:24:05.496   04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:24:05.496   04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:24:05.496   04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:24:05.496   04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:24:05.496  No valid GPT data, bailing
00:24:05.496    04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:24:05.496   04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt=
00:24:05.496   04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1
00:24:05.496   04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:24:05.496   04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]]
00:24:05.496   04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:24:05.496   04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:24:05.496   04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:24:05.496   04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:24:05.496   04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1
00:24:05.496   04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:24:05.496   04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1
00:24:05.496   04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:24:05.496   04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp
00:24:05.496   04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420
00:24:05.496   04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4
00:24:05.496   04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:24:05.496   04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420
00:24:05.496  
00:24:05.496  Discovery Log Number of Records 2, Generation counter 2
00:24:05.496  =====Discovery Log Entry 0======
00:24:05.496  trtype:  tcp
00:24:05.496  adrfam:  ipv4
00:24:05.496  subtype: current discovery subsystem
00:24:05.496  treq:    not specified, sq flow control disable supported
00:24:05.496  portid:  1
00:24:05.496  trsvcid: 4420
00:24:05.496  subnqn:  nqn.2014-08.org.nvmexpress.discovery
00:24:05.496  traddr:  10.0.0.1
00:24:05.496  eflags:  none
00:24:05.496  sectype: none
00:24:05.496  =====Discovery Log Entry 1======
00:24:05.496  trtype:  tcp
00:24:05.496  adrfam:  ipv4
00:24:05.496  subtype: nvme subsystem
00:24:05.496  treq:    not specified, sq flow control disable supported
00:24:05.496  portid:  1
00:24:05.496  trsvcid: 4420
00:24:05.496  subnqn:  nqn.2016-06.io.spdk:testnqn
00:24:05.496  traddr:  10.0.0.1
00:24:05.496  eflags:  none
00:24:05.496  sectype: none
00:24:05.496   04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r '	trtype:tcp 	adrfam:IPv4 	traddr:10.0.0.1
00:24:05.496  	trsvcid:4420 	subnqn:nqn.2014-08.org.nvmexpress.discovery'
00:24:05.757  =====================================================
00:24:05.757  NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery
00:24:05.757  =====================================================
00:24:05.757  Controller Capabilities/Features
00:24:05.757  ================================
00:24:05.757  Vendor ID:                             0000
00:24:05.757  Subsystem Vendor ID:                   0000
00:24:05.757  Serial Number:                         bc9bbce0182711bd5c2b
00:24:05.757  Model Number:                          Linux
00:24:05.757  Firmware Version:                      6.8.9-20
00:24:05.757  Recommended Arb Burst:                 0
00:24:05.757  IEEE OUI Identifier:                   00 00 00
00:24:05.757  Multi-path I/O
00:24:05.757    May have multiple subsystem ports:   No
00:24:05.757    May have multiple controllers:       No
00:24:05.757    Associated with SR-IOV VF:           No
00:24:05.757  Max Data Transfer Size:                Unlimited
00:24:05.757  Max Number of Namespaces:              0
00:24:05.757  Max Number of I/O Queues:              1024
00:24:05.757  NVMe Specification Version (VS):       1.3
00:24:05.757  NVMe Specification Version (Identify): 1.3
00:24:05.757  Maximum Queue Entries:                 1024
00:24:05.757  Contiguous Queues Required:            No
00:24:05.757  Arbitration Mechanisms Supported
00:24:05.757    Weighted Round Robin:                Not Supported
00:24:05.757    Vendor Specific:                     Not Supported
00:24:05.757  Reset Timeout:                         7500 ms
00:24:05.757  Doorbell Stride:                       4 bytes
00:24:05.757  NVM Subsystem Reset:                   Not Supported
00:24:05.757  Command Sets Supported
00:24:05.757    NVM Command Set:                     Supported
00:24:05.757  Boot Partition:                        Not Supported
00:24:05.757  Memory Page Size Minimum:              4096 bytes
00:24:05.757  Memory Page Size Maximum:              4096 bytes
00:24:05.757  Persistent Memory Region:              Not Supported
00:24:05.757  Optional Asynchronous Events Supported
00:24:05.757    Namespace Attribute Notices:         Not Supported
00:24:05.757    Firmware Activation Notices:         Not Supported
00:24:05.757    ANA Change Notices:                  Not Supported
00:24:05.757    PLE Aggregate Log Change Notices:    Not Supported
00:24:05.757    LBA Status Info Alert Notices:       Not Supported
00:24:05.757    EGE Aggregate Log Change Notices:    Not Supported
00:24:05.757    Normal NVM Subsystem Shutdown event: Not Supported
00:24:05.757    Zone Descriptor Change Notices:      Not Supported
00:24:05.757    Discovery Log Change Notices:        Supported
00:24:05.757  Controller Attributes
00:24:05.757    128-bit Host Identifier:             Not Supported
00:24:05.757    Non-Operational Permissive Mode:     Not Supported
00:24:05.757    NVM Sets:                            Not Supported
00:24:05.757    Read Recovery Levels:                Not Supported
00:24:05.757    Endurance Groups:                    Not Supported
00:24:05.757    Predictable Latency Mode:            Not Supported
00:24:05.757    Traffic Based Keep ALive:            Not Supported
00:24:05.757    Namespace Granularity:               Not Supported
00:24:05.757    SQ Associations:                     Not Supported
00:24:05.757    UUID List:                           Not Supported
00:24:05.757    Multi-Domain Subsystem:              Not Supported
00:24:05.757    Fixed Capacity Management:           Not Supported
00:24:05.757    Variable Capacity Management:        Not Supported
00:24:05.757    Delete Endurance Group:              Not Supported
00:24:05.757    Delete NVM Set:                      Not Supported
00:24:05.757    Extended LBA Formats Supported:      Not Supported
00:24:05.757    Flexible Data Placement Supported:   Not Supported
00:24:05.757  
00:24:05.757  Controller Memory Buffer Support
00:24:05.757  ================================
00:24:05.757  Supported:                             No
00:24:05.757  
00:24:05.757  Persistent Memory Region Support
00:24:05.757  ================================
00:24:05.757  Supported:                             No
00:24:05.757  
00:24:05.757  Admin Command Set Attributes
00:24:05.757  ============================
00:24:05.757  Security Send/Receive:                 Not Supported
00:24:05.757  Format NVM:                            Not Supported
00:24:05.757  Firmware Activate/Download:            Not Supported
00:24:05.757  Namespace Management:                  Not Supported
00:24:05.757  Device Self-Test:                      Not Supported
00:24:05.757  Directives:                            Not Supported
00:24:05.757  NVMe-MI:                               Not Supported
00:24:05.757  Virtualization Management:             Not Supported
00:24:05.757  Doorbell Buffer Config:                Not Supported
00:24:05.757  Get LBA Status Capability:             Not Supported
00:24:05.757  Command & Feature Lockdown Capability: Not Supported
00:24:05.757  Abort Command Limit:                   1
00:24:05.757  Async Event Request Limit:             1
00:24:05.757  Number of Firmware Slots:              N/A
00:24:05.757  Firmware Slot 1 Read-Only:             N/A
00:24:05.757  Firmware Activation Without Reset:     N/A
00:24:05.757  Multiple Update Detection Support:     N/A
00:24:05.757  Firmware Update Granularity:           No Information Provided
00:24:05.757  Per-Namespace SMART Log:               No
00:24:05.757  Asymmetric Namespace Access Log Page:  Not Supported
00:24:05.757  Subsystem NQN:                         nqn.2014-08.org.nvmexpress.discovery
00:24:05.757  Command Effects Log Page:              Not Supported
00:24:05.757  Get Log Page Extended Data:            Supported
00:24:05.757  Telemetry Log Pages:                   Not Supported
00:24:05.757  Persistent Event Log Pages:            Not Supported
00:24:05.757  Supported Log Pages Log Page:          May Support
00:24:05.758  Commands Supported & Effects Log Page: Not Supported
00:24:05.758  Feature Identifiers & Effects Log Page:May Support
00:24:05.758  NVMe-MI Commands & Effects Log Page:   May Support
00:24:05.758  Data Area 4 for Telemetry Log:         Not Supported
00:24:05.758  Error Log Page Entries Supported:      1
00:24:05.758  Keep Alive:                            Not Supported
00:24:05.758  
00:24:05.758  NVM Command Set Attributes
00:24:05.758  ==========================
00:24:05.758  Submission Queue Entry Size
00:24:05.758    Max:                       1
00:24:05.758    Min:                       1
00:24:05.758  Completion Queue Entry Size
00:24:05.758    Max:                       1
00:24:05.758    Min:                       1
00:24:05.758  Number of Namespaces:        0
00:24:05.758  Compare Command:             Not Supported
00:24:05.758  Write Uncorrectable Command: Not Supported
00:24:05.758  Dataset Management Command:  Not Supported
00:24:05.758  Write Zeroes Command:        Not Supported
00:24:05.758  Set Features Save Field:     Not Supported
00:24:05.758  Reservations:                Not Supported
00:24:05.758  Timestamp:                   Not Supported
00:24:05.758  Copy:                        Not Supported
00:24:05.758  Volatile Write Cache:        Not Present
00:24:05.758  Atomic Write Unit (Normal):  1
00:24:05.758  Atomic Write Unit (PFail):   1
00:24:05.758  Atomic Compare & Write Unit: 1
00:24:05.758  Fused Compare & Write:       Not Supported
00:24:05.758  Scatter-Gather List
00:24:05.758    SGL Command Set:           Supported
00:24:05.758    SGL Keyed:                 Not Supported
00:24:05.758    SGL Bit Bucket Descriptor: Not Supported
00:24:05.758    SGL Metadata Pointer:      Not Supported
00:24:05.758    Oversized SGL:             Not Supported
00:24:05.758    SGL Metadata Address:      Not Supported
00:24:05.758    SGL Offset:                Supported
00:24:05.758    Transport SGL Data Block:  Not Supported
00:24:05.758  Replay Protected Memory Block:  Not Supported
00:24:05.758  
00:24:05.758  Firmware Slot Information
00:24:05.758  =========================
00:24:05.758  Active slot:                 0
00:24:05.758  
00:24:05.758  
00:24:05.758  Error Log
00:24:05.758  =========
00:24:05.758  
00:24:05.758  Active Namespaces
00:24:05.758  =================
00:24:05.758  Discovery Log Page
00:24:05.758  ==================
00:24:05.758  Generation Counter:                    2
00:24:05.758  Number of Records:                     2
00:24:05.758  Record Format:                         0
00:24:05.758  
00:24:05.758  Discovery Log Entry 0
00:24:05.758  ----------------------
00:24:05.758  Transport Type:                        3 (TCP)
00:24:05.758  Address Family:                        1 (IPv4)
00:24:05.758  Subsystem Type:                        3 (Current Discovery Subsystem)
00:24:05.758  Entry Flags:
00:24:05.758    Duplicate Returned Information:			0
00:24:05.758    Explicit Persistent Connection Support for Discovery: 0
00:24:05.758  Transport Requirements:
00:24:05.758    Secure Channel:                      Not Specified
00:24:05.758  Port ID:                               1 (0x0001)
00:24:05.758  Controller ID:                         65535 (0xffff)
00:24:05.758  Admin Max SQ Size:                     32
00:24:05.758  Transport Service Identifier:          4420
00:24:05.758  NVM Subsystem Qualified Name:          nqn.2014-08.org.nvmexpress.discovery
00:24:05.758  Transport Address:                     10.0.0.1
00:24:05.758  Discovery Log Entry 1
00:24:05.758  ----------------------
00:24:05.758  Transport Type:                        3 (TCP)
00:24:05.758  Address Family:                        1 (IPv4)
00:24:05.758  Subsystem Type:                        2 (NVM Subsystem)
00:24:05.758  Entry Flags:
00:24:05.758    Duplicate Returned Information:			0
00:24:05.758    Explicit Persistent Connection Support for Discovery: 0
00:24:05.758  Transport Requirements:
00:24:05.758    Secure Channel:                      Not Specified
00:24:05.758  Port ID:                               1 (0x0001)
00:24:05.758  Controller ID:                         65535 (0xffff)
00:24:05.758  Admin Max SQ Size:                     32
00:24:05.758  Transport Service Identifier:          4420
00:24:05.758  NVM Subsystem Qualified Name:          nqn.2016-06.io.spdk:testnqn
00:24:05.758  Transport Address:                     10.0.0.1
00:24:05.758   04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r '	trtype:tcp 	adrfam:IPv4 	traddr:10.0.0.1 	trsvcid:4420 	subnqn:nqn.2016-06.io.spdk:testnqn'
00:24:05.758  get_feature(0x01) failed
00:24:05.758  get_feature(0x02) failed
00:24:05.758  get_feature(0x04) failed
00:24:05.758  =====================================================
00:24:05.758  NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:24:05.758  =====================================================
00:24:05.758  Controller Capabilities/Features
00:24:05.758  ================================
00:24:05.758  Vendor ID:                             0000
00:24:05.758  Subsystem Vendor ID:                   0000
00:24:05.758  Serial Number:                         4d4edf5658b0b299facd
00:24:05.758  Model Number:                          SPDK-nqn.2016-06.io.spdk:testnqn
00:24:05.758  Firmware Version:                      6.8.9-20
00:24:05.758  Recommended Arb Burst:                 6
00:24:05.758  IEEE OUI Identifier:                   00 00 00
00:24:05.758  Multi-path I/O
00:24:05.758    May have multiple subsystem ports:   Yes
00:24:05.758    May have multiple controllers:       Yes
00:24:05.758    Associated with SR-IOV VF:           No
00:24:05.758  Max Data Transfer Size:                Unlimited
00:24:05.758  Max Number of Namespaces:              1024
00:24:05.758  Max Number of I/O Queues:              128
00:24:05.758  NVMe Specification Version (VS):       1.3
00:24:05.758  NVMe Specification Version (Identify): 1.3
00:24:05.758  Maximum Queue Entries:                 1024
00:24:05.758  Contiguous Queues Required:            No
00:24:05.758  Arbitration Mechanisms Supported
00:24:05.758    Weighted Round Robin:                Not Supported
00:24:05.758    Vendor Specific:                     Not Supported
00:24:05.758  Reset Timeout:                         7500 ms
00:24:05.758  Doorbell Stride:                       4 bytes
00:24:05.758  NVM Subsystem Reset:                   Not Supported
00:24:05.758  Command Sets Supported
00:24:05.758    NVM Command Set:                     Supported
00:24:05.758  Boot Partition:                        Not Supported
00:24:05.758  Memory Page Size Minimum:              4096 bytes
00:24:05.758  Memory Page Size Maximum:              4096 bytes
00:24:05.758  Persistent Memory Region:              Not Supported
00:24:05.758  Optional Asynchronous Events Supported
00:24:05.758    Namespace Attribute Notices:         Supported
00:24:05.758    Firmware Activation Notices:         Not Supported
00:24:05.758    ANA Change Notices:                  Supported
00:24:05.758    PLE Aggregate Log Change Notices:    Not Supported
00:24:05.758    LBA Status Info Alert Notices:       Not Supported
00:24:05.758    EGE Aggregate Log Change Notices:    Not Supported
00:24:05.758    Normal NVM Subsystem Shutdown event: Not Supported
00:24:05.758    Zone Descriptor Change Notices:      Not Supported
00:24:05.758    Discovery Log Change Notices:        Not Supported
00:24:05.758  Controller Attributes
00:24:05.758    128-bit Host Identifier:             Supported
00:24:05.758    Non-Operational Permissive Mode:     Not Supported
00:24:05.758    NVM Sets:                            Not Supported
00:24:05.758    Read Recovery Levels:                Not Supported
00:24:05.758    Endurance Groups:                    Not Supported
00:24:05.758    Predictable Latency Mode:            Not Supported
00:24:05.758    Traffic Based Keep ALive:            Supported
00:24:05.758    Namespace Granularity:               Not Supported
00:24:05.758    SQ Associations:                     Not Supported
00:24:05.758    UUID List:                           Not Supported
00:24:05.758    Multi-Domain Subsystem:              Not Supported
00:24:05.758    Fixed Capacity Management:           Not Supported
00:24:05.758    Variable Capacity Management:        Not Supported
00:24:05.758    Delete Endurance Group:              Not Supported
00:24:05.758    Delete NVM Set:                      Not Supported
00:24:05.758    Extended LBA Formats Supported:      Not Supported
00:24:05.758    Flexible Data Placement Supported:   Not Supported
00:24:05.758  
00:24:05.758  Controller Memory Buffer Support
00:24:05.758  ================================
00:24:05.758  Supported:                             No
00:24:05.758  
00:24:05.758  Persistent Memory Region Support
00:24:05.758  ================================
00:24:05.758  Supported:                             No
00:24:05.758  
00:24:05.758  Admin Command Set Attributes
00:24:05.758  ============================
00:24:05.758  Security Send/Receive:                 Not Supported
00:24:05.758  Format NVM:                            Not Supported
00:24:05.758  Firmware Activate/Download:            Not Supported
00:24:05.758  Namespace Management:                  Not Supported
00:24:05.758  Device Self-Test:                      Not Supported
00:24:05.758  Directives:                            Not Supported
00:24:05.758  NVMe-MI:                               Not Supported
00:24:05.758  Virtualization Management:             Not Supported
00:24:05.758  Doorbell Buffer Config:                Not Supported
00:24:05.758  Get LBA Status Capability:             Not Supported
00:24:05.758  Command & Feature Lockdown Capability: Not Supported
00:24:05.758  Abort Command Limit:                   4
00:24:05.758  Async Event Request Limit:             4
00:24:05.758  Number of Firmware Slots:              N/A
00:24:05.758  Firmware Slot 1 Read-Only:             N/A
00:24:05.758  Firmware Activation Without Reset:     N/A
00:24:05.758  Multiple Update Detection Support:     N/A
00:24:05.758  Firmware Update Granularity:           No Information Provided
00:24:05.758  Per-Namespace SMART Log:               Yes
00:24:05.758  Asymmetric Namespace Access Log Page:  Supported
00:24:05.758  ANA Transition Time                 :  10 sec
00:24:05.758  
00:24:05.758  Asymmetric Namespace Access Capabilities
00:24:05.758    ANA Optimized State               : Supported
00:24:05.758    ANA Non-Optimized State           : Supported
00:24:05.758    ANA Inaccessible State            : Supported
00:24:05.758    ANA Persistent Loss State         : Supported
00:24:05.758    ANA Change State                  : Supported
00:24:05.758    ANAGRPID is not changed           : No
00:24:05.758    Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported
00:24:05.758  
00:24:05.758  ANA Group Identifier Maximum        : 128
00:24:05.758  Number of ANA Group Identifiers     : 128
00:24:05.759  Max Number of Allowed Namespaces    : 1024
00:24:05.759  Subsystem NQN:                         nqn.2016-06.io.spdk:testnqn
00:24:05.759  Command Effects Log Page:              Supported
00:24:05.759  Get Log Page Extended Data:            Supported
00:24:05.759  Telemetry Log Pages:                   Not Supported
00:24:05.759  Persistent Event Log Pages:            Not Supported
00:24:05.759  Supported Log Pages Log Page:          May Support
00:24:05.759  Commands Supported & Effects Log Page: Not Supported
00:24:05.759  Feature Identifiers & Effects Log Page:May Support
00:24:05.759  NVMe-MI Commands & Effects Log Page:   May Support
00:24:05.759  Data Area 4 for Telemetry Log:         Not Supported
00:24:05.759  Error Log Page Entries Supported:      128
00:24:05.759  Keep Alive:                            Supported
00:24:05.759  Keep Alive Granularity:                1000 ms
00:24:05.759  
00:24:05.759  NVM Command Set Attributes
00:24:05.759  ==========================
00:24:05.759  Submission Queue Entry Size
00:24:05.759    Max:                       64
00:24:05.759    Min:                       64
00:24:05.759  Completion Queue Entry Size
00:24:05.759    Max:                       16
00:24:05.759    Min:                       16
00:24:05.759  Number of Namespaces:        1024
00:24:05.759  Compare Command:             Not Supported
00:24:05.759  Write Uncorrectable Command: Not Supported
00:24:05.759  Dataset Management Command:  Supported
00:24:05.759  Write Zeroes Command:        Supported
00:24:05.759  Set Features Save Field:     Not Supported
00:24:05.759  Reservations:                Not Supported
00:24:05.759  Timestamp:                   Not Supported
00:24:05.759  Copy:                        Not Supported
00:24:05.759  Volatile Write Cache:        Present
00:24:05.759  Atomic Write Unit (Normal):  1
00:24:05.759  Atomic Write Unit (PFail):   1
00:24:05.759  Atomic Compare & Write Unit: 1
00:24:05.759  Fused Compare & Write:       Not Supported
00:24:05.759  Scatter-Gather List
00:24:05.759    SGL Command Set:           Supported
00:24:05.759    SGL Keyed:                 Not Supported
00:24:05.759    SGL Bit Bucket Descriptor: Not Supported
00:24:05.759    SGL Metadata Pointer:      Not Supported
00:24:05.759    Oversized SGL:             Not Supported
00:24:05.759    SGL Metadata Address:      Not Supported
00:24:05.759    SGL Offset:                Supported
00:24:05.759    Transport SGL Data Block:  Not Supported
00:24:05.759  Replay Protected Memory Block:  Not Supported
00:24:05.759  
00:24:05.759  Firmware Slot Information
00:24:05.759  =========================
00:24:05.759  Active slot:                 0
00:24:05.759  
00:24:05.759  Asymmetric Namespace Access
00:24:05.759  ===========================
00:24:05.759  Change Count                    : 0
00:24:05.759  Number of ANA Group Descriptors : 1
00:24:05.759  ANA Group Descriptor            : 0
00:24:05.759    ANA Group ID                  : 1
00:24:05.759    Number of NSID Values         : 1
00:24:05.759    Change Count                  : 0
00:24:05.759    ANA State                     : 1
00:24:05.759    Namespace Identifier          : 1
00:24:05.759  
00:24:05.759  Commands Supported and Effects
00:24:05.759  ==============================
00:24:05.759  Admin Commands
00:24:05.759  --------------
00:24:05.759                    Get Log Page (02h): Supported 
00:24:05.759                        Identify (06h): Supported 
00:24:05.759                           Abort (08h): Supported 
00:24:05.759                    Set Features (09h): Supported 
00:24:05.759                    Get Features (0Ah): Supported 
00:24:05.759      Asynchronous Event Request (0Ch): Supported 
00:24:05.759                      Keep Alive (18h): Supported 
00:24:05.759  I/O Commands
00:24:05.759  ------------
00:24:05.759                           Flush (00h): Supported 
00:24:05.759                           Write (01h): Supported LBA-Change 
00:24:05.759                            Read (02h): Supported 
00:24:05.759                    Write Zeroes (08h): Supported LBA-Change 
00:24:05.759              Dataset Management (09h): Supported 
00:24:05.759  
00:24:05.759  Error Log
00:24:05.759  =========
00:24:05.759  Entry: 0
00:24:05.759  Error Count:            0x3
00:24:05.759  Submission Queue Id:    0x0
00:24:05.759  Command Id:             0x5
00:24:05.759  Phase Bit:              0
00:24:05.759  Status Code:            0x2
00:24:05.759  Status Code Type:       0x0
00:24:05.759  Do Not Retry:           1
00:24:05.759  Error Location:         0x28
00:24:05.759  LBA:                    0x0
00:24:05.759  Namespace:              0x0
00:24:05.759  Vendor Log Page:        0x0
00:24:05.759  -----------
00:24:05.759  Entry: 1
00:24:05.759  Error Count:            0x2
00:24:05.759  Submission Queue Id:    0x0
00:24:05.759  Command Id:             0x5
00:24:05.759  Phase Bit:              0
00:24:05.759  Status Code:            0x2
00:24:05.759  Status Code Type:       0x0
00:24:05.759  Do Not Retry:           1
00:24:05.759  Error Location:         0x28
00:24:05.759  LBA:                    0x0
00:24:05.759  Namespace:              0x0
00:24:05.759  Vendor Log Page:        0x0
00:24:05.759  -----------
00:24:05.759  Entry: 2
00:24:05.759  Error Count:            0x1
00:24:05.759  Submission Queue Id:    0x0
00:24:05.759  Command Id:             0x4
00:24:05.759  Phase Bit:              0
00:24:05.759  Status Code:            0x2
00:24:05.759  Status Code Type:       0x0
00:24:05.759  Do Not Retry:           1
00:24:05.759  Error Location:         0x28
00:24:05.759  LBA:                    0x0
00:24:05.759  Namespace:              0x0
00:24:05.759  Vendor Log Page:        0x0
00:24:05.759  
00:24:05.759  Number of Queues
00:24:05.759  ================
00:24:05.759  Number of I/O Submission Queues:      128
00:24:05.759  Number of I/O Completion Queues:      128
00:24:05.759  
00:24:05.759  ZNS Specific Controller Data
00:24:05.759  ============================
00:24:05.759  Zone Append Size Limit:      0
00:24:05.759  
00:24:05.759  
00:24:05.759  Active Namespaces
00:24:05.759  =================
00:24:05.759  get_feature(0x05) failed
00:24:05.759  Namespace ID:1
00:24:05.759  Command Set Identifier:                NVM (00h)
00:24:05.759  Deallocate:                            Supported
00:24:05.759  Deallocated/Unwritten Error:           Not Supported
00:24:05.759  Deallocated Read Value:                Unknown
00:24:05.759  Deallocate in Write Zeroes:            Not Supported
00:24:05.759  Deallocated Guard Field:               0xFFFF
00:24:05.759  Flush:                                 Supported
00:24:05.759  Reservation:                           Not Supported
00:24:05.759  Namespace Sharing Capabilities:        Multiple Controllers
00:24:05.759  Size (in LBAs):                        1953525168 (931GiB)
00:24:05.759  Capacity (in LBAs):                    1953525168 (931GiB)
00:24:05.759  Utilization (in LBAs):                 1953525168 (931GiB)
00:24:05.759  UUID:                                  a5e21125-6bb7-4879-a751-d78ff456168f
00:24:05.759  Thin Provisioning:                     Not Supported
00:24:05.759  Per-NS Atomic Units:                   Yes
00:24:05.759    Atomic Boundary Size (Normal):       0
00:24:05.759    Atomic Boundary Size (PFail):        0
00:24:05.759    Atomic Boundary Offset:              0
00:24:05.759  NGUID/EUI64 Never Reused:              No
00:24:05.759  ANA group ID:                          1
00:24:05.759  Namespace Write Protected:             No
00:24:05.759  Number of LBA Formats:                 1
00:24:05.759  Current LBA Format:                    LBA Format #00
00:24:05.759  LBA Format #00: Data Size:   512  Metadata Size:     0
00:24:05.759  
00:24:05.759   04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini
00:24:05.759   04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:05.759   04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync
00:24:05.759   04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:05.759   04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e
00:24:05.759   04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:05.759   04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:05.759  rmmod nvme_tcp
00:24:05.759  rmmod nvme_fabrics
00:24:05.759   04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:05.759   04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e
00:24:05.759   04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0
00:24:05.759   04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:24:05.759   04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:24:05.759   04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:24:05.759   04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:24:05.759   04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr
00:24:05.759   04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save
00:24:05.759   04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:24:05.759   04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore
00:24:05.759   04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:24:05.759   04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:24:05.759   04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:05.759   04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:05.759    04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:08.324   04:14:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:08.324   04:14:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target
00:24:08.324   04:14:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:24:08.324   04:14:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0
00:24:08.324   04:14:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:24:08.324   04:14:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:24:08.324   04:14:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:24:08.324   04:14:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:24:08.324   04:14:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*)
00:24:08.324   04:14:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet
00:24:08.324   04:14:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:24:09.261  0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:24:09.261  0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:24:09.261  0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:24:09.261  0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:24:09.261  0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:24:09.261  0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:24:09.261  0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:24:09.261  0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:24:09.261  0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:24:09.261  0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:24:09.261  0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:24:09.261  0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:24:09.261  0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:24:09.261  0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:24:09.261  0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:24:09.261  0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:24:10.197  0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:24:10.197  
00:24:10.197  real	0m9.991s
00:24:10.197  user	0m2.214s
00:24:10.197  sys	0m3.788s
00:24:10.197   04:14:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:10.197   04:14:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:24:10.197  ************************************
00:24:10.197  END TEST nvmf_identify_kernel_target
00:24:10.197  ************************************
00:24:10.456   04:14:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:24:10.456   04:14:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:24:10.456   04:14:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:10.456   04:14:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:24:10.456  ************************************
00:24:10.456  START TEST nvmf_auth_host
00:24:10.456  ************************************
00:24:10.456   04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:24:10.456  * Looking for test storage...
00:24:10.456  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:24:10.456     04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version
00:24:10.456     04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-:
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-:
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<'
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 ))
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:24:10.456     04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1
00:24:10.456     04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1
00:24:10.456     04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:24:10.456     04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1
00:24:10.456     04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2
00:24:10.456     04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2
00:24:10.456     04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:24:10.456     04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:24:10.456  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:10.456  		--rc genhtml_branch_coverage=1
00:24:10.456  		--rc genhtml_function_coverage=1
00:24:10.456  		--rc genhtml_legend=1
00:24:10.456  		--rc geninfo_all_blocks=1
00:24:10.456  		--rc geninfo_unexecuted_blocks=1
00:24:10.456  		
00:24:10.456  		'
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:24:10.456  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:10.456  		--rc genhtml_branch_coverage=1
00:24:10.456  		--rc genhtml_function_coverage=1
00:24:10.456  		--rc genhtml_legend=1
00:24:10.456  		--rc geninfo_all_blocks=1
00:24:10.456  		--rc geninfo_unexecuted_blocks=1
00:24:10.456  		
00:24:10.456  		'
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:24:10.456  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:10.456  		--rc genhtml_branch_coverage=1
00:24:10.456  		--rc genhtml_function_coverage=1
00:24:10.456  		--rc genhtml_legend=1
00:24:10.456  		--rc geninfo_all_blocks=1
00:24:10.456  		--rc geninfo_unexecuted_blocks=1
00:24:10.456  		
00:24:10.456  		'
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:24:10.456  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:10.456  		--rc genhtml_branch_coverage=1
00:24:10.456  		--rc genhtml_function_coverage=1
00:24:10.456  		--rc genhtml_legend=1
00:24:10.456  		--rc geninfo_all_blocks=1
00:24:10.456  		--rc geninfo_unexecuted_blocks=1
00:24:10.456  		
00:24:10.456  		'
00:24:10.456   04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:24:10.456     04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:10.456     04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:24:10.456    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:24:10.456     04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob
00:24:10.456     04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:10.456     04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:10.456     04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:10.456      04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:10.456      04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:10.456      04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:10.456      04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH
00:24:10.457      04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:10.457    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0
00:24:10.457    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:24:10.457    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:24:10.457    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:24:10.457    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:24:10.457    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:24:10.457    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:24:10.457  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:24:10.457    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:24:10.457    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:24:10.457    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0
00:24:10.457   04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512")
00:24:10.457   04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
00:24:10.457   04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0
00:24:10.457   04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0
00:24:10.457   04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:24:10.457   04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:24:10.457   04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=()
00:24:10.457   04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=()
00:24:10.457   04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit
00:24:10.457   04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:24:10.457   04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:24:10.457   04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs
00:24:10.457   04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no
00:24:10.457   04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns
00:24:10.457   04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:10.457   04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:10.457    04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:10.457   04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:24:10.457   04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:24:10.457   04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable
00:24:10.457   04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=()
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=()
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=()
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=()
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=()
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=()
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=()
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:24:12.982  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:24:12.982  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]]
00:24:12.982   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:24:12.983  Found net devices under 0000:0a:00.0: cvl_0_0
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]]
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:24:12.983  Found net devices under 0000:0a:00.1: cvl_0_1
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:24:12.983  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:12.983  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms
00:24:12.983  
00:24:12.983  --- 10.0.0.2 ping statistics ---
00:24:12.983  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:12.983  rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:12.983  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:12.983  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms
00:24:12.983  
00:24:12.983  --- 10.0.0.1 ping statistics ---
00:24:12.983  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:12.983  rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=324797
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 324797
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 324797 ']'
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:12.983   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:13.243   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:13.243   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0
00:24:13.243   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:24:13.243   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable
00:24:13.243   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:13.243   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:13.243   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32
00:24:13.243     04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8f22a016cccb9161a1688189662d3f79
00:24:13.243     04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.uc2
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8f22a016cccb9161a1688189662d3f79 0
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8f22a016cccb9161a1688189662d3f79 0
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8f22a016cccb9161a1688189662d3f79
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.uc2
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.uc2
00:24:13.243   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.uc2
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64
00:24:13.243     04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bf936da89c25c64cd015a9adfb62c44d53a0d75ba3a60ced26496f779f7c895a
00:24:13.243     04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.GnL
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bf936da89c25c64cd015a9adfb62c44d53a0d75ba3a60ced26496f779f7c895a 3
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bf936da89c25c64cd015a9adfb62c44d53a0d75ba3a60ced26496f779f7c895a 3
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bf936da89c25c64cd015a9adfb62c44d53a0d75ba3a60ced26496f779f7c895a
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.GnL
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.GnL
00:24:13.243   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.GnL
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48
00:24:13.243     04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=90b27af51541d9c69354a040bba43c3fd6d885038a82df6d
00:24:13.243     04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.L5W
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 90b27af51541d9c69354a040bba43c3fd6d885038a82df6d 0
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 90b27af51541d9c69354a040bba43c3fd6d885038a82df6d 0
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=90b27af51541d9c69354a040bba43c3fd6d885038a82df6d
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.L5W
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.L5W
00:24:13.243   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.L5W
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48
00:24:13.243     04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6aeb2e9d46b99f85b0a389c54702c033b4ba26b167c5483b
00:24:13.243     04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.2SP
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6aeb2e9d46b99f85b0a389c54702c033b4ba26b167c5483b 2
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6aeb2e9d46b99f85b0a389c54702c033b4ba26b167c5483b 2
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6aeb2e9d46b99f85b0a389c54702c033b4ba26b167c5483b
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.2SP
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.2SP
00:24:13.243   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.2SP
00:24:13.243    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32
00:24:13.244    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:24:13.244    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:24:13.244    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:24:13.244    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256
00:24:13.244    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32
00:24:13.244     04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:24:13.244    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f4aefd29ff42b745e1ceacb62df50217
00:24:13.244     04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX
00:24:13.244    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.N33
00:24:13.244    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f4aefd29ff42b745e1ceacb62df50217 1
00:24:13.244    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f4aefd29ff42b745e1ceacb62df50217 1
00:24:13.244    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:24:13.244    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:24:13.244    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f4aefd29ff42b745e1ceacb62df50217
00:24:13.244    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1
00:24:13.244    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:24:13.244    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.N33
00:24:13.244    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.N33
00:24:13.244   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.N33
00:24:13.244    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32
00:24:13.244    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:24:13.244    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:24:13.244    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:24:13.244    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256
00:24:13.244    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32
00:24:13.244     04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:24:13.244    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=15d7aa29a9af906bcec19a03fe18280c
00:24:13.244     04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX
00:24:13.502    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Tzj
00:24:13.502    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 15d7aa29a9af906bcec19a03fe18280c 1
00:24:13.502    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 15d7aa29a9af906bcec19a03fe18280c 1
00:24:13.502    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:24:13.502    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:24:13.502    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=15d7aa29a9af906bcec19a03fe18280c
00:24:13.502    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1
00:24:13.502    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:24:13.502    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Tzj
00:24:13.502    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Tzj
00:24:13.502   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Tzj
00:24:13.502    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48
00:24:13.502    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:24:13.502    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:24:13.502    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:24:13.502    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384
00:24:13.502    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48
00:24:13.502     04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:24:13.502    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=51322eef90417accb2f147eb635fda4e1091a6dd1a1cd606
00:24:13.502     04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX
00:24:13.502    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.du9
00:24:13.502    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 51322eef90417accb2f147eb635fda4e1091a6dd1a1cd606 2
00:24:13.502    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 51322eef90417accb2f147eb635fda4e1091a6dd1a1cd606 2
00:24:13.502    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:24:13.502    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:24:13.502    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=51322eef90417accb2f147eb635fda4e1091a6dd1a1cd606
00:24:13.502    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2
00:24:13.502    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:24:13.502    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.du9
00:24:13.502    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.du9
00:24:13.502   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.du9
00:24:13.502    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32
00:24:13.502    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:24:13.502    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:24:13.502    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:24:13.502    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null
00:24:13.502    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32
00:24:13.502     04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:24:13.502    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8b9293af70a2e0cd76bd932af7305dec
00:24:13.502     04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX
00:24:13.502    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.fut
00:24:13.502    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8b9293af70a2e0cd76bd932af7305dec 0
00:24:13.502    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8b9293af70a2e0cd76bd932af7305dec 0
00:24:13.502    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:24:13.502    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:24:13.502    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8b9293af70a2e0cd76bd932af7305dec
00:24:13.503    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0
00:24:13.503    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:24:13.503    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.fut
00:24:13.503    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.fut
00:24:13.503   04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.fut
00:24:13.503    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64
00:24:13.503    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:24:13.503    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:24:13.503    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:24:13.503    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512
00:24:13.503    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64
00:24:13.503     04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom
00:24:13.503    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=18ecee300f590cff009ff7133cadebb3106cd80b6b51659a1f0c0bc501e50732
00:24:13.503     04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX
00:24:13.503    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.3Ii
00:24:13.503    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 18ecee300f590cff009ff7133cadebb3106cd80b6b51659a1f0c0bc501e50732 3
00:24:13.503    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 18ecee300f590cff009ff7133cadebb3106cd80b6b51659a1f0c0bc501e50732 3
00:24:13.503    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:24:13.503    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:24:13.503    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=18ecee300f590cff009ff7133cadebb3106cd80b6b51659a1f0c0bc501e50732
00:24:13.503    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3
00:24:13.503    04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:24:13.503    04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.3Ii
00:24:13.503    04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.3Ii
00:24:13.503   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.3Ii
00:24:13.503   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]=
00:24:13.503   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 324797
00:24:13.503   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 324797 ']'
00:24:13.503   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:13.503   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:13.503   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:13.503  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:13.503   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:13.503   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:14.069   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:14.069   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0
00:24:14.069   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:24:14.069   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.uc2
00:24:14.069   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:14.069   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:14.069   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:14.069   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.GnL ]]
00:24:14.069   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.GnL
00:24:14.069   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:14.069   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:14.069   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:14.069   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:24:14.069   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.L5W
00:24:14.069   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:14.069   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:14.069   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:14.069   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.2SP ]]
00:24:14.069   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.2SP
00:24:14.069   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:14.069   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:14.069   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:14.069   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:24:14.070   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.N33
00:24:14.070   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:14.070   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:14.070   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:14.070   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Tzj ]]
00:24:14.070   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Tzj
00:24:14.070   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:14.070   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:14.070   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:14.070   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:24:14.070   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.du9
00:24:14.070   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:14.070   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:14.070   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:14.070   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.fut ]]
00:24:14.070   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.fut
00:24:14.070   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:14.070   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:14.070   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:14.070   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:24:14.070   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.3Ii
00:24:14.070   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:14.070   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:14.070   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:14.070   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]]
00:24:14.070   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init
00:24:14.070    04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip
00:24:14.070    04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:14.070    04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:14.070    04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:14.070    04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:14.070    04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:14.070    04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:14.070    04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:14.070    04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:14.070    04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:14.070    04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:14.070   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1
00:24:14.070   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1
00:24:14.070   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet
00:24:14.070   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:24:14.070   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:24:14.070   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:24:14.070   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme
00:24:14.070   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]]
00:24:14.070   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet
00:24:14.070   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:24:14.070   04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:24:15.004  Waiting for block devices as requested
00:24:15.004  0000:88:00.0 (8086 0a54): vfio-pci -> nvme
00:24:15.004  0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:24:15.261  0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:24:15.261  0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:24:15.519  0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:24:15.519  0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:24:15.519  0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:24:15.519  0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:24:15.777  0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:24:15.777  0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:24:15.777  0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:24:15.777  0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:24:16.035  0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:24:16.035  0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:24:16.035  0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:24:16.035  0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:24:16.293  0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:24:16.551   04:14:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:24:16.551   04:14:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:24:16.551   04:14:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:24:16.551   04:14:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:24:16.551   04:14:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:24:16.551   04:14:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:24:16.551   04:14:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:24:16.551   04:14:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:24:16.551   04:14:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:24:16.551  No valid GPT data, bailing
00:24:16.551    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:24:16.551   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:24:16.551   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:24:16.551   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:24:16.551   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]]
00:24:16.551   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:24:16.551   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:24:16.551   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:24:16.551   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:24:16.551   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1
00:24:16.551   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:24:16.551   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1
00:24:16.551   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:24:16.551   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp
00:24:16.551   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420
00:24:16.551   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4
00:24:16.551   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:24:16.551   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420
00:24:16.808  
00:24:16.808  Discovery Log Number of Records 2, Generation counter 2
00:24:16.808  =====Discovery Log Entry 0======
00:24:16.808  trtype:  tcp
00:24:16.808  adrfam:  ipv4
00:24:16.808  subtype: current discovery subsystem
00:24:16.808  treq:    not specified, sq flow control disable supported
00:24:16.808  portid:  1
00:24:16.808  trsvcid: 4420
00:24:16.808  subnqn:  nqn.2014-08.org.nvmexpress.discovery
00:24:16.808  traddr:  10.0.0.1
00:24:16.808  eflags:  none
00:24:16.808  sectype: none
00:24:16.808  =====Discovery Log Entry 1======
00:24:16.808  trtype:  tcp
00:24:16.808  adrfam:  ipv4
00:24:16.808  subtype: nvme subsystem
00:24:16.808  treq:    not specified, sq flow control disable supported
00:24:16.808  portid:  1
00:24:16.808  trsvcid: 4420
00:24:16.808  subnqn:  nqn.2024-02.io.spdk:cnode0
00:24:16.808  traddr:  10.0.0.1
00:24:16.808  eflags:  none
00:24:16.808  sectype: none
00:24:16.808   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:24:16.808   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:24:16.808   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:24:16.808   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:24:16.808   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:16.808   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:16.808   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:16.808   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:16.808   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==:
00:24:16.808   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==:
00:24:16.808   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:16.808   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:16.808   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==:
00:24:16.808   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: ]]
00:24:16.808   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==:
00:24:16.808    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:24:16.808    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512
00:24:16.808    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:24:16.809    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:24:16.809   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1
00:24:16.809   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:16.809   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512
00:24:16.809   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:24:16.809   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:16.809   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:16.809   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:24:16.809   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:16.809   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:16.809   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:16.809    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:16.809    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:16.809    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:16.809    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:16.809    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:16.809    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:16.809    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:16.809    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:16.809    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:16.809    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:16.809    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:16.809   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:16.809   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:16.809   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:17.065  nvme0n1
00:24:17.065   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:17.065    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:17.065    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:17.065    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:17.065    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:17.065    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:17.065   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:17.065   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:17.065   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:17.065   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:17.065   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:17.065   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:24:17.065   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:17.065   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:17.065   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0
00:24:17.065   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:17.065   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:17.065   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:17.065   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:17.065   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15:
00:24:17.065   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=:
00:24:17.065   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:17.065   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:17.065   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15:
00:24:17.065   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: ]]
00:24:17.065   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=:
00:24:17.065   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0
00:24:17.065   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:17.065   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:17.065   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:24:17.065   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:17.065   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:17.065   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:24:17.065   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:17.065   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:17.065   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:17.065    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:17.065    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:17.065    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:17.065    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:17.065    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:17.065    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:17.065    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:17.065    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:17.065    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:17.065    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:17.065    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:17.065   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:17.065   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:17.065   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:17.065  nvme0n1
00:24:17.065   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:17.065    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:17.066    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:17.066    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:17.066    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:17.066    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:17.323   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:17.323   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:17.323   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:17.323   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:17.323   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:17.323   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:17.323   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:24:17.323   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:17.323   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:17.323   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:17.323   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:17.323   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==:
00:24:17.323   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==:
00:24:17.323   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:17.323   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:17.323   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==:
00:24:17.323   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: ]]
00:24:17.323   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==:
00:24:17.323   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1
00:24:17.323   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:17.323   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:17.323   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:24:17.323   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:17.323   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:17.323   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:24:17.323   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:17.323   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:17.323   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:17.323    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:17.323    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:17.323    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:17.323    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:17.323    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:17.323    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:17.323    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:17.323    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:17.323    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:17.323    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:17.323    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:17.323   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:17.323   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:17.323   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:17.323  nvme0n1
00:24:17.323   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:17.323    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:17.323    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:17.323    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:17.323    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:17.323    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:17.323   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:17.323   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:17.323   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:17.323   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:17.580   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:17.580   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:17.580   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:24:17.580   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:17.580   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:17.580   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:17.580   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:17.580   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF:
00:24:17.580   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo:
00:24:17.580   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:17.580   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:17.580   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF:
00:24:17.580   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: ]]
00:24:17.580   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo:
00:24:17.581   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2
00:24:17.581   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:17.581   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:17.581   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:24:17.581   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:17.581   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:17.581   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:24:17.581   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:17.581   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:17.581   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:17.581    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:17.581    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:17.581    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:17.581    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:17.581    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:17.581    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:17.581    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:17.581    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:17.581    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:17.581    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:17.581    04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:17.581   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:17.581   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:17.581   04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:17.581  nvme0n1
00:24:17.581   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:17.581    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:17.581    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:17.581    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:17.581    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:17.581    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:17.581   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:17.581   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:17.581   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:17.581   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:17.581   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:17.581   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:17.581   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3
00:24:17.581   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:17.581   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:17.581   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:17.581   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:17.581   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==:
00:24:17.581   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X:
00:24:17.581   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:17.581   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:17.581   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==:
00:24:17.581   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: ]]
00:24:17.581   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X:
00:24:17.581   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3
00:24:17.581   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:17.581   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:17.581   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:24:17.581   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:17.581   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:17.581   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:24:17.581   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:17.581   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:17.581   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:17.581    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:17.581    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:17.581    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:17.581    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:17.581    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:17.581    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:17.581    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:17.581    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:17.581    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:17.581    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:17.581    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:17.581   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:17.581   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:17.581   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:17.838  nvme0n1
00:24:17.838   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:17.838    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:17.838    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:17.838    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:17.838    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:17.838    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:17.838   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:17.838   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:17.838   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:17.838   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:17.838   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:17.838   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:17.838   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4
00:24:17.838   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:17.838   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:17.838   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:17.838   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:17.838   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=:
00:24:17.838   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:17.838   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:17.838   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:17.838   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=:
00:24:17.838   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:17.838   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4
00:24:17.838   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:17.838   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:17.838   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:24:17.838   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:24:17.838   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:17.838   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:24:17.838   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:17.838   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:17.838   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:17.838    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:17.838    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:17.839    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:17.839    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:17.839    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:17.839    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:17.839    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:17.839    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:17.839    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:17.839    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:17.839    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:17.839   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:17.839   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:17.839   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:18.096  nvme0n1
00:24:18.096   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:18.096    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:18.096    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:18.096    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:18.096    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:18.096    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:18.096   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:18.096   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:18.096   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:18.096   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:18.096   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:18.096   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:18.096   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:18.096   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0
00:24:18.096   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:18.096   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:18.096   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:18.096   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:18.096   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15:
00:24:18.096   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=:
00:24:18.096   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:18.096   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:18.353   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15:
00:24:18.353   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: ]]
00:24:18.353   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=:
00:24:18.353   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0
00:24:18.354   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:18.354   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:18.354   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:18.354   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:18.354   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:18.354   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:24:18.354   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:18.354   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:18.354   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:18.354    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:18.354    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:18.354    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:18.354    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:18.354    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:18.354    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:18.354    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:18.354    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:18.354    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:18.354    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:18.354    04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:18.354   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:18.354   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:18.354   04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:18.613  nvme0n1
00:24:18.613   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:18.613    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:18.613    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:18.613    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:18.613    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:18.613    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:18.613   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:18.613   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:18.613   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:18.613   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:18.613   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:18.613   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:18.613   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1
00:24:18.613   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:18.613   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:18.613   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:18.613   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:18.613   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==:
00:24:18.613   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==:
00:24:18.613   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:18.613   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:18.613   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==:
00:24:18.613   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: ]]
00:24:18.613   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==:
00:24:18.613   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1
00:24:18.613   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:18.613   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:18.613   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:18.613   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:18.613   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:18.613   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:24:18.613   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:18.613   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:18.613   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:18.613    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:18.613    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:18.613    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:18.613    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:18.613    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:18.613    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:18.613    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:18.613    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:18.613    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:18.613    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:18.613    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:18.613   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:18.613   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:18.613   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:18.872  nvme0n1
00:24:18.872   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:18.872    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:18.872    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:18.872    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:18.872    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:18.872    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:18.872   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:18.872   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:18.872   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:18.872   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:18.872   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:18.872   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:18.872   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2
00:24:18.872   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:18.872   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:18.872   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:18.872   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:18.872   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF:
00:24:18.872   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo:
00:24:18.872   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:18.872   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:18.872   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF:
00:24:18.872   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: ]]
00:24:18.872   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo:
00:24:18.872   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2
00:24:18.872   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:18.872   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:18.872   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:18.872   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:18.872   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:18.872   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:24:18.872   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:18.872   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:18.872   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:18.873    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:18.873    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:18.873    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:18.873    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:18.873    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:18.873    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:18.873    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:18.873    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:18.873    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:18.873    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:18.873    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:18.873   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:18.873   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:18.873   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:19.131  nvme0n1
00:24:19.131   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:19.131    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:19.131    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:19.131    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:19.131    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:19.131    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:19.131   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:19.131   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:19.131   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:19.131   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:19.131   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:19.131   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:19.131   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:24:19.131   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:19.131   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:19.131   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:19.131   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:19.131   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==:
00:24:19.131   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X:
00:24:19.131   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:19.131   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:19.131   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==:
00:24:19.131   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: ]]
00:24:19.131   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X:
00:24:19.131   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3
00:24:19.131   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:19.131   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:19.131   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:19.131   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:19.131   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:19.131   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:24:19.131   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:19.131   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:19.131   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:19.131    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:19.131    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:19.131    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:19.131    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:19.131    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:19.131    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:19.131    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:19.131    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:19.131    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:19.131    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:19.131    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:19.131   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:19.131   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:19.131   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:19.390  nvme0n1
00:24:19.390   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:19.390    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:19.390    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:19.390    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:19.390    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:19.390    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:19.390   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:19.390   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:19.390   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:19.390   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:19.390   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:19.390   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:19.390   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4
00:24:19.390   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:19.390   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:19.390   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:19.390   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:19.390   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=:
00:24:19.390   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:19.390   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:19.390   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:19.390   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=:
00:24:19.390   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:19.390   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4
00:24:19.390   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:19.390   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:19.390   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:19.390   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:24:19.390   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:19.390   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:24:19.390   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:19.390   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:19.390   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:19.390    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:19.390    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:19.390    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:19.390    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:19.390    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:19.390    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:19.390    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:19.390    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:19.390    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:19.390    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:19.390    04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:19.390   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:19.390   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:19.390   04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:19.648  nvme0n1
00:24:19.648   04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:19.648    04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:19.648    04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:19.648    04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:19.648    04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:19.648    04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:19.648   04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:19.648   04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:19.648   04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:19.648   04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:19.648   04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:19.648   04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:19.648   04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:19.648   04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:24:19.648   04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:19.648   04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:19.648   04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:19.648   04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:19.648   04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15:
00:24:19.648   04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=:
00:24:19.648   04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:19.648   04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:24:20.213   04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15:
00:24:20.213   04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: ]]
00:24:20.213   04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=:
00:24:20.213   04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0
00:24:20.213   04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:20.213   04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:20.213   04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:20.213   04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:20.213   04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:20.213   04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:24:20.213   04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:20.213   04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:20.213   04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:20.213    04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:20.213    04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:20.213    04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:20.213    04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:20.213    04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:20.213    04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:20.213    04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:20.213    04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:20.213    04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:20.213    04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:20.213    04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:20.213   04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:20.213   04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:20.213   04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:20.471  nvme0n1
00:24:20.471   04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:20.471    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:20.471    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:20.471    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:20.471    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:20.471    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:20.471   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:20.471   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:20.471   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:20.471   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:20.728   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:20.728   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:20.728   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1
00:24:20.728   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:20.728   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:20.728   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:20.728   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:20.728   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==:
00:24:20.728   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==:
00:24:20.728   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:20.728   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:24:20.728   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==:
00:24:20.728   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: ]]
00:24:20.728   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==:
00:24:20.728   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1
00:24:20.728   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:20.728   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:20.728   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:20.728   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:20.728   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:20.728   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:24:20.728   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:20.728   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:20.728   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:20.728    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:20.728    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:20.728    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:20.728    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:20.728    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:20.728    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:20.728    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:20.728    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:20.729    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:20.729    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:20.729    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:20.729   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:20.729   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:20.729   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:20.986  nvme0n1
00:24:20.986   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:20.986    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:20.986    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:20.986    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:20.986    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:20.986    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:20.986   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:20.986   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:20.986   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:20.986   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:20.986   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:20.986   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:20.987   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:24:20.987   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:20.987   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:20.987   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:20.987   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:20.987   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF:
00:24:20.987   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo:
00:24:20.987   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:20.987   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:24:20.987   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF:
00:24:20.987   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: ]]
00:24:20.987   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo:
00:24:20.987   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2
00:24:20.987   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:20.987   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:20.987   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:20.987   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:20.987   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:20.987   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:24:20.987   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:20.987   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:20.987   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:20.987    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:20.987    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:20.987    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:20.987    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:20.987    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:20.987    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:20.987    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:20.987    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:20.987    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:20.987    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:20.987    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:20.987   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:20.987   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:20.987   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.245  nvme0n1
00:24:21.245   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.245    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:21.245    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.245    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.245    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:21.245    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.245   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:21.245   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:21.245   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.245   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.245   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.245   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:21.245   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3
00:24:21.245   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:21.245   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:21.245   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:21.245   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:21.245   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==:
00:24:21.245   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X:
00:24:21.245   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:21.245   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:24:21.245   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==:
00:24:21.245   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: ]]
00:24:21.245   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X:
00:24:21.245   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3
00:24:21.245   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:21.245   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:21.245   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:21.245   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:21.245   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:21.245   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:24:21.245   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.245   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.245   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.245    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:21.245    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:21.245    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:21.245    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:21.245    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:21.245    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:21.245    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:21.245    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:21.245    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:21.245    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:21.245    04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:21.245   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:21.245   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.245   04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.503  nvme0n1
00:24:21.503   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.503    04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:21.503    04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.503    04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.503    04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:21.503    04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.503   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:21.503   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:21.503   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.503   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.503   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.503   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:21.503   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4
00:24:21.503   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:21.503   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:21.503   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:21.503   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:21.503   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=:
00:24:21.503   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:21.503   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:21.503   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:24:21.503   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=:
00:24:21.503   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:21.503   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4
00:24:21.503   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:21.503   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:21.503   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:21.503   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:24:21.503   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:21.503   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:24:21.503   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.503   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.503   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.503    04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:21.503    04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:21.503    04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:21.503    04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:21.503    04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:21.503    04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:21.503    04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:21.760    04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:21.760    04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:21.760    04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:21.760    04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
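The `get_main_ns_ip` trace above (nvmf/common.sh@769-783) picks the connection address by mapping the transport name to the *name* of an environment variable, then dereferencing it indirectly. A minimal stand-alone sketch of that selection logic, with placeholder IP values assumed for illustration (the real values come from the test environment):

```shell
#!/usr/bin/env bash
# Placeholder addresses -- assumptions for this sketch, not values from the log
# beyond the 10.0.0.1 echoed above.
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2

get_main_ns_ip() {
    local transport=$1 ip
    # Map each transport to the NAME of the variable holding its IP,
    # as in nvmf/common.sh@772-773.
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP
        [tcp]=NVMF_INITIATOR_IP
    )
    ip=${ip_candidates[$transport]}
    # Indirect expansion: resolve the variable whose name is stored in $ip.
    echo "${!ip}"
}

get_main_ns_ip tcp    # -> 10.0.0.1
```

The indirection (`${!ip}`) is why the trace prints `ip=NVMF_INITIATOR_IP` first and only then echoes `10.0.0.1`: the candidate table stores variable names, not addresses.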
00:24:21.760   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:21.760   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.760   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:22.017  nvme0n1
00:24:22.017   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:22.017    04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:22.017    04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:22.017    04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:22.017    04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:22.017    04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:22.017   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:22.017   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:22.017   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:22.017   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:22.017   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
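The cycle that just completed exercises keyid 4, where `ckey=''` and the trace shows `[[ -z '' ]]` followed by an attach with no `--dhchap-ctrlr-key` at all. That behavior comes from the `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` expansion at host/auth.sh@58: the `:+` form yields the option pair only when a controller key exists, otherwise an empty array. A self-contained sketch of just that idiom, with dummy key strings assumed in place of the real DHHC-1 secrets:

```shell
#!/usr/bin/env bash
# Dummy controller-key table: keyid 2 deliberately has no key, mirroring
# the ckey='' case for keyid 4 in the log. Key strings are placeholders.
ckeys=("DHHC-1:03:ctrl-key-0" "DHHC-1:02:ctrl-key-1" "")

build_ckey_args() {
    local keyid=$1
    # ${ckeys[keyid]:+word} expands to 'word' only if ckeys[keyid] is set
    # and non-empty; unquoted inside the array literal, it contributes
    # either two elements or none.
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${ckey[@]}"
}

build_ckey_args 0   # -> --dhchap-ctrlr-key ckey0
build_ckey_args 2   # -> (nothing: option omitted entirely)
```

Because the array is empty rather than containing an empty string, `rpc_cmd bdev_nvme_attach_controller … "${ckey[@]}"` sees no extra arguments at all for keyid 4, which is why unidirectional authentication is requested there.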
00:24:22.017   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:22.017   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:22.017   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0
00:24:22.017   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:22.017   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:22.017   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:22.017   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:22.017   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15:
00:24:22.017   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=:
00:24:22.017   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:22.017   04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:23.913   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15:
00:24:23.913   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: ]]
00:24:23.913   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=:
00:24:23.913   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0
00:24:23.913   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:23.913   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:23.913   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:23.913   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:23.913   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:23.913   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:24:23.913   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:23.913   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:23.913   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:23.913    04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:23.913    04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:23.913    04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:23.913    04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:23.913    04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:23.913    04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:23.913    04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:23.913    04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:23.913    04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:23.913    04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:23.913    04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:23.913   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:23.913   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:23.913   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:24.170  nvme0n1
00:24:24.170   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:24.170    04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:24.170    04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:24.170    04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:24.170    04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:24.170    04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:24.428   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:24.428   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:24.428   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:24.428   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:24.428   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:24.428   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:24.428   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:24:24.428   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:24.428   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:24.428   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:24.428   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:24.428   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==:
00:24:24.428   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==:
00:24:24.428   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:24.428   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:24.428   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==:
00:24:24.428   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: ]]
00:24:24.428   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==:
00:24:24.428   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1
00:24:24.428   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:24.428   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:24.428   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:24.428   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:24.428   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:24.428   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:24:24.428   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:24.428   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:24.428   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:24.428    04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:24.428    04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:24.428    04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:24.428    04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:24.428    04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:24.428    04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:24.428    04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:24.428    04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:24.428    04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:24.428    04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:24.428    04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:24.428   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:24.428   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:24.428   04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:24.992  nvme0n1
00:24:24.992   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:24.992    04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:24.992    04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:24.992    04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:24.992    04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:24.992    04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:24.992   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:24.992   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:24.992   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:24.992   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:24.992   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:24.992   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:24.992   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:24:24.992   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:24.992   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:24.992   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:24.992   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:24.992   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF:
00:24:24.992   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo:
00:24:24.992   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:24.992   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:24.992   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF:
00:24:24.992   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: ]]
00:24:24.992   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo:
00:24:24.992   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2
00:24:24.992   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:24.992   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:24.992   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:24.992   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:24.992   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:24.992   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:24:24.992   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:24.992   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:24.992   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:24.992    04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:24.992    04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:24.992    04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:24.992    04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:24.992    04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:24.992    04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:24.992    04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:24.992    04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:24.992    04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:24.992    04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:24.992    04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:24.992   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:24.992   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:24.992   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:25.250  nvme0n1
00:24:25.250   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:25.250    04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:25.250    04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:25.250    04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:25.250    04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:25.508    04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:25.508   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:25.508   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:25.508   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:25.508   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:25.508   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:25.508   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:25.508   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:24:25.508   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:25.508   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:25.508   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:25.508   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:25.508   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==:
00:24:25.508   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X:
00:24:25.508   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:25.508   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:25.508   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==:
00:24:25.508   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: ]]
00:24:25.508   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X:
00:24:25.508   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3
00:24:25.508   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:25.508   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:25.508   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:25.508   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:25.508   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:25.508   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:24:25.508   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:25.508   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:25.508   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:25.508    04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:25.508    04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:25.508    04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:25.508    04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:25.508    04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:25.508    04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:25.508    04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:25.508    04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:25.508    04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:25.508    04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:25.508    04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:25.508   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:25.508   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:25.508   04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:26.074  nvme0n1
00:24:26.074   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:26.074    04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:26.074    04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:26.074    04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:26.074    04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:26.074    04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:26.074   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:26.074   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:26.074   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:26.074   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:26.074   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:26.074   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:26.074   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4
00:24:26.074   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:26.074   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:26.074   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:26.074   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:26.074   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=:
00:24:26.074   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:26.074   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:26.074   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:26.074   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=:
00:24:26.074   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:26.074   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4
00:24:26.074   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:26.074   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:26.074   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:26.074   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:24:26.074   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:26.074   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:24:26.074   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:26.074   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:26.074   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:26.074    04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:26.074    04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:26.074    04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:26.074    04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:26.074    04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:26.074    04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:26.074    04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:26.074    04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:26.074    04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:26.074    04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:26.074    04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:26.074   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:26.074   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:26.074   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:26.641  nvme0n1
00:24:26.641   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:26.641    04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:26.641    04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:26.641    04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:26.641    04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:26.641    04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:26.641   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:26.641   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:26.641   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:26.641   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:26.641   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:26.641   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:26.641   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:26.641   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:24:26.641   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:26.641   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:26.641   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:26.641   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:26.641   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15:
00:24:26.641   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=:
00:24:26.641   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:26.641   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:26.641   04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15:
00:24:26.641   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: ]]
00:24:26.641   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=:
00:24:26.641   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:24:26.641   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:26.641   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:26.641   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:26.641   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:26.641   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:26.641   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:24:26.641   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:26.641   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:26.641   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:26.641    04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:26.641    04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:26.641    04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:26.641    04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:26.641    04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:26.641    04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:26.641    04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:26.641    04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:26.641    04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:26.641    04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:26.641    04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:26.641   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:26.641   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:26.641   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:27.572  nvme0n1
00:24:27.572   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.572    04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:27.572    04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.572    04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:27.572    04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:27.572    04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.572   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:27.572   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:27.573   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.573   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:27.573   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.573   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:27.573   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:24:27.573   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:27.573   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:27.573   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:27.573   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:27.573   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==:
00:24:27.573   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==:
00:24:27.573   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:27.573   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:27.573   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==:
00:24:27.573   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: ]]
00:24:27.573   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==:
00:24:27.573   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:24:27.573   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:27.573   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:27.573   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:27.573   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:27.573   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:27.573   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:24:27.573   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.573   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:27.573   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.573    04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:27.573    04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:27.573    04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:27.573    04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:27.573    04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:27.573    04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:27.573    04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:27.573    04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:27.573    04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:27.573    04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:27.573    04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:27.573   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:27.573   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.573   04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:28.505  nvme0n1
00:24:28.505   04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:28.505    04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:28.505    04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:28.505    04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:28.505    04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:28.505    04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:28.505   04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:28.505   04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:28.505   04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:28.505   04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:28.505   04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:28.505   04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:28.505   04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2
00:24:28.505   04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:28.505   04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:28.505   04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:28.505   04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:28.505   04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF:
00:24:28.505   04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo:
00:24:28.505   04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:28.505   04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:28.505   04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF:
00:24:28.505   04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: ]]
00:24:28.505   04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo:
00:24:28.505   04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2
00:24:28.505   04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:28.505   04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:28.505   04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:28.505   04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:28.505   04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:28.505   04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:24:28.505   04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:28.505   04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:28.505   04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:28.505    04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:28.505    04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:28.505    04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:28.505    04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:28.505    04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:28.505    04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:28.505    04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:28.505    04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:28.505    04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:28.505    04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:28.505    04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:28.505   04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:28.505   04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:28.505   04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:29.070  nvme0n1
00:24:29.070   04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:29.070    04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:29.070    04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:29.070    04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:29.070    04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:29.328    04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:29.328   04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:29.328   04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:29.328   04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:29.328   04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:29.328   04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:29.328   04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:29.328   04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3
00:24:29.328   04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:29.328   04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:29.328   04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:29.328   04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:29.328   04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==:
00:24:29.328   04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X:
00:24:29.328   04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:29.328   04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:29.328   04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==:
00:24:29.328   04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: ]]
00:24:29.328   04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X:
00:24:29.328   04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3
00:24:29.328   04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:29.328   04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:29.328   04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:29.329   04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:29.329   04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:29.329   04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:24:29.329   04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:29.329   04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:29.329   04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:29.329    04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:29.329    04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:29.329    04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:29.329    04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:29.329    04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:29.329    04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:29.329    04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:29.329    04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:29.329    04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:29.329    04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:29.329    04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:29.329   04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:29.329   04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:29.329   04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:30.260  nvme0n1
00:24:30.260   04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:30.260    04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:30.260    04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:30.260    04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:30.260    04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:30.260    04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:30.260   04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:30.260   04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:30.260   04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:30.260   04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:30.260   04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:30.260   04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:30.260   04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4
00:24:30.260   04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:30.260   04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:30.260   04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:30.260   04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:30.260   04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=:
00:24:30.260   04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:30.260   04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:30.260   04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:30.260   04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=:
00:24:30.260   04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:30.260   04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4
00:24:30.260   04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:30.260   04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:30.260   04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:30.260   04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:24:30.260   04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:30.260   04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:24:30.260   04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:30.260   04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:30.260   04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:30.260    04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:30.260    04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:30.260    04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:30.260    04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:30.260    04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:30.260    04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:30.260    04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:30.260    04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:30.260    04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:30.260    04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:30.260    04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:30.260   04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:30.260   04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:30.260   04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:31.191  nvme0n1
00:24:31.191   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:31.191    04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:31.191    04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:31.191    04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:31.191    04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:31.191    04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:31.191   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:31.191   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:31.192   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:31.192   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:31.192   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
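[Editor's note] The `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` line traced above (host/auth.sh@58) is why the key4 attach a few lines earlier carries no `--dhchap-ctrlr-key` argument while the key0 attach below does: the `:+` expansion yields the extra flag only when a controller key exists for that key id. A minimal standalone sketch of just that expansion, with illustrative placeholder key values:

```shell
#!/usr/bin/env bash
# Sketch of the conditional ckey expansion from host/auth.sh@58.
# Values are illustrative placeholders, not the real DH-HMAC-CHAP keys.
ckeys=([0]="DHHC-1:03:placeholder" [4]="")   # key id 4 has no controller key

build_ckey_args() {
    local keyid=$1
    # :+ expands to the alternative only when ckeys[keyid] is set and non-empty,
    # so the array is either (--dhchap-ctrlr-key ckeyN) or entirely empty.
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${ckey[@]}"
}

build_ckey_args 0   # prints: --dhchap-ctrlr-key ckey0
build_ckey_args 4   # prints an empty line: no controller key for key id 4
```

An empty array expands to zero arguments, so the same `rpc_cmd bdev_nvme_attach_controller ... "${ckey[@]}"` call works for both the mutual-auth and one-way cases.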
00:24:31.192   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:24:31.192   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:31.192   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:31.192   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0
00:24:31.192   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:31.192   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:31.192   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:31.192   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:31.192   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15:
00:24:31.192   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=:
00:24:31.192   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:31.192   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:31.192   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15:
00:24:31.192   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: ]]
00:24:31.192   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=:
00:24:31.192   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0
00:24:31.192   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:31.192   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:31.192   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:24:31.192   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:31.192   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:31.192   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:24:31.192   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:31.192   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:31.192   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:31.192    04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:31.192    04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:31.192    04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:31.192    04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:31.192    04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:31.192    04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:31.192    04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:31.192    04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:31.192    04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:31.192    04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:31.192    04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:31.192   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:31.192   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:31.192   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:31.192  nvme0n1
00:24:31.192   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:31.192    04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:31.192    04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:31.192    04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:31.192    04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:31.192    04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:31.450   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:31.450   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:31.450   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:31.450   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:31.450   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
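[Editor's note] The `get_main_ns_ip` block traced repeatedly above (nvmf/common.sh@769-783) picks the connect address by mapping the transport to the *name* of the variable holding its IP, then resolving it with indirect expansion. A minimal sketch under assumed values (10.0.0.1/10.0.0.2 and `TEST_TRANSPORT=tcp` mirror this run; they are not guaranteed elsewhere):

```shell
#!/usr/bin/env bash
# Sketch of the transport -> address lookup from nvmf/common.sh get_main_ns_ip.
NVMF_INITIATOR_IP=10.0.0.1      # address used for tcp in this run
NVMF_FIRST_TARGET_IP=10.0.0.2   # illustrative rdma address
TEST_TRANSPORT=tcp

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    # values are VARIABLE NAMES, not addresses
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT ]] && return 1
    local var=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z $var ]] && return 1
    ip=${!var}                  # indirect expansion: "NVMF_INITIATOR_IP" -> 10.0.0.1
    [[ -z $ip ]] && return 1
    echo "$ip"
}

get_main_ns_ip   # prints 10.0.0.1 for tcp
```

This is why the trace shows `ip=NVMF_INITIATOR_IP` (the name) at @776 but `echo 10.0.0.1` (the resolved value) at @783.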
00:24:31.450   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:31.450   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1
00:24:31.450   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:31.450   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:31.450   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:31.450   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:31.450   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==:
00:24:31.450   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==:
00:24:31.450   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:31.450   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:31.450   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==:
00:24:31.450   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: ]]
00:24:31.450   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==:
00:24:31.450   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1
00:24:31.450   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:31.450   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:31.450   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:24:31.450   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:31.450   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:31.450   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:24:31.450   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:31.450   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:31.450   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:31.450    04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:31.450    04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:31.450    04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:31.450    04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:31.450    04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:31.451    04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:31.451    04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:31.451    04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:31.451    04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:31.451    04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:31.451    04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:31.451   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:31.451   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:31.451   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:31.451  nvme0n1
00:24:31.451   04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:31.451    04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:31.451    04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:31.451    04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:31.451    04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:31.451    04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:31.451   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:31.451   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:31.451   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:31.451   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:31.451   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:31.451   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:31.451   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2
00:24:31.451   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:31.451   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:31.451   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:31.451   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:31.451   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF:
00:24:31.451   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo:
00:24:31.451   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:31.451   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:31.451   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF:
00:24:31.451   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: ]]
00:24:31.451   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo:
00:24:31.451   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2
00:24:31.451   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:31.451   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:31.451   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:24:31.451   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:31.451   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:31.451   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:24:31.451   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:31.451   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:31.709   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:31.709    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:31.709    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:31.709    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:31.709    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:31.709    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:31.709    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:31.709    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:31.709    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:31.709    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:31.709    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:31.709    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:31.709   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:31.709   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:31.709   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:31.709  nvme0n1
00:24:31.709   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:31.709    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:31.709    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:31.709    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:31.709    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:31.709    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:31.709   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:31.709   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:31.709   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:31.709   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:31.709   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:31.709   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:31.709   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3
00:24:31.709   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:31.709   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:31.709   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:31.709   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:31.709   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==:
00:24:31.709   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X:
00:24:31.709   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:31.709   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:31.709   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==:
00:24:31.709   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: ]]
00:24:31.709   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X:
00:24:31.709   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3
00:24:31.709   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:31.709   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:31.709   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:24:31.709   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:31.709   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:31.709   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:24:31.709   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:31.709   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:31.709   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:31.709    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:31.709    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:31.709    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:31.709    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:31.709    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:31.709    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:31.709    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:31.709    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:31.709    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:31.709    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:31.709    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:31.709   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:31.709   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:31.709   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:31.968  nvme0n1
00:24:31.968   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:31.968    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:31.968    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:31.968    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:31.968    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:31.968    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:31.968   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:31.968   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:31.968   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:31.968   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:31.968   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:31.968   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:31.968   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4
00:24:31.968   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:31.968   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:31.968   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:31.968   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:31.968   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=:
00:24:31.968   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:31.968   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:31.968   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:31.968   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=:
00:24:31.968   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:31.968   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4
00:24:31.968   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:31.968   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:31.968   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:24:31.968   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:24:31.968   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:31.968   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:24:31.968   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:31.968   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:31.968   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:31.968    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:31.968    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:31.968    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:31.968    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:31.968    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:31.968    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:31.968    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:31.968    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:31.968    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:31.968    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:31.968    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:31.968   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:31.968   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:31.968   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:32.226  nvme0n1
00:24:32.226   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:32.226    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:32.226    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:32.226    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:32.226    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:32.226    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:32.226   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:32.226   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:32.226   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:32.226   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:32.226   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
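[Editor's note] The `for digest` / `for dhgroup` / `for keyid` markers at host/auth.sh@100-102 show the overall shape of this section: one set-key / attach / verify-name / detach cycle per (digest, dhgroup, keyid) combination, with the trace now moving from ffdhe2048 to ffdhe3072 for sha384. A sketch of that matrix, with lists restricted to what this log chunk actually shows:

```shell
#!/usr/bin/env bash
# Sketch of the test matrix from host/auth.sh@100-103; lists below cover only
# the combinations visible in this portion of the log.
digests=(sha384)
dhgroups=(ffdhe2048 ffdhe3072)
keys=(key0 key1 key2 key3 key4)   # key ids 0-4

cycles=0
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # real test performs here: nvmet_auth_set_key, then
            # bdev_nvme_attach_controller, name check, detach
            cycles=$((cycles + 1))
        done
    done
done
echo "$cycles"   # 1 digest x 2 dhgroups x 5 key ids = 10 cycles
```

Each cycle leaves the target with no attached controller, which is why every iteration can reuse the `nvme0` bdev name.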
00:24:32.226   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:32.226   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:32.226   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0
00:24:32.226   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:32.226   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:32.226   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:32.226   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:32.226   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15:
00:24:32.226   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=:
00:24:32.226   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:32.226   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:32.226   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15:
00:24:32.226   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: ]]
00:24:32.226   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=:
00:24:32.226   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0
00:24:32.226   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:32.226   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:32.226   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:32.226   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:32.226   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:32.226   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:24:32.226   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:32.226   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:32.226   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:32.226    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:32.226    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:32.226    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:32.226    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:32.226    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:32.226    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:32.226    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:32.226    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:32.226    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:32.226    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:32.226    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:32.226   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:32.226   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:32.226   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:32.484  nvme0n1
00:24:32.484   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:32.484    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:32.484    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:32.484    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:32.484    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:32.484    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:32.484   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:32.484   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:32.484   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:32.484   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:32.484   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:32.484   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:32.484   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1
00:24:32.484   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:32.484   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:32.484   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:32.484   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:32.484   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==:
00:24:32.484   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==:
00:24:32.484   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:32.484   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:32.484   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==:
00:24:32.484   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: ]]
00:24:32.484   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==:
00:24:32.484   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1
00:24:32.484   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:32.484   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:32.484   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:32.484   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:32.484   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:32.484   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:24:32.484   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:32.484   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:32.484   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:32.484    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:32.484    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:32.484    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:32.484    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:32.484    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:32.484    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:32.484    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:32.484    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:32.484    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:32.484    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:32.484    04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:32.484   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:32.484   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:32.484   04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:32.742  nvme0n1
00:24:32.742   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:32.742    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:32.742    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:32.742    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:32.742    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:32.742    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:32.742   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:32.742   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:32.742   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:32.742   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:32.742   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:32.742   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:32.742   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2
00:24:32.742   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:32.742   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:32.742   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:32.742   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:32.742   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF:
00:24:32.742   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo:
00:24:32.742   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:32.742   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:32.742   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF:
00:24:32.742   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: ]]
00:24:32.742   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo:
00:24:32.742   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2
00:24:32.742   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:32.742   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:32.742   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:32.742   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:32.742   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:32.742   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:24:32.742   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:32.742   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:32.742   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:32.742    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:32.742    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:32.742    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:32.742    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:32.742    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:32.742    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:32.742    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:32.742    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:32.742    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:32.742    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:32.742    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:32.742   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:32.742   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:32.742   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:33.000  nvme0n1
00:24:33.000   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:33.000    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:33.000    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:33.000    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:33.000    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:33.000    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:33.000   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:33.000   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:33.000   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:33.000   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:33.000   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:33.000   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:33.000   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3
00:24:33.000   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:33.000   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:33.001   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:33.001   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:33.001   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==:
00:24:33.001   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X:
00:24:33.001   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:33.001   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:33.001   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==:
00:24:33.001   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: ]]
00:24:33.001   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X:
00:24:33.001   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3
00:24:33.001   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:33.001   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:33.001   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:33.001   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:33.001   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:33.001   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:24:33.001   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:33.001   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:33.001   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:33.001    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:33.001    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:33.001    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:33.001    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:33.001    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:33.001    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:33.001    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:33.001    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:33.001    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:33.001    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:33.001    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:33.001   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:33.001   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:33.001   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:33.258  nvme0n1
00:24:33.258   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:33.258    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:33.258    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:33.258    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:33.258    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:33.259    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:33.259   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:33.259   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:33.259   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:33.259   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:33.259   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:33.259   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:33.259   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4
00:24:33.259   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:33.259   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:33.259   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:33.259   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:33.259   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=:
00:24:33.259   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:33.259   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:33.259   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:33.259   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=:
00:24:33.259   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:33.259   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4
00:24:33.259   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:33.259   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:33.259   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:33.259   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:24:33.259   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:33.259   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:24:33.259   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:33.259   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:33.259   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:33.259    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:33.259    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:33.259    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:33.259    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:33.259    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:33.259    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:33.259    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:33.259    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:33.259    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:33.259    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:33.259    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:33.259   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:33.259   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:33.259   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:33.516  nvme0n1
00:24:33.516   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:33.516    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:33.516    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:33.516    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:33.516    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:33.516    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:33.516   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:33.516   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:33.516   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:33.516   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:33.516   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:33.516   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:33.516   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:33.516   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0
00:24:33.516   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:33.516   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:33.516   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:33.516   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:33.516   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15:
00:24:33.516   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=:
00:24:33.516   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:33.516   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:24:33.516   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15:
00:24:33.516   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: ]]
00:24:33.516   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=:
00:24:33.516   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0
00:24:33.516   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:33.516   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:33.516   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:33.516   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:33.516   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:33.516   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:24:33.516   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:33.516   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:33.516   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:33.516    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:33.516    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:33.517    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:33.517    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:33.517    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:33.517    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:33.517    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:33.517    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:33.517    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:33.517    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:33.517    04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:33.517   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:33.517   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:33.517   04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:33.773  nvme0n1
00:24:33.773   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:33.773    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:33.773    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:33.773    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:33.773    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:33.773    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:33.773   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:33.773   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:33.773   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:33.773   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:33.773   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:33.773   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:33.773   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1
00:24:33.773   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:33.773   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:33.773   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:33.773   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:33.773   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==:
00:24:33.773   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==:
00:24:33.773   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:33.773   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:24:33.773   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==:
00:24:33.773   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: ]]
00:24:33.773   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==:
00:24:33.773   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1
00:24:33.773   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:33.773   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:33.773   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:33.773   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:33.773   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:33.773   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:24:33.773   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:33.773   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:33.773   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:33.773    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:33.773    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:33.773    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:33.773    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:33.773    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:33.773    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:33.773    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:33.773    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:33.773    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:33.773    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:33.773    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:33.773   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:33.773   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:33.773   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:34.029  nvme0n1
00:24:34.029   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:34.029    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:34.029    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:34.029    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:34.029    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:34.029    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:34.286   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:34.286   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:34.286   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:34.286   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:34.286   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
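The `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` line traced at host/auth.sh@58 in each cycle above uses bash's `:+` alternate-value expansion to add the `--dhchap-ctrlr-key` flag only when a controller key exists for that keyid (keyid 4 has an empty ckey, so the flag is omitted, as seen in the key4 attach). A minimal standalone sketch of that idiom, with stand-in key values not taken from this run:

```shell
#!/usr/bin/env bash
# Sketch of the conditional-argument construction from host/auth.sh@58.
# ${ckeys[keyid]:+word} expands to "word" only if ckeys[keyid] is set and
# non-empty; left unquoted, it word-splits into two array elements.
ckeys=([1]="DHHC-1:02:stand-in==" [4]="")   # stand-in values, not real keys

for keyid in 1 4; do
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "keyid=$keyid extra args: ${ckey[*]:-<none>}"
done
```

For keyid 1 the array holds `--dhchap-ctrlr-key ckey1`; for keyid 4 it stays empty, so the later `rpc_cmd bdev_nvme_attach_controller ... "${ckey[@]}"` call silently drops the flag.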
00:24:34.286   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:34.286   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2
00:24:34.286   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:34.286   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:34.286   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:34.286   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:34.286   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF:
00:24:34.286   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo:
00:24:34.286   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:34.286   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:24:34.286   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF:
00:24:34.286   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: ]]
00:24:34.286   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo:
00:24:34.286   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2
00:24:34.286   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:34.286   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:34.286   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:34.286   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:34.286   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:34.286   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:24:34.286   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:34.286   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:34.286   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:34.286    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:34.286    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:34.287    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:34.287    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:34.287    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:34.287    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:34.287    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:34.287    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:34.287    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:34.287    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:34.287    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:34.287   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:34.287   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:34.287   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:34.544  nvme0n1
00:24:34.544   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:34.544    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:34.544    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:34.544    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:34.544    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:34.544    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:34.544   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:34.544   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:34.544   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:34.544   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:34.544   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
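The `get_main_ns_ip` sub-trace repeated above (nvmf/common.sh@769-783) maps the test transport to the name of an IP variable, then dereferences it; for `tcp` that resolves to `NVMF_INITIATOR_IP`, which is why every attach targets 10.0.0.1. A minimal sketch of that selection logic, with stand-in addresses and an assumed `TEST_TRANSPORT` variable standing in for the suite's configuration:

```shell
#!/usr/bin/env bash
# Sketch of get_main_ns_ip from nvmf/common.sh: pick the variable *name*
# by transport, then use indirect expansion ${!ip} to read its value.
NVMF_FIRST_TARGET_IP=10.0.0.2   # stand-in address (rdma case)
NVMF_INITIATOR_IP=10.0.0.1      # stand-in address (tcp case)
TEST_TRANSPORT=tcp              # assumed config variable

get_main_ns_ip() {
  local ip
  local -A ip_candidates
  ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
  ip_candidates["tcp"]=NVMF_INITIATOR_IP
  ip=${ip_candidates[$TEST_TRANSPORT]}
  echo "${!ip}"   # indirect expansion: value of the variable named by $ip
}

get_main_ns_ip
```

The indirection keeps one lookup table serving both transports instead of branching on `$TEST_TRANSPORT` at every call site.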
00:24:34.544   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:34.544   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3
00:24:34.544   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:34.544   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:34.544   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:34.544   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:34.544   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==:
00:24:34.544   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X:
00:24:34.544   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:34.544   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:24:34.544   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==:
00:24:34.544   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: ]]
00:24:34.544   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X:
00:24:34.544   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3
00:24:34.544   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:34.544   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:34.544   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:34.544   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:34.544   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:34.544   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:24:34.544   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:34.544   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:34.544   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:34.544    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:34.544    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:34.544    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:34.544    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:34.544    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:34.544    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:34.544    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:34.544    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:34.544    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:34.544    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:34.544    04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:34.544   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:34.544   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:34.544   04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:34.802  nvme0n1
00:24:34.802   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:34.802    04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:34.802    04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:34.802    04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:34.802    04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:34.802    04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:34.802   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:34.802   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:34.802   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:34.802   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:34.802   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:34.802   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:34.802   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4
00:24:34.802   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:34.802   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:34.802   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:34.802   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:34.802   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=:
00:24:34.802   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:34.802   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:34.802   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:24:34.802   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=:
00:24:34.802   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:34.802   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4
00:24:34.802   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:34.802   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:34.802   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:34.802   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:24:34.802   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:34.802   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:24:34.802   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:34.802   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:34.802   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:34.802    04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:34.802    04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:34.802    04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:34.802    04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:34.802    04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:34.802    04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:34.802    04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:34.802    04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:34.802    04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:34.802    04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:34.802    04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:34.802   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:34.802   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:34.802   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:35.060  nvme0n1
00:24:35.060   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:35.060    04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:35.060    04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:35.060    04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:35.060    04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:35.060    04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:35.060   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:35.060   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:35.060   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:35.060   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:35.319   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:35.319   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:35.319   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:35.319   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0
00:24:35.319   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:35.319   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:35.319   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:35.319   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:35.319   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15:
00:24:35.319   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=:
00:24:35.319   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:35.319   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:35.319   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15:
00:24:35.319   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: ]]
00:24:35.319   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=:
00:24:35.319   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0
00:24:35.319   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:35.319   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:35.319   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:35.319   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:35.319   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:35.319   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:24:35.319   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:35.319   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:35.319   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:35.319    04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:35.319    04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:35.319    04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:35.319    04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:35.319    04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:35.319    04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:35.319    04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:35.319    04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:35.319    04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:35.319    04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:35.319    04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:35.319   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:35.319   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:35.319   04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:35.578  nvme0n1
00:24:35.578   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:35.578    04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:35.578    04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:35.578    04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:35.578    04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:35.837    04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:35.837   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:35.837   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:35.837   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:35.837   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:35.837   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:35.837   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:35.837   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1
00:24:35.837   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:35.837   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:35.837   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:35.837   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:35.837   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==:
00:24:35.837   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==:
00:24:35.837   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:35.837   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:35.837   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==:
00:24:35.837   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: ]]
00:24:35.837   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==:
00:24:35.837   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1
00:24:35.837   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:35.837   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:35.837   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:35.837   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:35.837   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:35.837   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:24:35.837   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:35.837   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:35.837   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:35.837    04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:35.837    04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:35.837    04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:35.837    04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:35.837    04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:35.837    04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:35.837    04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:35.837    04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:35.837    04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:35.837    04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:35.837    04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:35.837   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:35.837   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:35.837   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:36.403  nvme0n1
00:24:36.403   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:36.403    04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:36.403    04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:36.403    04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:36.403    04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:36.403    04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:36.403   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:36.403   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:36.403   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:36.403   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:36.403   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:36.403   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:36.403   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2
00:24:36.403   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:36.403   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:36.403   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:36.403   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:36.403   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF:
00:24:36.403   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo:
00:24:36.403   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:36.403   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:36.403   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF:
00:24:36.403   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: ]]
00:24:36.403   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo:
00:24:36.403   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2
00:24:36.403   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:36.403   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:36.403   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:36.403   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:36.403   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:36.403   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:24:36.403   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:36.403   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:36.403   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:36.403    04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:36.403    04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:36.403    04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:36.403    04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:36.403    04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:36.403    04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:36.403    04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:36.403    04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:36.403    04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:36.403    04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:36.403    04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:36.403   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:36.403   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:36.403   04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:37.009  nvme0n1
00:24:37.009   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:37.009    04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:37.009    04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:37.009    04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:37.009    04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:37.009    04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:37.009   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:37.009   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:37.009   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:37.009   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:37.009   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:37.009   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:37.009   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3
00:24:37.009   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:37.009   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:37.009   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:37.009   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:37.009   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==:
00:24:37.009   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X:
00:24:37.009   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:37.009   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:37.009   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==:
00:24:37.009   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: ]]
00:24:37.009   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X:
00:24:37.009   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3
00:24:37.009   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:37.009   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:37.009   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:37.009   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:37.009   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:37.009   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:24:37.009   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:37.009   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:37.009   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:37.009    04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:37.009    04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:37.009    04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:37.009    04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:37.009    04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:37.009    04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:37.009    04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:37.009    04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:37.009    04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:37.009    04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:37.009    04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:37.009   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:37.009   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:37.009   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:37.576  nvme0n1
00:24:37.576   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:37.576    04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:37.576    04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:37.576    04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:37.576    04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:37.576    04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:37.576   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:37.576   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:37.576   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:37.576   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:37.576   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:37.576   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:37.576   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4
00:24:37.576   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:37.576   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:37.576   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:37.576   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:37.576   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=:
00:24:37.576   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:37.576   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:37.576   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:37.576   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=:
00:24:37.576   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:37.576   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4
00:24:37.576   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:37.576   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:37.576   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:37.576   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:24:37.576   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:37.576   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:24:37.576   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:37.576   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:37.576   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:37.576    04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:37.576    04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:37.576    04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:37.576    04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:37.576    04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:37.576    04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:37.576    04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:37.576    04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:37.576    04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:37.576    04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:37.576    04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:37.576   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:37.576   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:37.576   04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:38.143  nvme0n1
00:24:38.143   04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:38.143    04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:38.143    04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:38.143    04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:38.143    04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:38.143    04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:38.143   04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:38.143   04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:38.143   04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:38.143   04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:38.143   04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:38.143   04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:38.143   04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:38.143   04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0
00:24:38.143   04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:38.143   04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:38.143   04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:38.143   04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:38.143   04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15:
00:24:38.143   04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=:
00:24:38.143   04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:38.143   04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:38.143   04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15:
00:24:38.143   04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: ]]
00:24:38.143   04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=:
00:24:38.143   04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0
00:24:38.143   04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:38.143   04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:38.143   04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:38.143   04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:38.143   04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:38.143   04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:24:38.143   04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:38.143   04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:38.143   04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:38.143    04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:38.143    04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:38.143    04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:38.143    04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:38.143    04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:38.143    04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:38.143    04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:38.143    04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:38.143    04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:38.143    04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:38.143    04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:38.143   04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:38.143   04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:38.143   04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:39.077  nvme0n1
00:24:39.077   04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:39.077    04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:39.077    04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:39.077    04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:39.077    04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:39.077    04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:39.077   04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:39.077   04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:39.077   04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:39.077   04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:39.077   04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:39.077   04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:39.077   04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1
00:24:39.077   04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:39.077   04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:39.077   04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:39.077   04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:39.077   04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==:
00:24:39.077   04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==:
00:24:39.077   04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:39.077   04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:39.077   04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==:
00:24:39.077   04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: ]]
00:24:39.077   04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==:
00:24:39.077   04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1
00:24:39.077   04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:39.077   04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:39.077   04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:39.077   04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:39.077   04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:39.077   04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:24:39.078   04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:39.078   04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:39.078   04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:39.078    04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:39.078    04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:39.078    04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:39.078    04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:39.078    04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:39.078    04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:39.078    04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:39.078    04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:39.078    04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:39.078    04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:39.078    04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:39.078   04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:39.078   04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:39.078   04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:40.011  nvme0n1
00:24:40.011   04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:40.011    04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:40.011    04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:40.011    04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:40.011    04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:40.011    04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:40.011   04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:40.011   04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:40.011   04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:40.011   04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:40.011   04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:40.011   04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:40.011   04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2
00:24:40.011   04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:40.011   04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:40.011   04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:40.011   04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:40.011   04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF:
00:24:40.011   04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo:
00:24:40.011   04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:40.011   04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:40.011   04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF:
00:24:40.011   04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: ]]
00:24:40.011   04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo:
00:24:40.011   04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2
00:24:40.011   04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:40.011   04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:40.011   04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:40.011   04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:40.011   04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:40.011   04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:24:40.011   04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:40.011   04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:40.011   04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:40.011    04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:40.011    04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:40.011    04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:40.011    04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:40.011    04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:40.011    04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:40.011    04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:40.011    04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:40.012    04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:40.012    04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:40.012    04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:40.012   04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:40.012   04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:40.012   04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:40.946  nvme0n1
00:24:40.946   04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:40.946    04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:40.946    04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:40.946    04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:40.946    04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:40.946    04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:40.946   04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:40.946   04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:40.946   04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:40.946   04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:40.946   04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:40.946   04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:40.946   04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3
00:24:40.946   04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:40.946   04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:40.946   04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:40.946   04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:40.946   04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==:
00:24:40.946   04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X:
00:24:40.946   04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:40.946   04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:40.946   04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==:
00:24:40.946   04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: ]]
00:24:40.946   04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X:
00:24:40.946   04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3
00:24:40.946   04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:40.946   04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:40.946   04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:40.946   04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:40.946   04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:40.946   04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:24:40.946   04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:40.946   04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:40.946   04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:40.946    04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:40.946    04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:40.946    04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:40.946    04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:40.946    04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:40.946    04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:40.946    04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:40.946    04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:40.946    04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:40.946    04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:40.946    04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:40.946   04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:40.946   04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:40.946   04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:41.879  nvme0n1
00:24:41.879   04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:41.879    04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:41.879    04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:41.879    04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:41.879    04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:41.879    04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:41.879   04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:41.879   04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:41.879   04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:41.879   04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:41.879   04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:41.880   04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:41.880   04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4
00:24:41.880   04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:41.880   04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:41.880   04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:41.880   04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:41.880   04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=:
00:24:41.880   04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:41.880   04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:41.880   04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:41.880   04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=:
00:24:41.880   04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:41.880   04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4
00:24:41.880   04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:41.880   04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:41.880   04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:41.880   04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:24:41.880   04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:41.880   04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:24:41.880   04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:41.880   04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:41.880   04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:41.880    04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:41.880    04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:41.880    04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:41.880    04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:41.880    04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:41.880    04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:41.880    04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:41.880    04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:41.880    04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:41.880    04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:41.880    04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:41.880   04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:41.880   04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:41.880   04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:42.814  nvme0n1
00:24:42.814   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:42.814    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:42.814    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:42.814    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:42.814    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:42.814    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:42.814   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:42.814   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:42.814   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:42.814   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:42.814   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
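The cycle that just completed (set key on the target, reconfigure the host's DH-CHAP options, attach, verify the controller name, detach) repeats for every digest/dhgroup/keyid combination. A minimal sketch of how the RPC command lines are assembled for the combinations visible in this excerpt — `build_cmds` and its inputs are illustrative names, not part of `host/auth.sh`:

```python
# Hypothetical reconstruction of the per-iteration RPC commands that the
# host/auth.sh sweep issues, based only on what this log excerpt shows:
# digests sha384/sha512, dhgroups ffdhe8192/ffdhe2048, keyids 0..4.
TARGET = ("-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 "
          "-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0")

def build_cmds(digest: str, dhgroup: str, keyid: int) -> tuple[str, str]:
    # First RPC restricts the host to one digest and one DH group...
    set_opts = (f"bdev_nvme_set_options "
                f"--dhchap-digests {digest} --dhchap-dhgroups {dhgroup}")
    # ...then the attach supplies the host key, plus the controller key
    # when one exists for this keyid (in this log, keyid 4 has ckey='').
    attach = f"bdev_nvme_attach_controller {TARGET} --dhchap-key key{keyid}"
    if keyid != 4:
        attach += f" --dhchap-ctrlr-key ckey{keyid}"
    return set_opts, attach

# Example: the sha384/ffdhe8192/keyid=2 iteration seen above.
opts, attach = build_cmds("sha384", "ffdhe8192", 2)
```

Each pair corresponds to the `auth.sh@60` and `auth.sh@61` lines in the log; success is then checked by matching `bdev_nvme_get_controllers` output against `nvme0` before detaching.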
00:24:42.814   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:24:42.814   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:42.814   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:42.814   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0
00:24:42.814   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:42.814   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:24:42.814   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:42.814   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15:
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=:
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15:
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: ]]
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=:
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:42.815    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:42.815    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:42.815    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:42.815    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:42.815    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:42.815    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:42.815    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:42.815    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:42.815    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:42.815    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:42.815    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:42.815  nvme0n1
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:42.815    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:42.815    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:42.815    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:42.815    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:42.815    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
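The `DHHC-1:<hh>:<base64>:` strings echoed throughout these iterations are NVMe DH-HMAC-CHAP secrets. A small sketch of how one can be unpacked — assuming the nvme-cli convention that the base64 payload is the raw key followed by a 4-byte CRC-32 trailer, which is not verified here:

```python
import base64

def split_dhchap_secret(secret: str) -> tuple[bytes, bytes]:
    """Split a DHHC-1 secret into (key bytes, 4-byte trailer).

    Assumption: per nvme-cli's gen-dhchap-key, the base64 payload is the
    key followed by a CRC-32 of the key; the <hh> field indicates the
    transformation hash (01=SHA-256/32B key, 02=SHA-384/48B, 03=SHA-512/64B,
    00=no transform). The CRC itself is not checked in this sketch.
    """
    prefix, _klass, b64, _empty = secret.split(":")
    assert prefix == "DHHC-1"
    raw = base64.b64decode(b64 + "=" * (-len(b64) % 4))  # re-pad if needed
    return raw[:-4], raw[-4:]

# Example: the keyid=2 host key from this log (class 01 -> 32-byte key).
key, trailer = split_dhchap_secret(
    "DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF:")
```

This explains why the class-01 secrets in the log decode to 36 bytes: a 32-byte key plus the 4-byte trailer.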
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==:
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==:
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==:
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: ]]
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==:
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:42.815    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:42.815    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:42.815    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:42.815    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:42.815    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:42.815    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:42.815    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:42.815    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:42.815    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:42.815    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:42.815    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:42.815   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:43.073  nvme0n1
00:24:43.073   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.074    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:43.074    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.074    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:43.074    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:43.074    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.074   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:43.074   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:43.074   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.074   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:43.074   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.074   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:43.074   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2
00:24:43.074   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:43.074   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:24:43.074   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:43.074   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:43.074   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF:
00:24:43.074   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo:
00:24:43.074   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:24:43.074   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:43.074   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF:
00:24:43.074   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: ]]
00:24:43.074   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo:
00:24:43.074   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2
00:24:43.074   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:43.074   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:24:43.074   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:24:43.074   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:43.074   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:43.074   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:24:43.074   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.074   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:43.074   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.074    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:43.074    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:43.074    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:43.074    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:43.074    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:43.074    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:43.074    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:43.074    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:43.074    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:43.074    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:43.074    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:43.074   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:43.074   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.074   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:43.332  nvme0n1
00:24:43.332   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.332    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:43.332    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.332    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:43.332    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:43.332    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.332   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:43.332   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:43.332   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.332   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:43.332   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.332   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:43.332   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:24:43.332   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:43.332   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:24:43.332   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:43.333   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:43.333   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==:
00:24:43.333   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X:
00:24:43.333   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:24:43.333   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:43.333   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==:
00:24:43.333   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: ]]
00:24:43.333   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X:
00:24:43.333   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3
00:24:43.333   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:43.333   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:24:43.333   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:24:43.333   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:43.333   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:43.333   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:24:43.333   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.333   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:43.333   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.333    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:43.333    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:43.333    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:43.333    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:43.333    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:43.333    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:43.333    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:43.333    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:43.333    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:43.333    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:43.333    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:43.333   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:43.333   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.333   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:43.591  nvme0n1
00:24:43.591   04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.591    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:43.591    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.591    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:43.591    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:43.591    04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.591   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:43.591   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:43.591   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.591   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:43.591   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.591   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:43.591   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:24:43.591   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:43.591   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:24:43.591   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:43.591   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:43.591   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=:
00:24:43.591   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:43.591   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:24:43.591   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:43.591   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=:
00:24:43.591   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:43.591   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:24:43.591   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:43.591   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:24:43.591   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:24:43.591   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:24:43.591   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:43.591   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:24:43.591   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.591   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:43.591   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.591    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:43.591    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:43.591    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:43.591    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:43.591    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:43.591    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:43.591    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:43.591    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:43.591    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:43.591    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:43.591    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:43.591   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:43.591   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.591   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:43.850  nvme0n1
00:24:43.850   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.850    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:43.850    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.850    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:43.850    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:43.850    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.850   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:43.850   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:43.850   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.850   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:43.850   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.850   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:43.850   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:43.850   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:24:43.850   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:43.850   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:24:43.850   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:43.850   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:43.850   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15:
00:24:43.850   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=:
00:24:43.850   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:24:43.850   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:43.850   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15:
00:24:43.850   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: ]]
00:24:43.850   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=:
00:24:43.850   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0
00:24:43.850   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:43.850   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:24:43.850   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:43.850   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:43.850   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:43.850   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:24:43.850   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.850   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:43.850   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.850    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:43.850    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:43.850    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:43.850    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:43.850    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:43.850    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:43.850    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:43.850    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:43.850    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:43.850    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:43.850    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:43.850   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:43.850   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.850   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:44.109  nvme0n1
00:24:44.109   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:44.109    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:44.109    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:44.109    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:44.109    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:44.109    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:44.109   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:44.109   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:44.109   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:44.109   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:44.109   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:44.109   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:44.109   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:24:44.109   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:44.109   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:24:44.109   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:44.109   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:44.109   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==:
00:24:44.109   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==:
00:24:44.109   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:24:44.109   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:44.109   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==:
00:24:44.109   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: ]]
00:24:44.110   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==:
00:24:44.110   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1
00:24:44.110   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:44.110   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:24:44.110   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:44.110   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:44.110   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:44.110   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:24:44.110   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:44.110   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:44.110   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:44.110    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:44.110    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:44.110    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:44.110    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:44.110    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:44.110    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:44.110    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:44.110    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:44.110    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:44.110    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:44.110    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:44.110   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:44.110   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:44.110   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:44.368  nvme0n1
00:24:44.368   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:44.368    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:44.368    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:44.368    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:44.368    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:44.368    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:44.368   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:44.368   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:44.368   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:44.369   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:44.369   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:44.369   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:44.369   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2
00:24:44.369   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:44.369   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:24:44.369   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:44.369   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:44.369   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF:
00:24:44.369   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo:
00:24:44.369   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:24:44.369   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:44.369   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF:
00:24:44.369   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: ]]
00:24:44.369   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo:
00:24:44.369   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2
00:24:44.369   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:44.369   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:24:44.369   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:44.369   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:44.369   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:44.369   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:24:44.369   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:44.369   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:44.369   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:44.369    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:44.369    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:44.369    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:44.369    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:44.369    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:44.369    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:44.369    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:44.369    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:44.369    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:44.369    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:44.369    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:44.369   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:44.369   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:44.369   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:44.628  nvme0n1
00:24:44.628   04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:44.628    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:44.628    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:44.628    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:44.628    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:44.628    04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:44.628   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:44.628   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:44.628   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:44.628   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:44.628   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:44.628   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:44.628   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3
00:24:44.628   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:44.628   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:24:44.628   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:44.628   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:44.628   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==:
00:24:44.628   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X:
00:24:44.628   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:24:44.628   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:44.628   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==:
00:24:44.628   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: ]]
00:24:44.628   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X:
00:24:44.628   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3
00:24:44.628   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:44.628   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:24:44.628   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:44.628   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:44.628   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:44.628   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:24:44.628   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:44.628   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:44.628   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:44.628    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:44.628    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:44.628    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:44.628    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:44.628    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:44.628    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:44.628    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:44.628    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:44.628    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:44.628    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:44.628    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:44.628   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:44.628   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:44.628   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:44.887  nvme0n1
00:24:44.887   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:44.887    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:44.887    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:44.887    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:44.887    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:44.887    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:44.887   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:44.887   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:44.887   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:44.887   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:44.887   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:44.887   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:44.887   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4
00:24:44.887   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:44.887   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:24:44.887   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:44.887   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:44.887   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=:
00:24:44.887   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:44.887   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:24:44.887   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:44.887   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=:
00:24:44.887   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:44.887   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4
00:24:44.887   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:44.887   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:24:44.887   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:44.887   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:24:44.887   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:44.887   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:24:44.887   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:44.887   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:44.887   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:44.887    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:44.887    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:44.887    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:44.887    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:44.887    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:44.887    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:44.887    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:44.887    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:44.887    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:44.887    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:44.887    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:44.887   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:44.887   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:44.887   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:45.144  nvme0n1
00:24:45.144   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:45.144    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:45.144    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:45.144    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:45.144    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:45.144    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:45.144   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:45.144   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:45.144   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:45.144   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:45.144   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:45.144   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:45.144   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:45.144   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0
00:24:45.144   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:45.144   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:24:45.144   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:45.144   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:45.144   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15:
00:24:45.144   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=:
00:24:45.144   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:24:45.144   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:24:45.144   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15:
00:24:45.144   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: ]]
00:24:45.144   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=:
00:24:45.144   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0
00:24:45.144   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:45.144   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:24:45.144   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:45.144   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:45.144   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:45.144   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:24:45.144   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:45.144   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:45.144   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:45.144    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:45.144    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:45.144    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:45.144    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:45.144    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:45.144    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:45.144    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:45.144    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:45.144    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:45.144    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:45.144    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:45.144   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:45.144   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:45.144   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:45.402  nvme0n1
00:24:45.402   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:45.402    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:45.402    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:45.402    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:45.402    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:45.402    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:45.402   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:45.402   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:45.402   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:45.402   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:45.402   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:45.402   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:45.402   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1
00:24:45.402   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:45.402   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:24:45.402   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:45.402   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:45.402   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==:
00:24:45.402   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==:
00:24:45.402   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:24:45.402   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:24:45.402   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==:
00:24:45.402   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: ]]
00:24:45.402   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==:
00:24:45.402   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1
00:24:45.402   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:45.402   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:24:45.402   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:45.402   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:45.402   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:45.402   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:24:45.402   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:45.402   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:45.402   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:45.402    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:45.402    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:45.402    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:45.402    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:45.402    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:45.402    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:45.402    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:45.403    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:45.403    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:45.403    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:45.403    04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:45.403   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:45.403   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:45.403   04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:45.659  nvme0n1
00:24:45.659   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:45.659    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:45.659    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:45.659    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:45.659    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:45.659    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:45.659   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:45.659   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:45.659   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:45.659   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:45.659   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:45.659   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:45.659   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2
00:24:45.659   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:45.659   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:24:45.660   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:45.660   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:45.660   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF:
00:24:45.660   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo:
00:24:45.660   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:24:45.660   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:24:45.660   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF:
00:24:45.660   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: ]]
00:24:45.660   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo:
00:24:45.660   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2
00:24:45.660   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:45.660   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:24:45.660   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:45.660   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:45.660   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:45.660   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:24:45.660   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:45.660   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:45.660   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:45.917    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:45.917    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:45.917    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:45.917    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:45.917    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:45.917    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:45.917    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:45.917    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:45.917    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:45.917    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:45.917    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:45.917   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:45.917   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:45.917   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:46.174  nvme0n1
00:24:46.174   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:46.174    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:46.174    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.174    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:46.174    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:46.174    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:46.174   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:46.174   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:46.174   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.174   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:46.174   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:46.174   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:46.174   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3
00:24:46.174   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:46.174   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:24:46.174   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:46.174   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:46.174   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==:
00:24:46.174   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X:
00:24:46.174   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:24:46.174   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:24:46.174   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==:
00:24:46.174   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: ]]
00:24:46.174   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X:
00:24:46.174   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3
00:24:46.174   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:46.174   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:24:46.174   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:46.174   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:46.174   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:46.174   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:24:46.174   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.174   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:46.174   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:46.174    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:46.174    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:46.174    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:46.174    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:46.174    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:46.174    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:46.174    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:46.174    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:46.174    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:46.174    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:46.174    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:46.174   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:46.174   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.174   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:46.431  nvme0n1
00:24:46.431   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:46.431    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:46.431    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.431    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:46.431    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:46.431    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:46.431   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:46.431   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:46.431   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.431   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:46.431   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:46.431   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:46.431   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4
00:24:46.431   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:46.431   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:24:46.431   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:46.431   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:46.432   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=:
00:24:46.432   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:46.432   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:24:46.432   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:24:46.432   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=:
00:24:46.432   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:46.432   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4
00:24:46.432   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:46.432   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:24:46.432   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:46.432   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:24:46.432   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:46.432   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:24:46.432   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.432   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:46.432   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:46.432    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:46.432    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:46.432    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:46.432    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:46.432    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:46.432    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:46.432    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:46.432    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:46.432    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:46.432    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:46.432    04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:46.432   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:46.432   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.432   04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:46.689  nvme0n1
00:24:46.689   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:46.689    04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:46.689    04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.689    04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:46.689    04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:46.689    04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:46.689   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:46.689   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:46.689   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.689   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:46.689   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:46.689   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:46.689   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:46.689   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0
00:24:46.689   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:46.689   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:24:46.689   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:46.689   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:46.689   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15:
00:24:46.689   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=:
00:24:46.689   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:24:46.689   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:46.689   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15:
00:24:46.689   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: ]]
00:24:46.689   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=:
00:24:46.689   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0
00:24:46.689   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:46.689   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:24:46.689   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:46.689   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:46.689   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:46.689   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:24:46.689   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.689   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:46.689   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:46.689    04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:46.689    04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:46.689    04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:46.689    04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:46.689    04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:46.689    04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:46.689    04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:46.689    04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:46.689    04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:46.689    04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:46.689    04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:46.689   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:46.689   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.689   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:47.253  nvme0n1
00:24:47.253   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:47.253    04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:47.253    04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:47.253    04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:47.253    04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:47.253    04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:47.253   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:47.253   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:47.253   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:47.253   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:47.253   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:47.253   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:47.253   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1
00:24:47.253   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:47.253   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:24:47.253   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:47.253   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:47.253   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==:
00:24:47.253   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==:
00:24:47.253   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:24:47.253   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:47.253   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==:
00:24:47.253   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: ]]
00:24:47.253   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==:
00:24:47.253   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1
00:24:47.253   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:47.253   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:24:47.253   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:47.253   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:47.253   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:47.253   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:24:47.253   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:47.253   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:47.253   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:47.253    04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:47.253    04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:47.253    04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:47.253    04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:47.253    04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:47.253    04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:47.253    04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:47.253    04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:47.253    04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:47.253    04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:47.253    04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:47.253   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:47.253   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:47.253   04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:47.819  nvme0n1
00:24:47.819   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:47.819    04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:47.819    04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:47.819    04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:47.819    04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:47.819    04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:47.819   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:47.819   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:47.819   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:47.819   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:47.819   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:47.819   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:47.819   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2
00:24:47.819   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:47.819   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:24:47.819   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:47.819   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:47.819   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF:
00:24:47.819   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo:
00:24:47.819   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:24:47.819   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:47.819   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF:
00:24:47.819   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: ]]
00:24:47.819   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo:
00:24:47.819   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2
00:24:47.819   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:47.819   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:24:47.819   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:47.819   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:47.819   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:47.819   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:24:47.819   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:47.819   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:47.819   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:47.819    04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:47.819    04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:47.819    04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:47.819    04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:47.819    04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:47.819    04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:47.819    04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:47.819    04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:47.819    04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:47.819    04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:47.819    04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:47.819   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:47.819   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:47.819   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:48.385  nvme0n1
00:24:48.385   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:48.385    04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:48.385    04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:48.385    04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:48.385    04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:48.385    04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:48.385   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:48.385   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:48.385   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:48.385   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:48.385   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:48.385   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:48.385   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3
00:24:48.385   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:48.385   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:24:48.385   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:48.385   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:48.385   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==:
00:24:48.385   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X:
00:24:48.385   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:24:48.385   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:48.385   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==:
00:24:48.385   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: ]]
00:24:48.385   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X:
00:24:48.385   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3
00:24:48.385   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:48.385   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:24:48.385   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:48.385   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:48.385   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:48.385   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:24:48.385   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:48.385   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:48.385   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:48.385    04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:48.385    04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:48.385    04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:48.385    04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:48.385    04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:48.385    04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:48.385    04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:48.385    04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:48.385    04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:48.385    04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:48.385    04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:48.385   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:48.385   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:48.385   04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:48.950  nvme0n1
00:24:48.950   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:48.950    04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:48.950    04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:48.950    04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:48.950    04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:48.950    04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:48.950   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:48.950   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:48.950   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:48.950   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:48.950   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:48.950   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:48.950   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4
00:24:48.950   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:48.950   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:24:48.950   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:48.950   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:48.950   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=:
00:24:48.950   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:48.950   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:24:48.950   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:48.950   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=:
00:24:48.950   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:48.950   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4
00:24:48.950   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:48.950   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:24:48.950   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:48.950   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:24:48.950   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:48.950   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:24:48.950   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:48.950   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:48.950   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:48.950    04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:48.950    04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:48.951    04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:48.951    04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:48.951    04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:48.951    04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:48.951    04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:48.951    04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:48.951    04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:48.951    04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:48.951    04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:48.951   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:48.951   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:48.951   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:49.515  nvme0n1
00:24:49.515   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:49.515    04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:49.515    04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:49.515    04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:49.515    04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:49.515    04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:49.515   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:49.515   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:49.515   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:49.515   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:49.515   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:49.515   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:49.515   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:49.515   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0
00:24:49.515   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:49.515   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:24:49.515   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:49.515   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:49.515   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15:
00:24:49.515   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=:
00:24:49.515   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:24:49.515   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:49.515   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15:
00:24:49.515   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: ]]
00:24:49.516   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=:
00:24:49.516   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0
00:24:49.516   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:49.516   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:24:49.516   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:49.516   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:49.516   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:49.516   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:24:49.516   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:49.516   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:49.516   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:49.516    04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:49.516    04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:49.516    04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:49.516    04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:49.516    04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:49.516    04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:49.516    04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:49.516    04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:49.516    04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:49.516    04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:49.516    04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
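The `get_main_ns_ip` expansion traced here just maps the transport to the name of an IP environment variable, dereferences it, and echoes the result. A reduced sketch of that selection logic (`TEST_TRANSPORT` as the selector variable is an assumption inferred from the `[[ -z tcp ]]` check; the real helper in nvmf/common.sh has additional fallbacks):

```shell
# Sketch of get_main_ns_ip as reconstructed from the trace: pick the name
# of the IP variable by transport, then dereference it indirectly.
get_main_ns_ip() {
    local ip var
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP
        [tcp]=NVMF_INITIATOR_IP
    )
    var=${ip_candidates[$TEST_TRANSPORT]}
    [[ -n $var ]] && ip=${!var}   # indirect expansion: value of the named variable
    [[ -n $ip ]] && echo "$ip"
}

TEST_TRANSPORT=tcp
NVMF_INITIATOR_IP=10.0.0.1
get_main_ns_ip   # prints 10.0.0.1, matching the echo in the trace
```

With `tcp` selected, the function resolves `NVMF_INITIATOR_IP`, which is why every attach in this section targets `-a 10.0.0.1`.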
00:24:49.516   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:49.516   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:49.516   04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:50.449  nvme0n1
00:24:50.449   04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:50.449    04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:50.449    04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:50.449    04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:50.449    04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:50.449    04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:50.449   04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:50.449   04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:50.449   04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:50.449   04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:50.449   04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
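The `DHHC-1:<id>:<base64>:` strings echoed into the target configuration follow the NVMe DH-HMAC-CHAP secret representation: a two-digit field describing the secret (`00` = cleartext, with `01`/`02`/`03` corresponding to SHA-256/384/512-sized secrets, as with key 4's `03` entry later in the loop) and a base64 payload of the secret followed by a 4-byte trailer (CRC-32 of the secret, little-endian, per the nvme-cli/TP 8006 key format). A parsing sketch using key 0 from the trace (the `parse_dhchap_key` helper is illustrative, not part of the test scripts):

```python
import base64
import struct
import zlib

def parse_dhchap_key(key: str):
    """Split a DHHC-1 secret into (hash_id, secret, crc) -- hypothetical helper."""
    prefix, hash_id, payload, _ = key.split(":")
    assert prefix == "DHHC-1"
    blob = base64.b64decode(payload)
    # Last 4 bytes are the little-endian CRC-32 trailer; the rest is the secret.
    secret, (crc,) = blob[:-4], struct.unpack("<I", blob[-4:])
    return hash_id, secret, crc

# Key 0 as echoed in the trace above
hash_id, secret, crc = parse_dhchap_key(
    "DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15:"
)
print(hash_id, len(secret))        # 00 32
print(secret.decode())             # 8f22a016cccb9161a1688189662d3f79
print(crc == zlib.crc32(secret))   # True for a well-formed key trailer
```

Note the payload of a `00`-type key here decodes to an ASCII hex string, which is how these test secrets were generated.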
00:24:50.449   04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:50.449   04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1
00:24:50.449   04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:50.449   04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:24:50.449   04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:50.449   04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:50.449   04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==:
00:24:50.449   04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==:
00:24:50.449   04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:24:50.449   04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:50.449   04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==:
00:24:50.449   04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: ]]
00:24:50.449   04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==:
00:24:50.449   04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1
00:24:50.449   04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:50.449   04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:24:50.449   04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:50.449   04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:50.449   04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:50.449   04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:24:50.449   04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:50.449   04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:50.449   04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:50.449    04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:50.449    04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:50.449    04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:50.449    04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:50.449    04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:50.449    04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:50.449    04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:50.449    04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:50.449    04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:50.449    04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:50.449    04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:50.449   04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:50.449   04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:50.449   04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:51.383  nvme0n1
00:24:51.383   04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:51.383    04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:51.383    04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:51.383    04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:51.383    04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:51.383    04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:51.383   04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:51.383   04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:51.383   04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:51.383   04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:51.383   04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:51.383   04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:51.383   04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2
00:24:51.383   04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:51.383   04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:24:51.383   04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:51.383   04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:51.383   04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF:
00:24:51.383   04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo:
00:24:51.383   04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:24:51.383   04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:51.383   04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF:
00:24:51.383   04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: ]]
00:24:51.383   04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo:
00:24:51.383   04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2
00:24:51.383   04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:51.383   04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:24:51.383   04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:51.383   04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:51.383   04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:51.383   04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:24:51.383   04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:51.383   04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:51.383   04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:51.383    04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:51.383    04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:51.383    04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:51.383    04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:51.383    04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:51.383    04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:51.383    04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:51.383    04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:51.383    04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:51.383    04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:51.383    04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:51.383   04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:51.383   04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:51.383   04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:52.317  nvme0n1
00:24:52.317   04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:52.317    04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:52.317    04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:52.317    04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:52.317    04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:52.317    04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:52.317   04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:52.317   04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:52.317   04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:52.317   04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:52.317   04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:52.317   04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:52.317   04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3
00:24:52.317   04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:52.317   04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:24:52.317   04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:52.317   04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:52.317   04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==:
00:24:52.317   04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X:
00:24:52.317   04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:24:52.317   04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:52.317   04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==:
00:24:52.317   04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: ]]
00:24:52.317   04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X:
00:24:52.317   04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3
00:24:52.317   04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:52.317   04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:24:52.317   04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:52.317   04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:52.317   04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:52.317   04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:24:52.317   04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:52.317   04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:52.317   04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:52.317    04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:52.317    04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:52.317    04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:52.317    04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:52.317    04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:52.317    04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:52.317    04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:52.317    04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:52.317    04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:52.317    04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:52.317    04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:52.317   04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:52.317   04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:52.317   04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:53.251  nvme0n1
00:24:53.251   04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:53.251    04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:53.251    04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:53.251    04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:53.251    04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:53.251    04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:53.251   04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:53.251   04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:53.251   04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:53.251   04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:53.251   04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:53.251   04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:53.251   04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4
00:24:53.251   04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:53.251   04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:24:53.251   04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:53.251   04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:53.251   04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=:
00:24:53.251   04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:53.251   04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:24:53.251   04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:53.251   04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=:
00:24:53.251   04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:53.251   04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4
00:24:53.251   04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:53.251   04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:24:53.251   04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:53.251   04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:24:53.251   04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:53.251   04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:24:53.251   04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:53.251   04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:53.251   04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:53.251    04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:53.251    04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:53.251    04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:53.251    04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:53.251    04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:53.251    04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:53.251    04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:53.251    04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:53.251    04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:53.251    04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:53.251    04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:53.251   04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:53.251   04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:53.251   04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:54.186  nvme0n1
00:24:54.186   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:54.186    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:54.186    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:54.186    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:54.186    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:54.186    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:54.186   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:54.186   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:54.186   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:54.186   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:54.186   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
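Each iteration above follows the same shape: program key N into the kernel nvmet host entry, restrict the SPDK host to one digest/dhgroup pair via `bdev_nvme_set_options`, attach with `--dhchap-key keyN` (plus `--dhchap-ctrlr-key ckeyN` when a controller key exists, which is why key 4 attaches without one), confirm `nvme0` appears, and detach. A condensed sketch that prints the RPC sequence instead of invoking rpc.py (the rpc.py socket plumbing and nvmet configfs writes are omitted; the key-4 special case is hardcoded here, whereas the real script keys off an empty `ckeys[keyid]`):

```shell
# Condensed sketch of the connect_authenticate loop from host/auth.sh,
# echoing the RPC sequence rather than issuing it against a live target.
connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3 ckey=()
    # Key 4 has no controller key in the trace, so skip the ctrlr-key flag.
    (( keyid < 4 )) && ckey=(--dhchap-ctrlr-key "ckey$keyid")
    echo "bdev_nvme_set_options --dhchap-digests $digest --dhchap-dhgroups $dhgroup"
    echo "bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key$keyid ${ckey[*]}"
    echo "bdev_nvme_detach_controller nvme0"
}

for keyid in 0 1 2 3 4; do
    connect_authenticate sha512 ffdhe8192 "$keyid"
done
```

This is the last digest/dhgroup combination in the sweep; the section then switches to sha256/ffdhe2048 for the negative tests that follow.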
00:24:54.186   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:24:54.186   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:54.186   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:54.186   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:54.186   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:54.186   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==:
00:24:54.186   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==:
00:24:54.186   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:54.186   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:54.186   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==:
00:24:54.186   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: ]]
00:24:54.186   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==:
00:24:54.186   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:24:54.186   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:54.186   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:54.186   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:54.186    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip
00:24:54.186    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:54.186    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:54.186    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:54.186    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:54.186    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:54.186    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:54.186    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:54.186    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:54.186    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:54.186    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:54.186   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:24:54.186   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:24:54.186   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:24:54.186   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:24:54.186   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:54.186    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:24:54.186   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:54.186   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:24:54.186   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:54.186   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:54.186  request:
00:24:54.186  {
00:24:54.186  "name": "nvme0",
00:24:54.186  "trtype": "tcp",
00:24:54.186  "traddr": "10.0.0.1",
00:24:54.186  "adrfam": "ipv4",
00:24:54.186  "trsvcid": "4420",
00:24:54.186  "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:24:54.186  "hostnqn": "nqn.2024-02.io.spdk:host0",
00:24:54.186  "prchk_reftag": false,
00:24:54.186  "prchk_guard": false,
00:24:54.186  "hdgst": false,
00:24:54.186  "ddgst": false,
00:24:54.186  "allow_unrecognized_csi": false,
00:24:54.186  "method": "bdev_nvme_attach_controller",
00:24:54.186  "req_id": 1
00:24:54.186  }
00:24:54.186  Got JSON-RPC error response
00:24:54.186  response:
00:24:54.186  {
00:24:54.186  "code": -5,
00:24:54.186  "message": "Input/output error"
00:24:54.186  }
00:24:54.186   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:24:54.186   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:24:54.186   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:24:54.186   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:24:54.186   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:24:54.186    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers
00:24:54.186    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:54.186    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length
00:24:54.186    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:54.186    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:54.186   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 ))
00:24:54.186    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip
00:24:54.186    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:54.187    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:54.187    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:54.187    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:54.187    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:54.187    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:54.187    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:54.187    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:54.187    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:54.187    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:54.187   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:24:54.187   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:24:54.187   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:24:54.187   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:24:54.187   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:54.187    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:24:54.187   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:54.187   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:24:54.187   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:54.187   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:54.444  request:
00:24:54.444  {
00:24:54.444  "name": "nvme0",
00:24:54.444  "trtype": "tcp",
00:24:54.444  "traddr": "10.0.0.1",
00:24:54.444  "adrfam": "ipv4",
00:24:54.444  "trsvcid": "4420",
00:24:54.444  "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:24:54.444  "hostnqn": "nqn.2024-02.io.spdk:host0",
00:24:54.444  "prchk_reftag": false,
00:24:54.444  "prchk_guard": false,
00:24:54.444  "hdgst": false,
00:24:54.444  "ddgst": false,
00:24:54.444  "dhchap_key": "key2",
00:24:54.444  "allow_unrecognized_csi": false,
00:24:54.444  "method": "bdev_nvme_attach_controller",
00:24:54.444  "req_id": 1
00:24:54.444  }
00:24:54.444  Got JSON-RPC error response
00:24:54.444  response:
00:24:54.444  {
00:24:54.444  "code": -5,
00:24:54.444  "message": "Input/output error"
00:24:54.444  }
00:24:54.444   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:24:54.444   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:24:54.444   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:24:54.444   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:24:54.444   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:24:54.444    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers
00:24:54.444    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length
00:24:54.444    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:54.444    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:54.444    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:54.444   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 ))
00:24:54.444    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip
00:24:54.444    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:54.444    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:54.444    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:54.444    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:54.444    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:54.444    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:54.444    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:54.444    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:54.444    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:54.444    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:54.444   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:24:54.444   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:24:54.444   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:24:54.444   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:24:54.444   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:54.444    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:24:54.444   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:54.444   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:24:54.444   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:54.444   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:54.444  request:
00:24:54.444  {
00:24:54.444  "name": "nvme0",
00:24:54.444  "trtype": "tcp",
00:24:54.444  "traddr": "10.0.0.1",
00:24:54.444  "adrfam": "ipv4",
00:24:54.444  "trsvcid": "4420",
00:24:54.444  "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:24:54.444  "hostnqn": "nqn.2024-02.io.spdk:host0",
00:24:54.444  "prchk_reftag": false,
00:24:54.444  "prchk_guard": false,
00:24:54.444  "hdgst": false,
00:24:54.444  "ddgst": false,
00:24:54.444  "dhchap_key": "key1",
00:24:54.444  "dhchap_ctrlr_key": "ckey2",
00:24:54.444  "allow_unrecognized_csi": false,
00:24:54.444  "method": "bdev_nvme_attach_controller",
00:24:54.444  "req_id": 1
00:24:54.444  }
00:24:54.444  Got JSON-RPC error response
00:24:54.444  response:
00:24:54.444  {
00:24:54.444  "code": -5,
00:24:54.444  "message": "Input/output error"
00:24:54.444  }
00:24:54.444   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:24:54.444   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:24:54.444   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:24:54.445   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:24:54.445   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:24:54.445    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip
00:24:54.445    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:54.445    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:54.445    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:54.445    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:54.445    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:54.445    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:54.445    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:54.445    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:54.445    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:54.445    04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:54.445   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:24:54.445   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:54.445   04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:54.703  nvme0n1
00:24:54.703   04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:54.703   04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:24:54.703   04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:54.703   04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:54.703   04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:54.703   04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:54.703   04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF:
00:24:54.703   04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo:
00:24:54.703   04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:54.703   04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:54.703   04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF:
00:24:54.703   04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: ]]
00:24:54.703   04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo:
00:24:54.703   04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:54.703   04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:54.703   04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:54.703   04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:54.703    04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers
00:24:54.703    04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:54.703    04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:54.703    04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name'
00:24:54.703    04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:54.703   04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:54.703   04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:24:54.703   04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:24:54.703   04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:24:54.703   04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:24:54.703   04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:54.703    04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:24:54.703   04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:54.703   04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:24:54.703   04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:54.703   04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:54.703  request:
00:24:54.703  {
00:24:54.703  "name": "nvme0",
00:24:54.703  "dhchap_key": "key1",
00:24:54.703  "dhchap_ctrlr_key": "ckey2",
00:24:54.703  "method": "bdev_nvme_set_keys",
00:24:54.703  "req_id": 1
00:24:54.703  }
00:24:54.703  Got JSON-RPC error response
00:24:54.703  response:
00:24:54.703  {
00:24:54.703  "code": -13,
00:24:54.703  "message": "Permission denied"
00:24:54.703  }
00:24:54.703   04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:24:54.703   04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:24:54.703   04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:24:54.703   04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:24:54.703   04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:24:54.703    04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers
00:24:54.703    04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:54.703    04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length
00:24:54.703    04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:54.703    04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:54.703   04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 ))
00:24:54.703   04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s
00:24:56.076    04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers
00:24:56.076    04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length
00:24:56.076    04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:56.076    04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:56.076    04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:56.076   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 ))
00:24:56.076   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:24:56.076   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:56.076   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:56.076   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:56.076   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:56.076   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==:
00:24:56.076   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==:
00:24:56.076   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:56.076   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:56.076   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==:
00:24:56.076   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: ]]
00:24:56.076   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==:
00:24:56.076    04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip
00:24:56.076    04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:56.076    04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:56.076    04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:56.076    04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:56.076    04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:56.076    04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:56.076    04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:56.076    04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:56.076    04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:56.076    04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:56.076   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:24:56.076   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:56.076   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:56.076  nvme0n1
00:24:56.076   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:56.076   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:24:56.076   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:56.076   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:56.076   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:56.076   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:56.076   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF:
00:24:56.076   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo:
00:24:56.076   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:56.076   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:56.076   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF:
00:24:56.076   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: ]]
00:24:56.076   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo:
00:24:56.076   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:24:56.076   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:24:56.077   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:24:56.077   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:24:56.077   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:56.077    04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:24:56.077   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:56.077   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:24:56.077   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:56.077   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:56.077  request:
00:24:56.077  {
00:24:56.077  "name": "nvme0",
00:24:56.077  "dhchap_key": "key2",
00:24:56.077  "dhchap_ctrlr_key": "ckey1",
00:24:56.077  "method": "bdev_nvme_set_keys",
00:24:56.077  "req_id": 1
00:24:56.077  }
00:24:56.077  Got JSON-RPC error response
00:24:56.077  response:
00:24:56.077  {
00:24:56.077  "code": -13,
00:24:56.077  "message": "Permission denied"
00:24:56.077  }
00:24:56.077   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:24:56.077   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:24:56.077   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:24:56.077   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:24:56.077   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:24:56.077    04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length
00:24:56.077    04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers
00:24:56.077    04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:56.077    04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:56.077    04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:56.077   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 ))
00:24:56.077   04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s
00:24:57.009    04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers
00:24:57.009    04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length
00:24:57.009    04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:57.009    04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:57.009    04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:57.266   04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 ))
00:24:57.266   04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT
00:24:57.266   04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup
00:24:57.266   04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini
00:24:57.266   04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:57.266   04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync
00:24:57.266   04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:57.266   04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e
00:24:57.266   04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:57.266   04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:57.266  rmmod nvme_tcp
00:24:57.266  rmmod nvme_fabrics
00:24:57.266   04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:57.266   04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e
00:24:57.266   04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0
00:24:57.266   04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 324797 ']'
00:24:57.266   04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 324797
00:24:57.266   04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 324797 ']'
00:24:57.266   04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 324797
00:24:57.266    04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname
00:24:57.266   04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:57.266    04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 324797
00:24:57.266   04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:24:57.266   04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:24:57.266   04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 324797'
00:24:57.266  killing process with pid 324797
00:24:57.266   04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 324797
00:24:57.266   04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 324797
00:24:57.523   04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:24:57.523   04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:24:57.523   04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:24:57.523   04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr
00:24:57.523   04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save
00:24:57.523   04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:24:57.523   04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore
00:24:57.523   04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:24:57.523   04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns
00:24:57.523   04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:57.523   04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:57.523    04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:59.425   04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:59.425   04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:24:59.425   04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:24:59.425   04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target
00:24:59.425   04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]]
00:24:59.425   04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0
00:24:59.425   04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
00:24:59.425   04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:24:59.425   04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:24:59.425   04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:24:59.425   04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*)
00:24:59.425   04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet
00:24:59.425   04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:25:00.802  0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:25:00.802  0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:25:00.802  0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:25:00.802  0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:25:00.802  0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:25:00.802  0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:25:00.802  0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:25:00.802  0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:25:00.802  0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:25:00.802  0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:25:00.802  0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:25:00.802  0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:25:00.802  0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:25:00.802  0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:25:00.802  0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:25:00.802  0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:25:01.736  0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:25:01.994   04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.uc2 /tmp/spdk.key-null.L5W /tmp/spdk.key-sha256.N33 /tmp/spdk.key-sha384.du9 /tmp/spdk.key-sha512.3Ii /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log
00:25:01.994   04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:25:02.934  0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:25:02.934  0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:25:02.934  0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:25:02.934  0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:25:02.934  0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:25:02.934  0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:25:02.934  0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:25:02.934  0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:25:02.934  0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:25:02.934  0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:25:02.934  0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:25:02.934  0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:25:02.934  0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:25:02.934  0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:25:03.193  0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:25:03.193  0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:25:03.193  0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:25:03.193  
00:25:03.193  real	0m52.860s
00:25:03.193  user	0m50.363s
00:25:03.193  sys	0m6.168s
00:25:03.193   04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:03.193   04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:03.193  ************************************
00:25:03.193  END TEST nvmf_auth_host
00:25:03.193  ************************************
00:25:03.193   04:15:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]]
00:25:03.193   04:15:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp
00:25:03.193   04:15:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:25:03.193   04:15:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:03.193   04:15:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:03.193  ************************************
00:25:03.193  START TEST nvmf_digest
00:25:03.193  ************************************
00:25:03.193   04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp
00:25:03.193  * Looking for test storage...
00:25:03.193  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:25:03.193    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:25:03.193     04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version
00:25:03.193     04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-:
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-:
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<'
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 ))
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:03.452     04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1
00:25:03.452     04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1
00:25:03.452     04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:03.452     04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1
00:25:03.452     04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2
00:25:03.452     04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2
00:25:03.452     04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:03.452     04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:25:03.452  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:03.452  		--rc genhtml_branch_coverage=1
00:25:03.452  		--rc genhtml_function_coverage=1
00:25:03.452  		--rc genhtml_legend=1
00:25:03.452  		--rc geninfo_all_blocks=1
00:25:03.452  		--rc geninfo_unexecuted_blocks=1
00:25:03.452  		
00:25:03.452  		'
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:25:03.452  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:03.452  		--rc genhtml_branch_coverage=1
00:25:03.452  		--rc genhtml_function_coverage=1
00:25:03.452  		--rc genhtml_legend=1
00:25:03.452  		--rc geninfo_all_blocks=1
00:25:03.452  		--rc geninfo_unexecuted_blocks=1
00:25:03.452  		
00:25:03.452  		'
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:25:03.452  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:03.452  		--rc genhtml_branch_coverage=1
00:25:03.452  		--rc genhtml_function_coverage=1
00:25:03.452  		--rc genhtml_legend=1
00:25:03.452  		--rc geninfo_all_blocks=1
00:25:03.452  		--rc geninfo_unexecuted_blocks=1
00:25:03.452  		
00:25:03.452  		'
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:25:03.452  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:03.452  		--rc genhtml_branch_coverage=1
00:25:03.452  		--rc genhtml_function_coverage=1
00:25:03.452  		--rc genhtml_legend=1
00:25:03.452  		--rc geninfo_all_blocks=1
00:25:03.452  		--rc geninfo_unexecuted_blocks=1
00:25:03.452  		
00:25:03.452  		'
00:25:03.452   04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:25:03.452     04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:25:03.452     04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:25:03.452     04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob
00:25:03.452     04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:03.452     04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:03.452     04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:03.452      04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:03.452      04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:03.452      04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:03.452      04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH
00:25:03.452      04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0
00:25:03.452    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:25:03.453    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:25:03.453    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:25:03.453    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:03.453    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:03.453    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:25:03.453  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:25:03.453    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:25:03.453    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:25:03.453    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0
00:25:03.453   04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1
00:25:03.453   04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock
00:25:03.453   04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2
00:25:03.453   04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]]
00:25:03.453   04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit
00:25:03.453   04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:25:03.453   04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:25:03.453   04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs
00:25:03.453   04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no
00:25:03.453   04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns
00:25:03.453   04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:03.453   04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:03.453    04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:03.453   04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:25:03.453   04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:25:03.453   04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable
00:25:03.453   04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:25:05.986   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:25:05.986   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=()
00:25:05.986   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs
00:25:05.986   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=()
00:25:05.986   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:25:05.986   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=()
00:25:05.986   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers
00:25:05.986   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=()
00:25:05.986   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs
00:25:05.986   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=()
00:25:05.986   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810
00:25:05.986   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=()
00:25:05.986   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722
00:25:05.986   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=()
00:25:05.986   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx
00:25:05.986   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:25:05.986   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:25:05.986   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:25:05.986   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:25:05.986   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:25:05.986   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:25:05.986   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:25:05.986   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:25:05.986   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:25:05.986   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:25:05.986   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:25:05.986   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:25:05.986   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:25:05.986   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:25:05.986   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:25:05.986   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:25:05.986   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:25:05.986   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:25:05.986   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:25:05.986   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:25:05.987  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:25:05.987  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]]
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:25:05.987  Found net devices under 0000:0a:00.0: cvl_0_0
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]]
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:25:05.987  Found net devices under 0000:0a:00.1: cvl_0_1
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:25:05.987  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:05.987  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms
00:25:05.987  
00:25:05.987  --- 10.0.0.2 ping statistics ---
00:25:05.987  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:05.987  rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:05.987  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:05.987  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms
00:25:05.987  
00:25:05.987  --- 10.0.0.1 ping statistics ---
00:25:05.987  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:05.987  rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]]
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:05.987   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:25:05.987  ************************************
00:25:05.987  START TEST nvmf_digest_clean
00:25:05.988  ************************************
00:25:05.988   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest
00:25:05.988   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator
00:25:05.988   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]]
00:25:05.988   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false
00:25:05.988   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc")
00:25:05.988   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc
00:25:05.988   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:25:05.988   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:05.988   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:25:05.988   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=335291
00:25:05.988   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:25:05.988   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 335291
00:25:05.988   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 335291 ']'
00:25:05.988   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:05.988   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:05.988   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:05.988  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:05.988   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:05.988   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:25:05.988  [2024-12-09 04:15:34.287385] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:25:05.988  [2024-12-09 04:15:34.287461] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:05.988  [2024-12-09 04:15:34.357507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:05.988  [2024-12-09 04:15:34.409577] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:05.988  [2024-12-09 04:15:34.409640] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:05.988  [2024-12-09 04:15:34.409668] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:05.988  [2024-12-09 04:15:34.409688] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:05.988  [2024-12-09 04:15:34.409697] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:05.988  [2024-12-09 04:15:34.410268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:05.988   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:05.988   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0
00:25:05.988   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:25:05.988   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable
00:25:05.988   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:25:05.988   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:05.988   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]]
00:25:05.988   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config
00:25:05.988   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd
00:25:05.988   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:05.988   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:25:06.246  null0
00:25:06.246  [2024-12-09 04:15:34.649500] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:06.246  [2024-12-09 04:15:34.673784] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:06.246   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:06.246   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false
00:25:06.246   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:25:06.246   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:25:06.246   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread
00:25:06.246   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096
00:25:06.246   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128
00:25:06.246   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:25:06.246   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=335311
00:25:06.246   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 335311 /var/tmp/bperf.sock
00:25:06.246   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 335311 ']'
00:25:06.246   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:25:06.246   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:06.246   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:25:06.246  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:25:06.246   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:06.246   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:25:06.246   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:25:06.246  [2024-12-09 04:15:34.725480] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:25:06.246  [2024-12-09 04:15:34.725557] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid335311 ]
00:25:06.246  [2024-12-09 04:15:34.790419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:06.503  [2024-12-09 04:15:34.852227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:25:06.503   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:06.503   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0
00:25:06.503   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:25:06.503   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:25:06.503   04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:25:07.068   04:15:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:07.068   04:15:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:07.327  nvme0n1
00:25:07.327   04:15:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:25:07.327   04:15:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:07.327  Running I/O for 2 seconds...
00:25:09.630      18840.00 IOPS,    73.59 MiB/s
[2024-12-09T03:15:38.206Z]     18929.50 IOPS,    73.94 MiB/s
00:25:09.630                                                                                                  Latency(us)
00:25:09.630  
[2024-12-09T03:15:38.206Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:09.630  Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:25:09.630  	 nvme0n1             :       2.01   18934.54      73.96       0.00     0.00    6751.66    3179.71   14078.10
00:25:09.630  
[2024-12-09T03:15:38.206Z]  ===================================================================================================================
00:25:09.630  
[2024-12-09T03:15:38.206Z]  Total                       :              18934.54      73.96       0.00     0.00    6751.66    3179.71   14078.10
00:25:09.630  {
00:25:09.630    "results": [
00:25:09.630      {
00:25:09.630        "job": "nvme0n1",
00:25:09.630        "core_mask": "0x2",
00:25:09.630        "workload": "randread",
00:25:09.630        "status": "finished",
00:25:09.630        "queue_depth": 128,
00:25:09.630        "io_size": 4096,
00:25:09.630        "runtime": 2.006439,
00:25:09.630        "iops": 18934.54024767262,
00:25:09.630        "mibps": 73.96304784247117,
00:25:09.630        "io_failed": 0,
00:25:09.630        "io_timeout": 0,
00:25:09.630        "avg_latency_us": 6751.655372568747,
00:25:09.630        "min_latency_us": 3179.7096296296295,
00:25:09.630        "max_latency_us": 14078.103703703704
00:25:09.630      }
00:25:09.630    ],
00:25:09.630    "core_count": 1
00:25:09.630  }
00:25:09.630   04:15:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:25:09.630    04:15:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:25:09.630    04:15:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:25:09.630    04:15:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:25:09.630    04:15:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:25:09.630  			| select(.opcode=="crc32c")
00:25:09.630  			| "\(.module_name) \(.executed)"'
00:25:09.630   04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:25:09.630   04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:25:09.630   04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:25:09.630   04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:25:09.630   04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 335311
00:25:09.630   04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 335311 ']'
00:25:09.630   04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 335311
00:25:09.630    04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:25:09.630   04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:09.630    04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 335311
00:25:09.888   04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:25:09.888   04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:25:09.888   04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 335311'
00:25:09.888  killing process with pid 335311
00:25:09.888   04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 335311
00:25:09.888  Received shutdown signal, test time was about 2.000000 seconds
00:25:09.888  
00:25:09.888                                                                                                  Latency(us)
00:25:09.888  
[2024-12-09T03:15:38.464Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:09.888  
[2024-12-09T03:15:38.464Z]  ===================================================================================================================
00:25:09.888  
[2024-12-09T03:15:38.464Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:25:09.888   04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 335311
00:25:09.888   04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false
00:25:09.888   04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:25:09.888   04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:25:09.888   04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread
00:25:09.888   04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:25:09.888   04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:25:09.888   04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:25:09.888   04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=335836
00:25:09.888   04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 335836 /var/tmp/bperf.sock
00:25:09.888   04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:25:09.888   04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 335836 ']'
00:25:09.888   04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:25:09.888   04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:09.888   04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:25:09.888  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:25:09.888   04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:09.888   04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:25:10.146  [2024-12-09 04:15:38.505599] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:25:10.146  [2024-12-09 04:15:38.505685] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid335836 ]
00:25:10.146  I/O size of 131072 is greater than zero copy threshold (65536).
00:25:10.146  Zero copy mechanism will not be used.
00:25:10.146  [2024-12-09 04:15:38.572007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:10.147  [2024-12-09 04:15:38.628776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:25:10.405   04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:10.405   04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0
00:25:10.405   04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:25:10.405   04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:25:10.405   04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:25:10.664   04:15:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:10.664   04:15:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:10.922  nvme0n1
00:25:10.922   04:15:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:25:10.922   04:15:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:11.180  I/O size of 131072 is greater than zero copy threshold (65536).
00:25:11.180  Zero copy mechanism will not be used.
00:25:11.180  Running I/O for 2 seconds...
00:25:13.054       5669.00 IOPS,   708.62 MiB/s
[2024-12-09T03:15:41.630Z]      5821.50 IOPS,   727.69 MiB/s
00:25:13.054                                                                                                  Latency(us)
00:25:13.054  
[2024-12-09T03:15:41.630Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:13.054  Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:25:13.054  	 nvme0n1             :       2.00    5818.38     727.30       0.00     0.00    2745.79     703.91    7039.05
00:25:13.054  
[2024-12-09T03:15:41.630Z]  ===================================================================================================================
00:25:13.054  
[2024-12-09T03:15:41.630Z]  Total                       :               5818.38     727.30       0.00     0.00    2745.79     703.91    7039.05
00:25:13.054  {
00:25:13.054    "results": [
00:25:13.054      {
00:25:13.054        "job": "nvme0n1",
00:25:13.054        "core_mask": "0x2",
00:25:13.054        "workload": "randread",
00:25:13.054        "status": "finished",
00:25:13.054        "queue_depth": 16,
00:25:13.054        "io_size": 131072,
00:25:13.054        "runtime": 2.003821,
00:25:13.054        "iops": 5818.383977411156,
00:25:13.054        "mibps": 727.2979971763945,
00:25:13.054        "io_failed": 0,
00:25:13.054        "io_timeout": 0,
00:25:13.054        "avg_latency_us": 2745.78694189515,
00:25:13.054        "min_latency_us": 703.9051851851851,
00:25:13.054        "max_latency_us": 7039.051851851852
00:25:13.054      }
00:25:13.054    ],
00:25:13.054    "core_count": 1
00:25:13.055  }
00:25:13.055   04:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:25:13.055    04:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:25:13.055    04:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:25:13.055    04:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:25:13.055  			| select(.opcode=="crc32c")
00:25:13.055  			| "\(.module_name) \(.executed)"'
00:25:13.055    04:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:25:13.313   04:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:25:13.313   04:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:25:13.313   04:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:25:13.313   04:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:25:13.313   04:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 335836
00:25:13.313   04:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 335836 ']'
00:25:13.313   04:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 335836
00:25:13.313    04:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:25:13.313   04:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:13.313    04:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 335836
00:25:13.571   04:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:25:13.571   04:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:25:13.571   04:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 335836'
00:25:13.571  killing process with pid 335836
00:25:13.571   04:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 335836
00:25:13.571  Received shutdown signal, test time was about 2.000000 seconds
00:25:13.571  
00:25:13.571                                                                                                  Latency(us)
00:25:13.571  
[2024-12-09T03:15:42.147Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:13.571  
[2024-12-09T03:15:42.147Z]  ===================================================================================================================
00:25:13.571  
[2024-12-09T03:15:42.147Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:25:13.571   04:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 335836
00:25:13.571   04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
00:25:13.571   04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:25:13.571   04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:25:13.571   04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:25:13.571   04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096
00:25:13.571   04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128
00:25:13.571   04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:25:13.571   04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=336253
00:25:13.571   04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:25:13.571   04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 336253 /var/tmp/bperf.sock
00:25:13.571   04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 336253 ']'
00:25:13.571   04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:25:13.571   04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:13.571   04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:25:13.571  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:25:13.571   04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:13.571   04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:25:13.830  [2024-12-09 04:15:42.180898] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:25:13.830  [2024-12-09 04:15:42.180971] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid336253 ]
00:25:13.830  [2024-12-09 04:15:42.245924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:13.830  [2024-12-09 04:15:42.300831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:25:13.830   04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:13.830   04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0
00:25:13.830   04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:25:13.830   04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:25:13.830   04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:25:14.397   04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:14.397   04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:14.655  nvme0n1
00:25:14.655   04:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:25:14.655   04:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:14.913  Running I/O for 2 seconds...
00:25:16.779      19274.00 IOPS,    75.29 MiB/s
[2024-12-09T03:15:45.355Z]     19041.00 IOPS,    74.38 MiB/s
00:25:16.779                                                                                                  Latency(us)
00:25:16.779  
[2024-12-09T03:15:45.355Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:16.779  Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:25:16.779  	 nvme0n1             :       2.01   19043.58      74.39       0.00     0.00    6705.95    2754.94    8980.86
00:25:16.779  
[2024-12-09T03:15:45.355Z]  ===================================================================================================================
00:25:16.779  
[2024-12-09T03:15:45.355Z]  Total                       :              19043.58      74.39       0.00     0.00    6705.95    2754.94    8980.86
00:25:16.779  {
00:25:16.779    "results": [
00:25:16.779      {
00:25:16.779        "job": "nvme0n1",
00:25:16.779        "core_mask": "0x2",
00:25:16.779        "workload": "randwrite",
00:25:16.779        "status": "finished",
00:25:16.779        "queue_depth": 128,
00:25:16.779        "io_size": 4096,
00:25:16.779        "runtime": 2.008131,
00:25:16.779        "iops": 19043.578332290075,
00:25:16.779        "mibps": 74.3889778605081,
00:25:16.779        "io_failed": 0,
00:25:16.779        "io_timeout": 0,
00:25:16.779        "avg_latency_us": 6705.9450784962055,
00:25:16.779        "min_latency_us": 2754.9392592592594,
00:25:16.779        "max_latency_us": 8980.85925925926
00:25:16.779      }
00:25:16.779    ],
00:25:16.779    "core_count": 1
00:25:16.779  }
00:25:16.779   04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:25:16.779    04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:25:16.779    04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:25:16.779    04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:25:16.779  			| select(.opcode=="crc32c")
00:25:16.779  			| "\(.module_name) \(.executed)"'
00:25:16.779    04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:25:17.036   04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:25:17.036   04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:25:17.036   04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:25:17.036   04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:25:17.036   04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 336253
00:25:17.036   04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 336253 ']'
00:25:17.036   04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 336253
00:25:17.036    04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:25:17.036   04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:17.036    04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 336253
00:25:17.293   04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:25:17.293   04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:25:17.293   04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 336253'
00:25:17.293  killing process with pid 336253
00:25:17.293   04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 336253
00:25:17.293  Received shutdown signal, test time was about 2.000000 seconds
00:25:17.293  
00:25:17.293                                                                                                  Latency(us)
00:25:17.293  
[2024-12-09T03:15:45.869Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:17.293  
[2024-12-09T03:15:45.869Z]  ===================================================================================================================
00:25:17.293  
[2024-12-09T03:15:45.869Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:25:17.293   04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 336253
00:25:17.293   04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:25:17.293   04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:25:17.293   04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:25:17.293   04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:25:17.293   04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:25:17.293   04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:25:17.293   04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:25:17.293   04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=336656
00:25:17.293   04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:25:17.293   04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 336656 /var/tmp/bperf.sock
00:25:17.293   04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 336656 ']'
00:25:17.293   04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:25:17.293   04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:17.293   04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:25:17.293  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:25:17.293   04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:17.293   04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:25:17.552  [2024-12-09 04:15:45.909919] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:25:17.552  [2024-12-09 04:15:45.909995] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid336656 ]
00:25:17.552  I/O size of 131072 is greater than zero copy threshold (65536).
00:25:17.552  Zero copy mechanism will not be used.
00:25:17.552  [2024-12-09 04:15:45.975826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:17.552  [2024-12-09 04:15:46.033457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:25:17.810   04:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:17.810   04:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0
00:25:17.810   04:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:25:17.810   04:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:25:17.810   04:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:25:18.068   04:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:18.068   04:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:18.325  nvme0n1
00:25:18.325   04:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:25:18.325   04:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:18.583  I/O size of 131072 is greater than zero copy threshold (65536).
00:25:18.583  Zero copy mechanism will not be used.
00:25:18.583  Running I/O for 2 seconds...
00:25:20.444       6235.00 IOPS,   779.38 MiB/s
[2024-12-09T03:15:49.020Z]      6392.00 IOPS,   799.00 MiB/s
00:25:20.444                                                                                                  Latency(us)
00:25:20.444  
[2024-12-09T03:15:49.020Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:20.444  Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:25:20.444  	 nvme0n1             :       2.00    6390.06     798.76       0.00     0.00    2496.29    1808.31    8980.86
00:25:20.444  
[2024-12-09T03:15:49.020Z]  ===================================================================================================================
00:25:20.444  
[2024-12-09T03:15:49.020Z]  Total                       :               6390.06     798.76       0.00     0.00    2496.29    1808.31    8980.86
00:25:20.444  {
00:25:20.444    "results": [
00:25:20.444      {
00:25:20.444        "job": "nvme0n1",
00:25:20.444        "core_mask": "0x2",
00:25:20.444        "workload": "randwrite",
00:25:20.444        "status": "finished",
00:25:20.444        "queue_depth": 16,
00:25:20.444        "io_size": 131072,
00:25:20.444        "runtime": 2.003892,
00:25:20.444        "iops": 6390.064933639138,
00:25:20.444        "mibps": 798.7581167048922,
00:25:20.444        "io_failed": 0,
00:25:20.444        "io_timeout": 0,
00:25:20.444        "avg_latency_us": 2496.2911381260215,
00:25:20.444        "min_latency_us": 1808.3081481481481,
00:25:20.444        "max_latency_us": 8980.85925925926
00:25:20.444      }
00:25:20.444    ],
00:25:20.444    "core_count": 1
00:25:20.444  }
00:25:20.444   04:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:25:20.444    04:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:25:20.444    04:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:25:20.444    04:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:25:20.444    04:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:25:20.444  			| select(.opcode=="crc32c")
00:25:20.444  			| "\(.module_name) \(.executed)"'
00:25:20.701   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:25:20.701   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:25:20.701   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:25:20.701   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:25:20.701   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 336656
00:25:20.701   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 336656 ']'
00:25:20.701   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 336656
00:25:20.701    04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:25:20.701   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:20.701    04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 336656
00:25:20.959   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:25:20.959   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:25:20.959   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 336656'
00:25:20.959  killing process with pid 336656
00:25:20.959   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 336656
00:25:20.959  Received shutdown signal, test time was about 2.000000 seconds
00:25:20.959  
00:25:20.959                                                                                                  Latency(us)
00:25:20.959  
[2024-12-09T03:15:49.535Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:20.959  
[2024-12-09T03:15:49.535Z]  ===================================================================================================================
00:25:20.959  
[2024-12-09T03:15:49.535Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:25:20.959   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 336656
00:25:20.959   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 335291
00:25:20.959   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 335291 ']'
00:25:20.959   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 335291
00:25:20.959    04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:25:20.959   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:20.959    04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 335291
00:25:21.217   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:21.217   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:21.217   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 335291'
00:25:21.217  killing process with pid 335291
00:25:21.217   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 335291
00:25:21.217   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 335291
00:25:21.217  
00:25:21.217  real	0m15.521s
00:25:21.217  user	0m31.024s
00:25:21.217  sys	0m4.362s
00:25:21.217   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:21.217   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:25:21.217  ************************************
00:25:21.217  END TEST nvmf_digest_clean
00:25:21.217  ************************************
00:25:21.217   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:25:21.217   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:25:21.217   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:21.217   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:25:21.475  ************************************
00:25:21.475  START TEST nvmf_digest_error
00:25:21.475  ************************************
00:25:21.475   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error
00:25:21.475   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:25:21.475   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:25:21.475   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:21.475   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:21.475   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=337208
00:25:21.475   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:25:21.475   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 337208
00:25:21.475   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 337208 ']'
00:25:21.475   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:21.475   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:21.475   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:21.475  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:21.475   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:21.475   04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:21.475  [2024-12-09 04:15:49.868661] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:25:21.475  [2024-12-09 04:15:49.868769] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:21.475  [2024-12-09 04:15:49.941679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:21.475  [2024-12-09 04:15:49.999707] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:21.475  [2024-12-09 04:15:49.999763] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:21.475  [2024-12-09 04:15:49.999794] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:21.475  [2024-12-09 04:15:49.999806] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:21.475  [2024-12-09 04:15:49.999816] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:21.475  [2024-12-09 04:15:50.000460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:21.739   04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:21.739   04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:25:21.739   04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:25:21.739   04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable
00:25:21.739   04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:21.739   04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:21.739   04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:25:21.739   04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:21.739   04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:21.739  [2024-12-09 04:15:50.137312] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:25:21.739   04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:21.739   04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:25:21.739   04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:25:21.739   04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:21.739   04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:21.739  null0
00:25:21.739  [2024-12-09 04:15:50.258190] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:21.739  [2024-12-09 04:15:50.282443] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:21.739   04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:21.739   04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:25:21.739   04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:25:21.739   04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:25:21.739   04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:25:21.739   04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:25:21.739   04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=337238
00:25:21.739   04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 337238 /var/tmp/bperf.sock
00:25:21.740   04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:25:21.740   04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 337238 ']'
00:25:21.740   04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:25:21.740   04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:21.740   04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:25:21.740  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:25:21.740   04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:21.740   04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:21.997  [2024-12-09 04:15:50.330649] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:25:21.997  [2024-12-09 04:15:50.330728] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid337238 ]
00:25:21.997  [2024-12-09 04:15:50.400211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:21.997  [2024-12-09 04:15:50.458342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:25:22.254   04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:22.254   04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:25:22.254   04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:22.254   04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:22.512   04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:25:22.512   04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:22.512   04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:22.512   04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:22.512   04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:22.512   04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:23.078  nvme0n1
00:25:23.078   04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:25:23.078   04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:23.078   04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:23.078   04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:23.078   04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:25:23.078   04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:23.078  Running I/O for 2 seconds...
00:25:23.078  [2024-12-09 04:15:51.581886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.078  [2024-12-09 04:15:51.581935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.078  [2024-12-09 04:15:51.581957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.078  [2024-12-09 04:15:51.593571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.078  [2024-12-09 04:15:51.593617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.078  [2024-12-09 04:15:51.593635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.078  [2024-12-09 04:15:51.610076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.078  [2024-12-09 04:15:51.610106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.078  [2024-12-09 04:15:51.610137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.078  [2024-12-09 04:15:51.625580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.078  [2024-12-09 04:15:51.625609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.078  [2024-12-09 04:15:51.625625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.078  [2024-12-09 04:15:51.638477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.078  [2024-12-09 04:15:51.638510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.078  [2024-12-09 04:15:51.638528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.078  [2024-12-09 04:15:51.650017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.078  [2024-12-09 04:15:51.650047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.078  [2024-12-09 04:15:51.650064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.337  [2024-12-09 04:15:51.663579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.337  [2024-12-09 04:15:51.663607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.337  [2024-12-09 04:15:51.663623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.337  [2024-12-09 04:15:51.676619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.337  [2024-12-09 04:15:51.676651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:14014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.337  [2024-12-09 04:15:51.676668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.337  [2024-12-09 04:15:51.688576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.337  [2024-12-09 04:15:51.688625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.337  [2024-12-09 04:15:51.688643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.337  [2024-12-09 04:15:51.701497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.337  [2024-12-09 04:15:51.701527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.337  [2024-12-09 04:15:51.701550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.337  [2024-12-09 04:15:51.713729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.337  [2024-12-09 04:15:51.713759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.337  [2024-12-09 04:15:51.713775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.337  [2024-12-09 04:15:51.727928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.337  [2024-12-09 04:15:51.727972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.337  [2024-12-09 04:15:51.727989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.337  [2024-12-09 04:15:51.739088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.337  [2024-12-09 04:15:51.739121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.337  [2024-12-09 04:15:51.739153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.337  [2024-12-09 04:15:51.753821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.337  [2024-12-09 04:15:51.753851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.337  [2024-12-09 04:15:51.753884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.337  [2024-12-09 04:15:51.769629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.337  [2024-12-09 04:15:51.769670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.337  [2024-12-09 04:15:51.769688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.337  [2024-12-09 04:15:51.781242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.337  [2024-12-09 04:15:51.781295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.337  [2024-12-09 04:15:51.781313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.337  [2024-12-09 04:15:51.794477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.337  [2024-12-09 04:15:51.794523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.338  [2024-12-09 04:15:51.794541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.338  [2024-12-09 04:15:51.807424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.338  [2024-12-09 04:15:51.807456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.338  [2024-12-09 04:15:51.807473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.338  [2024-12-09 04:15:51.818947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.338  [2024-12-09 04:15:51.818975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.338  [2024-12-09 04:15:51.819004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.338  [2024-12-09 04:15:51.834705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.338  [2024-12-09 04:15:51.834748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.338  [2024-12-09 04:15:51.834764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.338  [2024-12-09 04:15:51.849728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.338  [2024-12-09 04:15:51.849758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.338  [2024-12-09 04:15:51.849789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.338  [2024-12-09 04:15:51.859965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.338  [2024-12-09 04:15:51.859992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.338  [2024-12-09 04:15:51.860008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.338  [2024-12-09 04:15:51.876047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.338  [2024-12-09 04:15:51.876076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.338  [2024-12-09 04:15:51.876107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.338  [2024-12-09 04:15:51.888536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.338  [2024-12-09 04:15:51.888568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.338  [2024-12-09 04:15:51.888586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.338  [2024-12-09 04:15:51.899376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.338  [2024-12-09 04:15:51.899419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.338  [2024-12-09 04:15:51.899437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.596  [2024-12-09 04:15:51.920317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.596  [2024-12-09 04:15:51.920361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.596  [2024-12-09 04:15:51.920377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.596  [2024-12-09 04:15:51.934064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.596  [2024-12-09 04:15:51.934095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.596  [2024-12-09 04:15:51.934112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.596  [2024-12-09 04:15:51.948234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.596  [2024-12-09 04:15:51.948266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.596  [2024-12-09 04:15:51.948292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.596  [2024-12-09 04:15:51.959334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.596  [2024-12-09 04:15:51.959362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.596  [2024-12-09 04:15:51.959393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.596  [2024-12-09 04:15:51.973967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.596  [2024-12-09 04:15:51.973994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.596  [2024-12-09 04:15:51.974031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.596  [2024-12-09 04:15:51.986576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.596  [2024-12-09 04:15:51.986622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.596  [2024-12-09 04:15:51.986639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.596  [2024-12-09 04:15:51.997490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.596  [2024-12-09 04:15:51.997518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.596  [2024-12-09 04:15:51.997549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.596  [2024-12-09 04:15:52.012200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.596  [2024-12-09 04:15:52.012245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.596  [2024-12-09 04:15:52.012263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.596  [2024-12-09 04:15:52.028596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.596  [2024-12-09 04:15:52.028643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.596  [2024-12-09 04:15:52.028660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.596  [2024-12-09 04:15:52.043008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.596  [2024-12-09 04:15:52.043040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.596  [2024-12-09 04:15:52.043057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.596  [2024-12-09 04:15:52.054732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.596  [2024-12-09 04:15:52.054759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.596  [2024-12-09 04:15:52.054790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.596  [2024-12-09 04:15:52.070032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.596  [2024-12-09 04:15:52.070061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.596  [2024-12-09 04:15:52.070093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.596  [2024-12-09 04:15:52.083652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.596  [2024-12-09 04:15:52.083682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.596  [2024-12-09 04:15:52.083713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.596  [2024-12-09 04:15:52.099132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.596  [2024-12-09 04:15:52.099166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.596  [2024-12-09 04:15:52.099197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.596  [2024-12-09 04:15:52.113691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.596  [2024-12-09 04:15:52.113719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.597  [2024-12-09 04:15:52.113749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.597  [2024-12-09 04:15:52.126278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.597  [2024-12-09 04:15:52.126325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.597  [2024-12-09 04:15:52.126340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.597  [2024-12-09 04:15:52.139301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.597  [2024-12-09 04:15:52.139333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.597  [2024-12-09 04:15:52.139350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.597  [2024-12-09 04:15:52.151052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.597  [2024-12-09 04:15:52.151096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.597  [2024-12-09 04:15:52.151112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.597  [2024-12-09 04:15:52.164027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.597  [2024-12-09 04:15:52.164054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.597  [2024-12-09 04:15:52.164085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.855  [2024-12-09 04:15:52.180646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.855  [2024-12-09 04:15:52.180675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.855  [2024-12-09 04:15:52.180690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.855  [2024-12-09 04:15:52.194233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.855  [2024-12-09 04:15:52.194265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.855  [2024-12-09 04:15:52.194291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.855  [2024-12-09 04:15:52.205412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.855  [2024-12-09 04:15:52.205442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.855  [2024-12-09 04:15:52.205458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.855  [2024-12-09 04:15:52.220206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.855  [2024-12-09 04:15:52.220236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.855  [2024-12-09 04:15:52.220268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.855  [2024-12-09 04:15:52.232393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.855  [2024-12-09 04:15:52.232423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.855  [2024-12-09 04:15:52.232455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.855  [2024-12-09 04:15:52.245479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.855  [2024-12-09 04:15:52.245508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.855  [2024-12-09 04:15:52.245540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.855  [2024-12-09 04:15:52.258819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.855  [2024-12-09 04:15:52.258847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.855  [2024-12-09 04:15:52.258863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.855  [2024-12-09 04:15:52.271295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.855  [2024-12-09 04:15:52.271326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.855  [2024-12-09 04:15:52.271343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.855  [2024-12-09 04:15:52.285747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.855  [2024-12-09 04:15:52.285775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.855  [2024-12-09 04:15:52.285806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.855  [2024-12-09 04:15:52.299263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.855  [2024-12-09 04:15:52.299451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.855  [2024-12-09 04:15:52.299470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.855  [2024-12-09 04:15:52.310820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.855  [2024-12-09 04:15:52.310848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.855  [2024-12-09 04:15:52.310880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.855  [2024-12-09 04:15:52.323534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.855  [2024-12-09 04:15:52.323577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.855  [2024-12-09 04:15:52.323598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.855  [2024-12-09 04:15:52.337980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.855  [2024-12-09 04:15:52.338023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.855  [2024-12-09 04:15:52.338039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.855  [2024-12-09 04:15:52.352133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.855  [2024-12-09 04:15:52.352162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.855  [2024-12-09 04:15:52.352193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.855  [2024-12-09 04:15:52.362843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.855  [2024-12-09 04:15:52.362870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.855  [2024-12-09 04:15:52.362900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.855  [2024-12-09 04:15:52.379057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.855  [2024-12-09 04:15:52.379089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.855  [2024-12-09 04:15:52.379107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.855  [2024-12-09 04:15:52.394895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.855  [2024-12-09 04:15:52.394923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.856  [2024-12-09 04:15:52.394953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.856  [2024-12-09 04:15:52.410866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.856  [2024-12-09 04:15:52.410897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.856  [2024-12-09 04:15:52.410915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.856  [2024-12-09 04:15:52.423746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:23.856  [2024-12-09 04:15:52.423777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.856  [2024-12-09 04:15:52.423794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.114  [2024-12-09 04:15:52.435704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.114  [2024-12-09 04:15:52.435732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.114  [2024-12-09 04:15:52.435763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.114  [2024-12-09 04:15:52.451344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.114  [2024-12-09 04:15:52.451379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.114  [2024-12-09 04:15:52.451411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.114  [2024-12-09 04:15:52.465201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.114  [2024-12-09 04:15:52.465229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.114  [2024-12-09 04:15:52.465261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.114  [2024-12-09 04:15:52.478601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.114  [2024-12-09 04:15:52.478633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.114  [2024-12-09 04:15:52.478651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.115  [2024-12-09 04:15:52.494522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.115  [2024-12-09 04:15:52.494555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.115  [2024-12-09 04:15:52.494572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.115  [2024-12-09 04:15:52.509741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.115  [2024-12-09 04:15:52.509785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.115  [2024-12-09 04:15:52.509802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.115  [2024-12-09 04:15:52.521602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.115  [2024-12-09 04:15:52.521631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.115  [2024-12-09 04:15:52.521664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.115  [2024-12-09 04:15:52.535946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.115  [2024-12-09 04:15:52.535975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.115  [2024-12-09 04:15:52.536005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.115  [2024-12-09 04:15:52.551635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.115  [2024-12-09 04:15:52.551666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:92 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.115  [2024-12-09 04:15:52.551683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.115  [2024-12-09 04:15:52.561920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.115  [2024-12-09 04:15:52.561948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.115  [2024-12-09 04:15:52.561979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.115      18619.00 IOPS,    72.73 MiB/s
00:25:24.115  [2024-12-09 04:15:52.576153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.115  [2024-12-09 04:15:52.576182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.115  [2024-12-09 04:15:52.576214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.115  [2024-12-09 04:15:52.592144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.115  [2024-12-09 04:15:52.592176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.115  [2024-12-09 04:15:52.592193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.115  [2024-12-09 04:15:52.605546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.115  [2024-12-09 04:15:52.605576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.115  [2024-12-09 04:15:52.605595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.115  [2024-12-09 04:15:52.619439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.115  [2024-12-09 04:15:52.619470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.115  [2024-12-09 04:15:52.619488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.115  [2024-12-09 04:15:52.631016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.115  [2024-12-09 04:15:52.631046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.115  [2024-12-09 04:15:52.631062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.115  [2024-12-09 04:15:52.643704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.115  [2024-12-09 04:15:52.643735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.115  [2024-12-09 04:15:52.643751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.115  [2024-12-09 04:15:52.658205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.115  [2024-12-09 04:15:52.658235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.115  [2024-12-09 04:15:52.658252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.115  [2024-12-09 04:15:52.672629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.115  [2024-12-09 04:15:52.672660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.115  [2024-12-09 04:15:52.672678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.115  [2024-12-09 04:15:52.683961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.115  [2024-12-09 04:15:52.684001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.115  [2024-12-09 04:15:52.684019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.374  [2024-12-09 04:15:52.699043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.374  [2024-12-09 04:15:52.699077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.374  [2024-12-09 04:15:52.699095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.374  [2024-12-09 04:15:52.713707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.374  [2024-12-09 04:15:52.713738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.374  [2024-12-09 04:15:52.713756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.374  [2024-12-09 04:15:52.725693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.374  [2024-12-09 04:15:52.725725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.374  [2024-12-09 04:15:52.725757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.374  [2024-12-09 04:15:52.740891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.374  [2024-12-09 04:15:52.740921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.374  [2024-12-09 04:15:52.740937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.374  [2024-12-09 04:15:52.754684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.374  [2024-12-09 04:15:52.754715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.374  [2024-12-09 04:15:52.754732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.374  [2024-12-09 04:15:52.767828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.374  [2024-12-09 04:15:52.767860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:18632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.374  [2024-12-09 04:15:52.767878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.374  [2024-12-09 04:15:52.779749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.374  [2024-12-09 04:15:52.779778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.374  [2024-12-09 04:15:52.779794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.374  [2024-12-09 04:15:52.792597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.374  [2024-12-09 04:15:52.792629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.374  [2024-12-09 04:15:52.792647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.374  [2024-12-09 04:15:52.807854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.374  [2024-12-09 04:15:52.807885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.374  [2024-12-09 04:15:52.807902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.374  [2024-12-09 04:15:52.818910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.374  [2024-12-09 04:15:52.818939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.374  [2024-12-09 04:15:52.818955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.374  [2024-12-09 04:15:52.832988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.374  [2024-12-09 04:15:52.833020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.374  [2024-12-09 04:15:52.833053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.374  [2024-12-09 04:15:52.847630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.374  [2024-12-09 04:15:52.847660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.374  [2024-12-09 04:15:52.847676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.374  [2024-12-09 04:15:52.862279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.374  [2024-12-09 04:15:52.862311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.374  [2024-12-09 04:15:52.862328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.374  [2024-12-09 04:15:52.873887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.374  [2024-12-09 04:15:52.873919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.374  [2024-12-09 04:15:52.873953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.374  [2024-12-09 04:15:52.888574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.374  [2024-12-09 04:15:52.888605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.374  [2024-12-09 04:15:52.888621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.374  [2024-12-09 04:15:52.904140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.374  [2024-12-09 04:15:52.904173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.374  [2024-12-09 04:15:52.904191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.374  [2024-12-09 04:15:52.919632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.374  [2024-12-09 04:15:52.919665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.374  [2024-12-09 04:15:52.919691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.374  [2024-12-09 04:15:52.934083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.374  [2024-12-09 04:15:52.934115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.374  [2024-12-09 04:15:52.934134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.374  [2024-12-09 04:15:52.945401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.374  [2024-12-09 04:15:52.945433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.374  [2024-12-09 04:15:52.945450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.633  [2024-12-09 04:15:52.960583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.633  [2024-12-09 04:15:52.960614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.633  [2024-12-09 04:15:52.960631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.633  [2024-12-09 04:15:52.975024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.633  [2024-12-09 04:15:52.975056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.633  [2024-12-09 04:15:52.975074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.633  [2024-12-09 04:15:52.986794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.633  [2024-12-09 04:15:52.986826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.633  [2024-12-09 04:15:52.986843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.633  [2024-12-09 04:15:53.001624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.633  [2024-12-09 04:15:53.001656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.633  [2024-12-09 04:15:53.001688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.633  [2024-12-09 04:15:53.016576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.633  [2024-12-09 04:15:53.016608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.633  [2024-12-09 04:15:53.016626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.633  [2024-12-09 04:15:53.028450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.633  [2024-12-09 04:15:53.028481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.633  [2024-12-09 04:15:53.028498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.633  [2024-12-09 04:15:53.043428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.633  [2024-12-09 04:15:53.043468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.633  [2024-12-09 04:15:53.043487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.633  [2024-12-09 04:15:53.056566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.633  [2024-12-09 04:15:53.056613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.633  [2024-12-09 04:15:53.056630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.633  [2024-12-09 04:15:53.070694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.633  [2024-12-09 04:15:53.070725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.633  [2024-12-09 04:15:53.070757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.633  [2024-12-09 04:15:53.083220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.633  [2024-12-09 04:15:53.083265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.633  [2024-12-09 04:15:53.083294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.633  [2024-12-09 04:15:53.098080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.633  [2024-12-09 04:15:53.098112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.633  [2024-12-09 04:15:53.098130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.633  [2024-12-09 04:15:53.112053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.633  [2024-12-09 04:15:53.112081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.633  [2024-12-09 04:15:53.112097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.633  [2024-12-09 04:15:53.126125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.633  [2024-12-09 04:15:53.126157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.633  [2024-12-09 04:15:53.126174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.633  [2024-12-09 04:15:53.139736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.633  [2024-12-09 04:15:53.139768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.633  [2024-12-09 04:15:53.139785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.633  [2024-12-09 04:15:53.151090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.633  [2024-12-09 04:15:53.151121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.633  [2024-12-09 04:15:53.151138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.633  [2024-12-09 04:15:53.164761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.633  [2024-12-09 04:15:53.164791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.633  [2024-12-09 04:15:53.164808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.633  [2024-12-09 04:15:53.177303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.633  [2024-12-09 04:15:53.177334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.633  [2024-12-09 04:15:53.177351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.633  [2024-12-09 04:15:53.189672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.633  [2024-12-09 04:15:53.189716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.633  [2024-12-09 04:15:53.189732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.633  [2024-12-09 04:15:53.202955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.633  [2024-12-09 04:15:53.202987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.633  [2024-12-09 04:15:53.203022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.893  [2024-12-09 04:15:53.218848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.893  [2024-12-09 04:15:53.218879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.893  [2024-12-09 04:15:53.218895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.893  [2024-12-09 04:15:53.233803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.893  [2024-12-09 04:15:53.233849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.893  [2024-12-09 04:15:53.233866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.893  [2024-12-09 04:15:53.244899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.893  [2024-12-09 04:15:53.244928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.893  [2024-12-09 04:15:53.244944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.893  [2024-12-09 04:15:53.260822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.893  [2024-12-09 04:15:53.260853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.893  [2024-12-09 04:15:53.260869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.893  [2024-12-09 04:15:53.273564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.893  [2024-12-09 04:15:53.273595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.893  [2024-12-09 04:15:53.273619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.893  [2024-12-09 04:15:53.286683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.893  [2024-12-09 04:15:53.286716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.893  [2024-12-09 04:15:53.286734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.893  [2024-12-09 04:15:53.299002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.893  [2024-12-09 04:15:53.299031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.893  [2024-12-09 04:15:53.299063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.893  [2024-12-09 04:15:53.310431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.893  [2024-12-09 04:15:53.310463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.893  [2024-12-09 04:15:53.310481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.893  [2024-12-09 04:15:53.323642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.893  [2024-12-09 04:15:53.323674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.893  [2024-12-09 04:15:53.323691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.893  [2024-12-09 04:15:53.336201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.893  [2024-12-09 04:15:53.336245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.893  [2024-12-09 04:15:53.336262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.893  [2024-12-09 04:15:53.349998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.893  [2024-12-09 04:15:53.350029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.893  [2024-12-09 04:15:53.350046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.893  [2024-12-09 04:15:53.363563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.893  [2024-12-09 04:15:53.363594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.893  [2024-12-09 04:15:53.363611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.893  [2024-12-09 04:15:53.377675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.893  [2024-12-09 04:15:53.377707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.893  [2024-12-09 04:15:53.377724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.893  [2024-12-09 04:15:53.388811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.893  [2024-12-09 04:15:53.388855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.893  [2024-12-09 04:15:53.388874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.893  [2024-12-09 04:15:53.404772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.893  [2024-12-09 04:15:53.404803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.893  [2024-12-09 04:15:53.404820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.893  [2024-12-09 04:15:53.417908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.893  [2024-12-09 04:15:53.417940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.893  [2024-12-09 04:15:53.417958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.893  [2024-12-09 04:15:53.431625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.893  [2024-12-09 04:15:53.431656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.893  [2024-12-09 04:15:53.431674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.893  [2024-12-09 04:15:53.445801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.893  [2024-12-09 04:15:53.445833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.894  [2024-12-09 04:15:53.445851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.894  [2024-12-09 04:15:53.457185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:24.894  [2024-12-09 04:15:53.457214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.894  [2024-12-09 04:15:53.457229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:25.152  [2024-12-09 04:15:53.472526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:25.152  [2024-12-09 04:15:53.472559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:25.152  [2024-12-09 04:15:53.472577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:25.152  [2024-12-09 04:15:53.487562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:25.152  [2024-12-09 04:15:53.487593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:25.152  [2024-12-09 04:15:53.487610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:25.152  [2024-12-09 04:15:53.502340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:25.152  [2024-12-09 04:15:53.502372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:25.152  [2024-12-09 04:15:53.502397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:25.152  [2024-12-09 04:15:53.517973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:25.152  [2024-12-09 04:15:53.518005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:25.152  [2024-12-09 04:15:53.518023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:25.152  [2024-12-09 04:15:53.528953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:25.152  [2024-12-09 04:15:53.528985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:25.152  [2024-12-09 04:15:53.529003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:25.152  [2024-12-09 04:15:53.542851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:25.152  [2024-12-09 04:15:53.542881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:25.152  [2024-12-09 04:15:53.542896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:25.152  [2024-12-09 04:15:53.555905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:25.152  [2024-12-09 04:15:53.555935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:25.152  [2024-12-09 04:15:53.555952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:25.152      18728.00 IOPS,    73.16 MiB/s
00:25:25.152  [2024-12-09 04:15:53.569024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420)
00:25:25.152  [2024-12-09 04:15:53.569067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:25.152  [2024-12-09 04:15:53.569084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:25.152  
00:25:25.152                                                                                                  Latency(us)
00:25:25.152   Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:25.152  Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:25:25.152  	 nvme0n1             :       2.01   18741.92      73.21       0.00     0.00    6821.02    3276.80   23301.69
00:25:25.152   ===================================================================================================================
00:25:25.152   Total                       :              18741.92      73.21       0.00     0.00    6821.02    3276.80   23301.69
00:25:25.152  {
00:25:25.152    "results": [
00:25:25.152      {
00:25:25.152        "job": "nvme0n1",
00:25:25.152        "core_mask": "0x2",
00:25:25.152        "workload": "randread",
00:25:25.152        "status": "finished",
00:25:25.152        "queue_depth": 128,
00:25:25.152        "io_size": 4096,
00:25:25.152        "runtime": 2.005931,
00:25:25.152        "iops": 18741.92083376746,
00:25:25.152        "mibps": 73.21062825690414,
00:25:25.152        "io_failed": 0,
00:25:25.152        "io_timeout": 0,
00:25:25.152        "avg_latency_us": 6821.022054114761,
00:25:25.152        "min_latency_us": 3276.8,
00:25:25.152        "max_latency_us": 23301.68888888889
00:25:25.152      }
00:25:25.152    ],
00:25:25.152    "core_count": 1
00:25:25.152  }
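The figures in the JSON results above are internally consistent: with 4096-byte reads, MiB/s equals IOPS × io_size / 2^20, and the total I/O count over the run is approximately IOPS × runtime. A small sanity-check sketch (values copied from the results block):

```python
# Sanity-check the bdevperf result figures: MiB/s and total I/Os
# should follow directly from iops, io_size, and runtime.
io_size = 4096              # bytes per I/O ("io_size" in the results)
iops = 18741.92083376746    # "iops" in the results
runtime = 2.005931          # seconds ("runtime" in the results)

mibps = iops * io_size / (1 << 20)   # bytes/s -> MiB/s
total_ios = iops * runtime           # I/Os completed over the run

print(f"{mibps:.2f} MiB/s, ~{total_ios:.0f} I/Os")  # 73.21 MiB/s, ~37595 I/Os
```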
00:25:25.152    04:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:25.152    04:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:25.152    04:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:25.152  			| .driver_specific
00:25:25.152  			| .nvme_error
00:25:25.152  			| .status_code
00:25:25.152  			| .command_transient_transport_error'
00:25:25.152    04:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:25:25.410   04:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 147 > 0 ))
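The `get_transient_errcount` helper above fetches `bdev_get_iostat` over the bperf RPC socket and extracts the per-bdev transient transport error counter with the jq path shown. A Python equivalent of that extraction, sketched against a minimal payload (the nesting follows the jq path in the log; the count of 147 matches the value the test observed, but the surrounding fields are placeholders, not real `bdev_get_iostat` output):

```python
# Mimics the jq filter:
#   .bdevs[0] | .driver_specific | .nvme_error | .status_code
#             | .command_transient_transport_error
iostat = {
    "bdevs": [
        {
            "name": "nvme0n1",  # placeholder entry, not real RPC output
            "driver_specific": {
                "nvme_error": {
                    "status_code": {
                        "command_transient_transport_error": 147
                    }
                }
            }
        }
    ]
}

count = (iostat["bdevs"][0]["driver_specific"]["nvme_error"]
         ["status_code"]["command_transient_transport_error"])
assert count > 0  # the test's (( count > 0 )) check: at least one transient error seen
print(count)  # 147
```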
00:25:25.410   04:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 337238
00:25:25.410   04:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 337238 ']'
00:25:25.410   04:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 337238
00:25:25.410    04:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:25:25.410   04:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:25.410    04:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 337238
00:25:25.410   04:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:25:25.410   04:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:25:25.410   04:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 337238'
00:25:25.410  killing process with pid 337238
00:25:25.410   04:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 337238
00:25:25.410  Received shutdown signal, test time was about 2.000000 seconds
00:25:25.410  
00:25:25.410                                                                                                  Latency(us)
00:25:25.410   Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:25.410   ===================================================================================================================
00:25:25.410   Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:25:25.410   04:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 337238
00:25:25.668   04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:25:25.668   04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:25:25.668   04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:25:25.668   04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:25:25.668   04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:25:25.668   04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=337707
00:25:25.668   04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:25:25.668   04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 337707 /var/tmp/bperf.sock
00:25:25.668   04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 337707 ']'
00:25:25.668   04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:25:25.668   04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:25.668   04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:25:25.668  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:25:25.668   04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:25.668   04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:25.668  [2024-12-09 04:15:54.173419] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:25:25.668  [2024-12-09 04:15:54.173512] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid337707 ]
00:25:25.668  I/O size of 131072 is greater than zero copy threshold (65536).
00:25:25.668  Zero copy mechanism will not be used.
00:25:25.668  [2024-12-09 04:15:54.240724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:25.926  [2024-12-09 04:15:54.297388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:25:25.926   04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:25.926   04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:25:25.926   04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:25.926   04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:26.184   04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:25:26.184   04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:26.184   04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:26.184   04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:26.184   04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:26.184   04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:26.748  nvme0n1
00:25:26.748   04:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:25:26.748   04:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:26.748   04:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:26.748   04:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:26.748   04:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:25:26.748   04:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:26.748  I/O size of 131072 is greater than zero copy threshold (65536).
00:25:26.748  Zero copy mechanism will not be used.
00:25:26.748  Running I/O for 2 seconds...
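The digest errors that follow are produced deliberately: the `accel_error_inject_error -o crc32c -t corrupt -i 32` call above corrupts 32 crc32c operations, so the computed checksum no longer matches the PDU's data digest and each affected READ completes with a transient transport error. NVMe/TCP data digests are CRC-32C (Castagnoli); a minimal bitwise illustration of the mismatch check (for clarity only, SPDK uses accelerated crc32c paths, not this loop):

```python
def crc32c(data: bytes) -> int:
    """Bitwise CRC-32C (Castagnoli), the checksum used for NVMe/TCP data digests."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Reflected polynomial 0x82F63B78
            crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

payload = b"123456789"
digest = crc32c(payload)            # 0xE3069283, the standard CRC-32C check value
corrupted = b"123456780"            # one corrupted byte, as the injection simulates
assert crc32c(corrupted) != digest  # receiver flags a data digest error
```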
00:25:26.748  [2024-12-09 04:15:55.279767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:26.748  [2024-12-09 04:15:55.279825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:26.748  [2024-12-09 04:15:55.279846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:26.748  [2024-12-09 04:15:55.285519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:26.748  [2024-12-09 04:15:55.285556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:26.748  [2024-12-09 04:15:55.285575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:26.748  [2024-12-09 04:15:55.291995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:26.748  [2024-12-09 04:15:55.292044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:26.748  [2024-12-09 04:15:55.292063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:26.748  [2024-12-09 04:15:55.296101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:26.748  [2024-12-09 04:15:55.296134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:26.748  [2024-12-09 04:15:55.296153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:26.748  [2024-12-09 04:15:55.303522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:26.748  [2024-12-09 04:15:55.303555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:26.748  [2024-12-09 04:15:55.303573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:26.748  [2024-12-09 04:15:55.310793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:26.748  [2024-12-09 04:15:55.310827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:26.748  [2024-12-09 04:15:55.310844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:26.748  [2024-12-09 04:15:55.316415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:26.748  [2024-12-09 04:15:55.316448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:26.748  [2024-12-09 04:15:55.316466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:26.748  [2024-12-09 04:15:55.321903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:26.749  [2024-12-09 04:15:55.321950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:26.749  [2024-12-09 04:15:55.321968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.006  [2024-12-09 04:15:55.327421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.006  [2024-12-09 04:15:55.327487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.006  [2024-12-09 04:15:55.327505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.006  [2024-12-09 04:15:55.332349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.006  [2024-12-09 04:15:55.332381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.006  [2024-12-09 04:15:55.332399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.006  [2024-12-09 04:15:55.337075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.006  [2024-12-09 04:15:55.337106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.006  [2024-12-09 04:15:55.337138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.006  [2024-12-09 04:15:55.341927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.006  [2024-12-09 04:15:55.341959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.006  [2024-12-09 04:15:55.341992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.006  [2024-12-09 04:15:55.346672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.006  [2024-12-09 04:15:55.346703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.006  [2024-12-09 04:15:55.346735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.006  [2024-12-09 04:15:55.351390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.006  [2024-12-09 04:15:55.351425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.006  [2024-12-09 04:15:55.351442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.006  [2024-12-09 04:15:55.356081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.006  [2024-12-09 04:15:55.356132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.006  [2024-12-09 04:15:55.356149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.006  [2024-12-09 04:15:55.360793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.006  [2024-12-09 04:15:55.360839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.006  [2024-12-09 04:15:55.360855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.006  [2024-12-09 04:15:55.365848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.006  [2024-12-09 04:15:55.365880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.006  [2024-12-09 04:15:55.365910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.006  [2024-12-09 04:15:55.371030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.006  [2024-12-09 04:15:55.371077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.006  [2024-12-09 04:15:55.371094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.006  [2024-12-09 04:15:55.375976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.006  [2024-12-09 04:15:55.376022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.006  [2024-12-09 04:15:55.376039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.006  [2024-12-09 04:15:55.380678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.006  [2024-12-09 04:15:55.380709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.007  [2024-12-09 04:15:55.380733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.007  [2024-12-09 04:15:55.386167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.007  [2024-12-09 04:15:55.386197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.007  [2024-12-09 04:15:55.386214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.007  [2024-12-09 04:15:55.391818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.007  [2024-12-09 04:15:55.391867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.007  [2024-12-09 04:15:55.391885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.007  [2024-12-09 04:15:55.397677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.007  [2024-12-09 04:15:55.397709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.007  [2024-12-09 04:15:55.397726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.007  [2024-12-09 04:15:55.404230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.007  [2024-12-09 04:15:55.404283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.007  [2024-12-09 04:15:55.404302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.007  [2024-12-09 04:15:55.409750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.007  [2024-12-09 04:15:55.409782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.007  [2024-12-09 04:15:55.409813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.007  [2024-12-09 04:15:55.415207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.007  [2024-12-09 04:15:55.415239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.007  [2024-12-09 04:15:55.415279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.007  [2024-12-09 04:15:55.420133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.007  [2024-12-09 04:15:55.420163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.007  [2024-12-09 04:15:55.420181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.007  [2024-12-09 04:15:55.424772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.007  [2024-12-09 04:15:55.424803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.007  [2024-12-09 04:15:55.424834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.007  [2024-12-09 04:15:55.429429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.007  [2024-12-09 04:15:55.429465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.007  [2024-12-09 04:15:55.429484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.007  [2024-12-09 04:15:55.433997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.007  [2024-12-09 04:15:55.434028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.007  [2024-12-09 04:15:55.434045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.007  [2024-12-09 04:15:55.438749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.007  [2024-12-09 04:15:55.438795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.007  [2024-12-09 04:15:55.438812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.007  [2024-12-09 04:15:55.443411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.007  [2024-12-09 04:15:55.443442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.007  [2024-12-09 04:15:55.443459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.007  [2024-12-09 04:15:55.448232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.007  [2024-12-09 04:15:55.448285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.007  [2024-12-09 04:15:55.448304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.007  [2024-12-09 04:15:55.453658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.007  [2024-12-09 04:15:55.453703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.007  [2024-12-09 04:15:55.453719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.007  [2024-12-09 04:15:55.458502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.007  [2024-12-09 04:15:55.458534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.007  [2024-12-09 04:15:55.458551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.007  [2024-12-09 04:15:55.463207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.007  [2024-12-09 04:15:55.463253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.007  [2024-12-09 04:15:55.463269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.007  [2024-12-09 04:15:55.467827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.007  [2024-12-09 04:15:55.467872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.007  [2024-12-09 04:15:55.467889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.007  [2024-12-09 04:15:55.472316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.007  [2024-12-09 04:15:55.472348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.007  [2024-12-09 04:15:55.472365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.007  [2024-12-09 04:15:55.475397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.007  [2024-12-09 04:15:55.475429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.007  [2024-12-09 04:15:55.475447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.007  [2024-12-09 04:15:55.479030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.007  [2024-12-09 04:15:55.479060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.007  [2024-12-09 04:15:55.479077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.007  [2024-12-09 04:15:55.482418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.007  [2024-12-09 04:15:55.482451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.007  [2024-12-09 04:15:55.482469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.007  [2024-12-09 04:15:55.485480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.007  [2024-12-09 04:15:55.485511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.007  [2024-12-09 04:15:55.485529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.007  [2024-12-09 04:15:55.488866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.007  [2024-12-09 04:15:55.488899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.007  [2024-12-09 04:15:55.488917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.007  [2024-12-09 04:15:55.492890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.007  [2024-12-09 04:15:55.492921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.007  [2024-12-09 04:15:55.492938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.007  [2024-12-09 04:15:55.497918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.007  [2024-12-09 04:15:55.497951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.007  [2024-12-09 04:15:55.497969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.007  [2024-12-09 04:15:55.503956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.007  [2024-12-09 04:15:55.503988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.007  [2024-12-09 04:15:55.504026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.007  [2024-12-09 04:15:55.511719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.007  [2024-12-09 04:15:55.511751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.008  [2024-12-09 04:15:55.511784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.008  [2024-12-09 04:15:55.517716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.008  [2024-12-09 04:15:55.517748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.008  [2024-12-09 04:15:55.517766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.008  [2024-12-09 04:15:55.520980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.008  [2024-12-09 04:15:55.521011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.008  [2024-12-09 04:15:55.521029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.008  [2024-12-09 04:15:55.525667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.008  [2024-12-09 04:15:55.525699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.008  [2024-12-09 04:15:55.525717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.008  [2024-12-09 04:15:55.529553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.008  [2024-12-09 04:15:55.529584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.008  [2024-12-09 04:15:55.529601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.008  [2024-12-09 04:15:55.532921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.008  [2024-12-09 04:15:55.532952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.008  [2024-12-09 04:15:55.532969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.008  [2024-12-09 04:15:55.536550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.008  [2024-12-09 04:15:55.536581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.008  [2024-12-09 04:15:55.536598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.008  [2024-12-09 04:15:55.540202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.008  [2024-12-09 04:15:55.540234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.008  [2024-12-09 04:15:55.540252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.008  [2024-12-09 04:15:55.543142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.008  [2024-12-09 04:15:55.543170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.008  [2024-12-09 04:15:55.543187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.008  [2024-12-09 04:15:55.547701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.008  [2024-12-09 04:15:55.547732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.008  [2024-12-09 04:15:55.547749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.008  [2024-12-09 04:15:55.553091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.008  [2024-12-09 04:15:55.553121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.008  [2024-12-09 04:15:55.553138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.008  [2024-12-09 04:15:55.560075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.008  [2024-12-09 04:15:55.560107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.008  [2024-12-09 04:15:55.560125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.008  [2024-12-09 04:15:55.566827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.008  [2024-12-09 04:15:55.566858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.008  [2024-12-09 04:15:55.566891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.008  [2024-12-09 04:15:55.572417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.008  [2024-12-09 04:15:55.572450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.008  [2024-12-09 04:15:55.572467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.008  [2024-12-09 04:15:55.577991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.008  [2024-12-09 04:15:55.578021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.008  [2024-12-09 04:15:55.578038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.266  [2024-12-09 04:15:55.583419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.266  [2024-12-09 04:15:55.583450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.266  [2024-12-09 04:15:55.583468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.266  [2024-12-09 04:15:55.588006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.266  [2024-12-09 04:15:55.588037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.266  [2024-12-09 04:15:55.588059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.266  [2024-12-09 04:15:55.592684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.266  [2024-12-09 04:15:55.592715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.266  [2024-12-09 04:15:55.592732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.266  [2024-12-09 04:15:55.597306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.266  [2024-12-09 04:15:55.597337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.266  [2024-12-09 04:15:55.597355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.266  [2024-12-09 04:15:55.603242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.266  [2024-12-09 04:15:55.603298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.266  [2024-12-09 04:15:55.603317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.266  [2024-12-09 04:15:55.608159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.266  [2024-12-09 04:15:55.608190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.266  [2024-12-09 04:15:55.608207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.266  [2024-12-09 04:15:55.612862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.266  [2024-12-09 04:15:55.612892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.266  [2024-12-09 04:15:55.612909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.266  [2024-12-09 04:15:55.618545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.266  [2024-12-09 04:15:55.618577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.266  [2024-12-09 04:15:55.618609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.266  [2024-12-09 04:15:55.623800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.266  [2024-12-09 04:15:55.623832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.266  [2024-12-09 04:15:55.623849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.266  [2024-12-09 04:15:55.629159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.267  [2024-12-09 04:15:55.629204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.267  [2024-12-09 04:15:55.629221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.267  [2024-12-09 04:15:55.634796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.267  [2024-12-09 04:15:55.634847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.267  [2024-12-09 04:15:55.634864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.267  [2024-12-09 04:15:55.641754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.267  [2024-12-09 04:15:55.641785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.267  [2024-12-09 04:15:55.641801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.267  [2024-12-09 04:15:55.647063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.267  [2024-12-09 04:15:55.647093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.267  [2024-12-09 04:15:55.647110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.267  [2024-12-09 04:15:55.652358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.267  [2024-12-09 04:15:55.652389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.267  [2024-12-09 04:15:55.652405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.267  [2024-12-09 04:15:55.656804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.267  [2024-12-09 04:15:55.656848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.267  [2024-12-09 04:15:55.656863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.267  [2024-12-09 04:15:55.661551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.267  [2024-12-09 04:15:55.661596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.267  [2024-12-09 04:15:55.661613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.267  [2024-12-09 04:15:55.666635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.267  [2024-12-09 04:15:55.666666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.267  [2024-12-09 04:15:55.666684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.267  [2024-12-09 04:15:55.672869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.267  [2024-12-09 04:15:55.672914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.267  [2024-12-09 04:15:55.672931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.267  [2024-12-09 04:15:55.680687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.267  [2024-12-09 04:15:55.680718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.267  [2024-12-09 04:15:55.680751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.267  [2024-12-09 04:15:55.686823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.267  [2024-12-09 04:15:55.686869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.267  [2024-12-09 04:15:55.686888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.267  [2024-12-09 04:15:55.693909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.267  [2024-12-09 04:15:55.693955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.267  [2024-12-09 04:15:55.693972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.267  [2024-12-09 04:15:55.701588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.267  [2024-12-09 04:15:55.701621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.267  [2024-12-09 04:15:55.701639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.267  [2024-12-09 04:15:55.708361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.267  [2024-12-09 04:15:55.708394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.267  [2024-12-09 04:15:55.708412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.267  [2024-12-09 04:15:55.715741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.267  [2024-12-09 04:15:55.715774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.267  [2024-12-09 04:15:55.715792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.267  [2024-12-09 04:15:55.723725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.267  [2024-12-09 04:15:55.723772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.267  [2024-12-09 04:15:55.723790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.267  [2024-12-09 04:15:55.731657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.267  [2024-12-09 04:15:55.731690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.267  [2024-12-09 04:15:55.731723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.267  [2024-12-09 04:15:55.739330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.267  [2024-12-09 04:15:55.739363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.267  [2024-12-09 04:15:55.739382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.267  [2024-12-09 04:15:55.747055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.267  [2024-12-09 04:15:55.747088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.267  [2024-12-09 04:15:55.747112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.267  [2024-12-09 04:15:55.754656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.267  [2024-12-09 04:15:55.754688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.267  [2024-12-09 04:15:55.754706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.267  [2024-12-09 04:15:55.760367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.267  [2024-12-09 04:15:55.760398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.267  [2024-12-09 04:15:55.760416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.267  [2024-12-09 04:15:55.765166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.267  [2024-12-09 04:15:55.765197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.267  [2024-12-09 04:15:55.765215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.267  [2024-12-09 04:15:55.770340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.267  [2024-12-09 04:15:55.770371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.267  [2024-12-09 04:15:55.770389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.267  [2024-12-09 04:15:55.775396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.268  [2024-12-09 04:15:55.775428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.268  [2024-12-09 04:15:55.775446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.268  [2024-12-09 04:15:55.779903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.268  [2024-12-09 04:15:55.779934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.268  [2024-12-09 04:15:55.779951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.268  [2024-12-09 04:15:55.784395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.268  [2024-12-09 04:15:55.784426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.268  [2024-12-09 04:15:55.784444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.268  [2024-12-09 04:15:55.788878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.268  [2024-12-09 04:15:55.788908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.268  [2024-12-09 04:15:55.788926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.268  [2024-12-09 04:15:55.793602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.268  [2024-12-09 04:15:55.793640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.268  [2024-12-09 04:15:55.793658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.268  [2024-12-09 04:15:55.798718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.268  [2024-12-09 04:15:55.798749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.268  [2024-12-09 04:15:55.798766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.268  [2024-12-09 04:15:55.803581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.268  [2024-12-09 04:15:55.803615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.268  [2024-12-09 04:15:55.803649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.268  [2024-12-09 04:15:55.808299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.268  [2024-12-09 04:15:55.808331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.268  [2024-12-09 04:15:55.808349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.268  [2024-12-09 04:15:55.812996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.268  [2024-12-09 04:15:55.813042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.268  [2024-12-09 04:15:55.813060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.268  [2024-12-09 04:15:55.817569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.268  [2024-12-09 04:15:55.817614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.268  [2024-12-09 04:15:55.817630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.268  [2024-12-09 04:15:55.822236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.268  [2024-12-09 04:15:55.822267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.268  [2024-12-09 04:15:55.822309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.268  [2024-12-09 04:15:55.826800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.268  [2024-12-09 04:15:55.826843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.268  [2024-12-09 04:15:55.826860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.268  [2024-12-09 04:15:55.831545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.268  [2024-12-09 04:15:55.831594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.268  [2024-12-09 04:15:55.831611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.268  [2024-12-09 04:15:55.836745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.268  [2024-12-09 04:15:55.836777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.268  [2024-12-09 04:15:55.836814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.526  [2024-12-09 04:15:55.841845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.527  [2024-12-09 04:15:55.841878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.527  [2024-12-09 04:15:55.841896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.527  [2024-12-09 04:15:55.846549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.527  [2024-12-09 04:15:55.846580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.527  [2024-12-09 04:15:55.846612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.527  [2024-12-09 04:15:55.851729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.527  [2024-12-09 04:15:55.851759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.527  [2024-12-09 04:15:55.851776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.527  [2024-12-09 04:15:55.858107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.527  [2024-12-09 04:15:55.858154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.527  [2024-12-09 04:15:55.858173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.527  [2024-12-09 04:15:55.865683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.527  [2024-12-09 04:15:55.865715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.527  [2024-12-09 04:15:55.865733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.527  [2024-12-09 04:15:55.871234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.527  [2024-12-09 04:15:55.871266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.527  [2024-12-09 04:15:55.871291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.527  [2024-12-09 04:15:55.877515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.527  [2024-12-09 04:15:55.877547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.527  [2024-12-09 04:15:55.877565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.527  [2024-12-09 04:15:55.883981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.527  [2024-12-09 04:15:55.884020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.527  [2024-12-09 04:15:55.884039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.527  [2024-12-09 04:15:55.891468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.527  [2024-12-09 04:15:55.891501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.527  [2024-12-09 04:15:55.891520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.527  [2024-12-09 04:15:55.899754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.527  [2024-12-09 04:15:55.899788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.527  [2024-12-09 04:15:55.899806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.527  [2024-12-09 04:15:55.905755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.527  [2024-12-09 04:15:55.905789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.527  [2024-12-09 04:15:55.905808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.527  [2024-12-09 04:15:55.911100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.527  [2024-12-09 04:15:55.911131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.527  [2024-12-09 04:15:55.911148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.527  [2024-12-09 04:15:55.916434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.527  [2024-12-09 04:15:55.916465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.527  [2024-12-09 04:15:55.916483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.527  [2024-12-09 04:15:55.921826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.527  [2024-12-09 04:15:55.921858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.527  [2024-12-09 04:15:55.921876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.527  [2024-12-09 04:15:55.927135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.527  [2024-12-09 04:15:55.927167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.527  [2024-12-09 04:15:55.927184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.527  [2024-12-09 04:15:55.932854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.527  [2024-12-09 04:15:55.932887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.527  [2024-12-09 04:15:55.932906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.527  [2024-12-09 04:15:55.938533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.527  [2024-12-09 04:15:55.938565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.527  [2024-12-09 04:15:55.938582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.527  [2024-12-09 04:15:55.944654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.527  [2024-12-09 04:15:55.944687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.527  [2024-12-09 04:15:55.944706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.527  [2024-12-09 04:15:55.951114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.527  [2024-12-09 04:15:55.951147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.527  [2024-12-09 04:15:55.951165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.527  [2024-12-09 04:15:55.954952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.527  [2024-12-09 04:15:55.954985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.527  [2024-12-09 04:15:55.955003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.527  [2024-12-09 04:15:55.959422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.527  [2024-12-09 04:15:55.959453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.527  [2024-12-09 04:15:55.959470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.527  [2024-12-09 04:15:55.963108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.527  [2024-12-09 04:15:55.963138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.527  [2024-12-09 04:15:55.963154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.527  [2024-12-09 04:15:55.967287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.527  [2024-12-09 04:15:55.967319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.527  [2024-12-09 04:15:55.967336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.527  [2024-12-09 04:15:55.971866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.527  [2024-12-09 04:15:55.971896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.527  [2024-12-09 04:15:55.971928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.527  [2024-12-09 04:15:55.976454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.527  [2024-12-09 04:15:55.976484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.527  [2024-12-09 04:15:55.976511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.527  [2024-12-09 04:15:55.981129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.527  [2024-12-09 04:15:55.981174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.527  [2024-12-09 04:15:55.981190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.527  [2024-12-09 04:15:55.985850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528  [2024-12-09 04:15:55.985894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528  [2024-12-09 04:15:55.985911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.528  [2024-12-09 04:15:55.990523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528  [2024-12-09 04:15:55.990570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528  [2024-12-09 04:15:55.990586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.528  [2024-12-09 04:15:55.995267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528  [2024-12-09 04:15:55.995304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528  [2024-12-09 04:15:55.995321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.528  [2024-12-09 04:15:55.999951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528  [2024-12-09 04:15:55.999982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528  [2024-12-09 04:15:56.000001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.528  [2024-12-09 04:15:56.004518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528  [2024-12-09 04:15:56.004548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528  [2024-12-09 04:15:56.004565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.528  [2024-12-09 04:15:56.010119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528  [2024-12-09 04:15:56.010151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528  [2024-12-09 04:15:56.010169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.528  [2024-12-09 04:15:56.014875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528  [2024-12-09 04:15:56.014906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528  [2024-12-09 04:15:56.014924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.528  [2024-12-09 04:15:56.019669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528  [2024-12-09 04:15:56.019721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528  [2024-12-09 04:15:56.019738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.528  [2024-12-09 04:15:56.024375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528  [2024-12-09 04:15:56.024405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528  [2024-12-09 04:15:56.024422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.528  [2024-12-09 04:15:56.029140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528  [2024-12-09 04:15:56.029170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528  [2024-12-09 04:15:56.029187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.528  [2024-12-09 04:15:56.033752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528  [2024-12-09 04:15:56.033783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528  [2024-12-09 04:15:56.033801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.528  [2024-12-09 04:15:56.038906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528  [2024-12-09 04:15:56.038938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528  [2024-12-09 04:15:56.038956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.528  [2024-12-09 04:15:56.045117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528  [2024-12-09 04:15:56.045149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528  [2024-12-09 04:15:56.045167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.528  [2024-12-09 04:15:56.052720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528  [2024-12-09 04:15:56.052752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528  [2024-12-09 04:15:56.052770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.528  [2024-12-09 04:15:56.058441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528  [2024-12-09 04:15:56.058473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528  [2024-12-09 04:15:56.058490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.528  [2024-12-09 04:15:56.063904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528  [2024-12-09 04:15:56.063935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528  [2024-12-09 04:15:56.063953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.528  [2024-12-09 04:15:56.069858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528  [2024-12-09 04:15:56.069890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528  [2024-12-09 04:15:56.069923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.528  [2024-12-09 04:15:56.075909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528  [2024-12-09 04:15:56.075957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528  [2024-12-09 04:15:56.075976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.528  [2024-12-09 04:15:56.081984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528  [2024-12-09 04:15:56.082030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528  [2024-12-09 04:15:56.082047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.528  [2024-12-09 04:15:56.087811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528  [2024-12-09 04:15:56.087844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528  [2024-12-09 04:15:56.087862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.528  [2024-12-09 04:15:56.094188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528  [2024-12-09 04:15:56.094221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528  [2024-12-09 04:15:56.094239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.528  [2024-12-09 04:15:56.099960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528  [2024-12-09 04:15:56.099993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528  [2024-12-09 04:15:56.100011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.786  [2024-12-09 04:15:56.104540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.786  [2024-12-09 04:15:56.104572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.786  [2024-12-09 04:15:56.104589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.786  [2024-12-09 04:15:56.109076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.786  [2024-12-09 04:15:56.109106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787  [2024-12-09 04:15:56.109122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.787  [2024-12-09 04:15:56.113587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787  [2024-12-09 04:15:56.113618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787  [2024-12-09 04:15:56.113640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.787  [2024-12-09 04:15:56.118263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787  [2024-12-09 04:15:56.118301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787  [2024-12-09 04:15:56.118319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.787  [2024-12-09 04:15:56.123723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787  [2024-12-09 04:15:56.123754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787  [2024-12-09 04:15:56.123772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.787  [2024-12-09 04:15:56.130963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787  [2024-12-09 04:15:56.131010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787  [2024-12-09 04:15:56.131027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.787  [2024-12-09 04:15:56.138486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787  [2024-12-09 04:15:56.138517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787  [2024-12-09 04:15:56.138535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.787  [2024-12-09 04:15:56.146073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787  [2024-12-09 04:15:56.146105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787  [2024-12-09 04:15:56.146122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.787  [2024-12-09 04:15:56.153676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787  [2024-12-09 04:15:56.153709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787  [2024-12-09 04:15:56.153727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.787  [2024-12-09 04:15:56.161290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787  [2024-12-09 04:15:56.161334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787  [2024-12-09 04:15:56.161352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.787  [2024-12-09 04:15:56.169096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787  [2024-12-09 04:15:56.169130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787  [2024-12-09 04:15:56.169148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.787  [2024-12-09 04:15:56.176588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787  [2024-12-09 04:15:56.176621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787  [2024-12-09 04:15:56.176639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.787  [2024-12-09 04:15:56.184540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787  [2024-12-09 04:15:56.184574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787  [2024-12-09 04:15:56.184608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.787  [2024-12-09 04:15:56.192594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787  [2024-12-09 04:15:56.192627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787  [2024-12-09 04:15:56.192645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.787  [2024-12-09 04:15:56.200796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787  [2024-12-09 04:15:56.200830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787  [2024-12-09 04:15:56.200848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.787  [2024-12-09 04:15:56.209074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787  [2024-12-09 04:15:56.209107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787  [2024-12-09 04:15:56.209126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.787  [2024-12-09 04:15:56.217117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787  [2024-12-09 04:15:56.217149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787  [2024-12-09 04:15:56.217183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.787  [2024-12-09 04:15:56.225655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787  [2024-12-09 04:15:56.225688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787  [2024-12-09 04:15:56.225706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.787  [2024-12-09 04:15:56.234070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787  [2024-12-09 04:15:56.234104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787  [2024-12-09 04:15:56.234122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.787  [2024-12-09 04:15:56.241610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787  [2024-12-09 04:15:56.241644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787  [2024-12-09 04:15:56.241669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.787  [2024-12-09 04:15:56.247324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787  [2024-12-09 04:15:56.247357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787  [2024-12-09 04:15:56.247374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.787  [2024-12-09 04:15:56.252542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787  [2024-12-09 04:15:56.252589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787  [2024-12-09 04:15:56.252607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.787  [2024-12-09 04:15:56.258238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787  [2024-12-09 04:15:56.258270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787  [2024-12-09 04:15:56.258298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.787  [2024-12-09 04:15:56.263652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787  [2024-12-09 04:15:56.263683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787  [2024-12-09 04:15:56.263701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.787  [2024-12-09 04:15:56.268672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787  [2024-12-09 04:15:56.268704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787  [2024-12-09 04:15:56.268721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.787       5578.00 IOPS,   697.25 MiB/s
00:25:27.787  [2024-12-09 04:15:56.275734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787  [2024-12-09 04:15:56.275777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787  [2024-12-09 04:15:56.275793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.787  [2024-12-09 04:15:56.281145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787  [2024-12-09 04:15:56.281177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787  [2024-12-09 04:15:56.281194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.787  [2024-12-09 04:15:56.287614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.788  [2024-12-09 04:15:56.287646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.788  [2024-12-09 04:15:56.287678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.788  [2024-12-09 04:15:56.294743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.788  [2024-12-09 04:15:56.294783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.788  [2024-12-09 04:15:56.294803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.788  [2024-12-09 04:15:56.300649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.788  [2024-12-09 04:15:56.300682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.788  [2024-12-09 04:15:56.300701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.788  [2024-12-09 04:15:56.307033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.788  [2024-12-09 04:15:56.307065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.788  [2024-12-09 04:15:56.307084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.788  [2024-12-09 04:15:56.313488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.788  [2024-12-09 04:15:56.313521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.788  [2024-12-09 04:15:56.313539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.788  [2024-12-09 04:15:56.319568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.788  [2024-12-09 04:15:56.319601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.788  [2024-12-09 04:15:56.319619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.788  [2024-12-09 04:15:56.324962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.788  [2024-12-09 04:15:56.324996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.788  [2024-12-09 04:15:56.325013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.788  [2024-12-09 04:15:56.330348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.788  [2024-12-09 04:15:56.330380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.788  [2024-12-09 04:15:56.330398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.788  [2024-12-09 04:15:56.335445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.788  [2024-12-09 04:15:56.335477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.788  [2024-12-09 04:15:56.335495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.788  [2024-12-09 04:15:56.340604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.788  [2024-12-09 04:15:56.340637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.788  [2024-12-09 04:15:56.340656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.788  [2024-12-09 04:15:56.343483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.788  [2024-12-09 04:15:56.343514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.788  [2024-12-09 04:15:56.343532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.788  [2024-12-09 04:15:56.347252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.788  [2024-12-09 04:15:56.347291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.788  [2024-12-09 04:15:56.347311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.788  [2024-12-09 04:15:56.351782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.788  [2024-12-09 04:15:56.351812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.788  [2024-12-09 04:15:56.351829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.788  [2024-12-09 04:15:56.356465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.788  [2024-12-09 04:15:56.356497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.788  [2024-12-09 04:15:56.356516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.788  [2024-12-09 04:15:56.361476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.788  [2024-12-09 04:15:56.361511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.788  [2024-12-09 04:15:56.361529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.047  [2024-12-09 04:15:56.366220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047  [2024-12-09 04:15:56.366251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047  [2024-12-09 04:15:56.366269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.047  [2024-12-09 04:15:56.370935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047  [2024-12-09 04:15:56.370968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047  [2024-12-09 04:15:56.371000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.047  [2024-12-09 04:15:56.375850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047  [2024-12-09 04:15:56.375881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047  [2024-12-09 04:15:56.375899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.047  [2024-12-09 04:15:56.380498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047  [2024-12-09 04:15:56.380529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047  [2024-12-09 04:15:56.380567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.047  [2024-12-09 04:15:56.385307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047  [2024-12-09 04:15:56.385338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047  [2024-12-09 04:15:56.385355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.047  [2024-12-09 04:15:56.389908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047  [2024-12-09 04:15:56.389937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047  [2024-12-09 04:15:56.389954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.047  [2024-12-09 04:15:56.394995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047  [2024-12-09 04:15:56.395042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047  [2024-12-09 04:15:56.395061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.047  [2024-12-09 04:15:56.401442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047  [2024-12-09 04:15:56.401475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047  [2024-12-09 04:15:56.401494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.047  [2024-12-09 04:15:56.407734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047  [2024-12-09 04:15:56.407767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047  [2024-12-09 04:15:56.407799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.047  [2024-12-09 04:15:56.413793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047  [2024-12-09 04:15:56.413827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047  [2024-12-09 04:15:56.413845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.047  [2024-12-09 04:15:56.420749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047  [2024-12-09 04:15:56.420797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047  [2024-12-09 04:15:56.420815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.047  [2024-12-09 04:15:56.426472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047  [2024-12-09 04:15:56.426504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047  [2024-12-09 04:15:56.426523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.047  [2024-12-09 04:15:56.432516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047  [2024-12-09 04:15:56.432549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047  [2024-12-09 04:15:56.432566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.047  [2024-12-09 04:15:56.438594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047  [2024-12-09 04:15:56.438628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047  [2024-12-09 04:15:56.438659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.047  [2024-12-09 04:15:56.444290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047  [2024-12-09 04:15:56.444323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047  [2024-12-09 04:15:56.444342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.047  [2024-12-09 04:15:56.450218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047  [2024-12-09 04:15:56.450250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047  [2024-12-09 04:15:56.450292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.047  [2024-12-09 04:15:56.455872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047  [2024-12-09 04:15:56.455903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047  [2024-12-09 04:15:56.455921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.047  [2024-12-09 04:15:56.461473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047  [2024-12-09 04:15:56.461506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047  [2024-12-09 04:15:56.461523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.047  [2024-12-09 04:15:56.467425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047  [2024-12-09 04:15:56.467457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047  [2024-12-09 04:15:56.467475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.047  [2024-12-09 04:15:56.474553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047  [2024-12-09 04:15:56.474599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047  [2024-12-09 04:15:56.474617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.047  [2024-12-09 04:15:56.480727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047  [2024-12-09 04:15:56.480759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047  [2024-12-09 04:15:56.480783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.048  [2024-12-09 04:15:56.486829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.048  [2024-12-09 04:15:56.486876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.048  [2024-12-09 04:15:56.486894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.048  [2024-12-09 04:15:56.492608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.048  [2024-12-09 04:15:56.492641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.048  [2024-12-09 04:15:56.492672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.048  [2024-12-09 04:15:56.497987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.048  [2024-12-09 04:15:56.498035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.048  [2024-12-09 04:15:56.498052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.048  [2024-12-09 04:15:56.504024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.048  [2024-12-09 04:15:56.504056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.048  [2024-12-09 04:15:56.504074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.048  [2024-12-09 04:15:56.510375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.048  [2024-12-09 04:15:56.510407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.048  [2024-12-09 04:15:56.510425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.048  [2024-12-09 04:15:56.515193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.048  [2024-12-09 04:15:56.515225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.048  [2024-12-09 04:15:56.515243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.048  [2024-12-09 04:15:56.519757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.048  [2024-12-09 04:15:56.519787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.048  [2024-12-09 04:15:56.519804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.048  [2024-12-09 04:15:56.524765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.048  [2024-12-09 04:15:56.524796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.048  [2024-12-09 04:15:56.524829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.048  [2024-12-09 04:15:56.530122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.048  [2024-12-09 04:15:56.530159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.048  [2024-12-09 04:15:56.530192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.048  [2024-12-09 04:15:56.534858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.048  [2024-12-09 04:15:56.534903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.048  [2024-12-09 04:15:56.534920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.048  [2024-12-09 04:15:56.539693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.048  [2024-12-09 04:15:56.539723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.048  [2024-12-09 04:15:56.539759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.048  [2024-12-09 04:15:56.544399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.048  [2024-12-09 04:15:56.544430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.048  [2024-12-09 04:15:56.544448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.048  [2024-12-09 04:15:56.549063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.048  [2024-12-09 04:15:56.549094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.048  [2024-12-09 04:15:56.549110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.048  [2024-12-09 04:15:56.553764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.048  [2024-12-09 04:15:56.553795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.048  [2024-12-09 04:15:56.553813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.048  [2024-12-09 04:15:56.558412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.048  [2024-12-09 04:15:56.558442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.048  [2024-12-09 04:15:56.558460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.048  [2024-12-09 04:15:56.563029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.048  [2024-12-09 04:15:56.563060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.048  [2024-12-09 04:15:56.563077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.048  [2024-12-09 04:15:56.568190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.048  [2024-12-09 04:15:56.568220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.048  [2024-12-09 04:15:56.568237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.048  [2024-12-09 04:15:56.573363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.048  [2024-12-09 04:15:56.573394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.048  [2024-12-09 04:15:56.573411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.048  [2024-12-09 04:15:56.578066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.048  [2024-12-09 04:15:56.578112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.048  [2024-12-09 04:15:56.578129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.048  [2024-12-09 04:15:56.582864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.048  [2024-12-09 04:15:56.582895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.048  [2024-12-09 04:15:56.582913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.048  [2024-12-09 04:15:56.587689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.048  [2024-12-09 04:15:56.587720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.048  [2024-12-09 04:15:56.587737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.048  [2024-12-09 04:15:56.592867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.048  [2024-12-09 04:15:56.592899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.048  [2024-12-09 04:15:56.592916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.048  [2024-12-09 04:15:56.598579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.048  [2024-12-09 04:15:56.598611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.048  [2024-12-09 04:15:56.598628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.048  [2024-12-09 04:15:56.604113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.048  [2024-12-09 04:15:56.604145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.048  [2024-12-09 04:15:56.604163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.048  [2024-12-09 04:15:56.609519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.048  [2024-12-09 04:15:56.609551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.048  [2024-12-09 04:15:56.609569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.048  [2024-12-09 04:15:56.614491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.048  [2024-12-09 04:15:56.614524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.048  [2024-12-09 04:15:56.614547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.048  [2024-12-09 04:15:56.617322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.048  [2024-12-09 04:15:56.617354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.048  [2024-12-09 04:15:56.617372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.049  [2024-12-09 04:15:56.621411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.049  [2024-12-09 04:15:56.621442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.049  [2024-12-09 04:15:56.621460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.307  [2024-12-09 04:15:56.625617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.307  [2024-12-09 04:15:56.625648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.307  [2024-12-09 04:15:56.625666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.307  [2024-12-09 04:15:56.628737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.307  [2024-12-09 04:15:56.628766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.307  [2024-12-09 04:15:56.628783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.307  [2024-12-09 04:15:56.633561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.307  [2024-12-09 04:15:56.633607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.307  [2024-12-09 04:15:56.633624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.307  [2024-12-09 04:15:56.638616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.307  [2024-12-09 04:15:56.638646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.307  [2024-12-09 04:15:56.638662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.307  [2024-12-09 04:15:56.643981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.307  [2024-12-09 04:15:56.644014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.307  [2024-12-09 04:15:56.644032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.307  [2024-12-09 04:15:56.649734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.307  [2024-12-09 04:15:56.649766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.307  [2024-12-09 04:15:56.649784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.307  [2024-12-09 04:15:56.654659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.307  [2024-12-09 04:15:56.654693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.307  [2024-12-09 04:15:56.654711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.307  [2024-12-09 04:15:56.659649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.307  [2024-12-09 04:15:56.659679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.307  [2024-12-09 04:15:56.659696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.307  [2024-12-09 04:15:56.664494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.307  [2024-12-09 04:15:56.664525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.307  [2024-12-09 04:15:56.664542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.307  [2024-12-09 04:15:56.669034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.307  [2024-12-09 04:15:56.669064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.308  [2024-12-09 04:15:56.669081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.308  [2024-12-09 04:15:56.674418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.308  [2024-12-09 04:15:56.674448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.308  [2024-12-09 04:15:56.674466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.308  [2024-12-09 04:15:56.678245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.308  [2024-12-09 04:15:56.678297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.308  [2024-12-09 04:15:56.678316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.308  [2024-12-09 04:15:56.683302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.308  [2024-12-09 04:15:56.683333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.308  [2024-12-09 04:15:56.683350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.308  [2024-12-09 04:15:56.687004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.308  [2024-12-09 04:15:56.687034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.308  [2024-12-09 04:15:56.687051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.308  [2024-12-09 04:15:56.691472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.308  [2024-12-09 04:15:56.691509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.308  [2024-12-09 04:15:56.691538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.308  [2024-12-09 04:15:56.695981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.308  [2024-12-09 04:15:56.696009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.308  [2024-12-09 04:15:56.696026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.308  [2024-12-09 04:15:56.700514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.308  [2024-12-09 04:15:56.700543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.308  [2024-12-09 04:15:56.700560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.308  [2024-12-09 04:15:56.705121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.308  [2024-12-09 04:15:56.705150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.308  [2024-12-09 04:15:56.705166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.308  [2024-12-09 04:15:56.710576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.308  [2024-12-09 04:15:56.710622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.308  [2024-12-09 04:15:56.710639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.308  [2024-12-09 04:15:56.714522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.308  [2024-12-09 04:15:56.714553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.308  [2024-12-09 04:15:56.714571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.308  [2024-12-09 04:15:56.719106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.308  [2024-12-09 04:15:56.719137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.308  [2024-12-09 04:15:56.719169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.308  [2024-12-09 04:15:56.723654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.308  [2024-12-09 04:15:56.723699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.308  [2024-12-09 04:15:56.723716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.308  [2024-12-09 04:15:56.728403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.308  [2024-12-09 04:15:56.728434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.308  [2024-12-09 04:15:56.728451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.308  [2024-12-09 04:15:56.733707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.308  [2024-12-09 04:15:56.733741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.308  [2024-12-09 04:15:56.733759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.308  [2024-12-09 04:15:56.737608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.308  [2024-12-09 04:15:56.737637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.308  [2024-12-09 04:15:56.737654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.308  [2024-12-09 04:15:56.742503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.308  [2024-12-09 04:15:56.742535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.308  [2024-12-09 04:15:56.742552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.308  [2024-12-09 04:15:56.747940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.308  [2024-12-09 04:15:56.747971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.308  [2024-12-09 04:15:56.747988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.308  [2024-12-09 04:15:56.755227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.308  [2024-12-09 04:15:56.755279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.308  [2024-12-09 04:15:56.755313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.308  [2024-12-09 04:15:56.761382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.308  [2024-12-09 04:15:56.761414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.308  [2024-12-09 04:15:56.761432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.308  [2024-12-09 04:15:56.767643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.308  [2024-12-09 04:15:56.767673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.308  [2024-12-09 04:15:56.767690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.308  [2024-12-09 04:15:56.773387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.308  [2024-12-09 04:15:56.773418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.308  [2024-12-09 04:15:56.773435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.308  [2024-12-09 04:15:56.778996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.308  [2024-12-09 04:15:56.779026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.308  [2024-12-09 04:15:56.779043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.308  [2024-12-09 04:15:56.784160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.308  [2024-12-09 04:15:56.784191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.308  [2024-12-09 04:15:56.784209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.308  [2024-12-09 04:15:56.788872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.308  [2024-12-09 04:15:56.788902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.308  [2024-12-09 04:15:56.788919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.308  [2024-12-09 04:15:56.793530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.308  [2024-12-09 04:15:56.793580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.308  [2024-12-09 04:15:56.793598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.309  [2024-12-09 04:15:56.798423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.309  [2024-12-09 04:15:56.798455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.309  [2024-12-09 04:15:56.798472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.309  [2024-12-09 04:15:56.804348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.309  [2024-12-09 04:15:56.804380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.309  [2024-12-09 04:15:56.804398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.309  [2024-12-09 04:15:56.811816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.309  [2024-12-09 04:15:56.811847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.309  [2024-12-09 04:15:56.811864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.309  [2024-12-09 04:15:56.818286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.309  [2024-12-09 04:15:56.818332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.309  [2024-12-09 04:15:56.818350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.309  [2024-12-09 04:15:56.823747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.309  [2024-12-09 04:15:56.823779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.309  [2024-12-09 04:15:56.823797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.309  [2024-12-09 04:15:56.829604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.309  [2024-12-09 04:15:56.829650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.309  [2024-12-09 04:15:56.829674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.309  [2024-12-09 04:15:56.835967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.309  [2024-12-09 04:15:56.836015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.309  [2024-12-09 04:15:56.836039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.309  [2024-12-09 04:15:56.840903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.309  [2024-12-09 04:15:56.840935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.309  [2024-12-09 04:15:56.840970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.309  [2024-12-09 04:15:56.846142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.309  [2024-12-09 04:15:56.846174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.309  [2024-12-09 04:15:56.846209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.309  [2024-12-09 04:15:56.850734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.309  [2024-12-09 04:15:56.850764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.309  [2024-12-09 04:15:56.850781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.309  [2024-12-09 04:15:56.856413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.309  [2024-12-09 04:15:56.856444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.309  [2024-12-09 04:15:56.856462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.309  [2024-12-09 04:15:56.863578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.309  [2024-12-09 04:15:56.863610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.309  [2024-12-09 04:15:56.863629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.309  [2024-12-09 04:15:56.870605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.309  [2024-12-09 04:15:56.870638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.309  [2024-12-09 04:15:56.870656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.309  [2024-12-09 04:15:56.876140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.309  [2024-12-09 04:15:56.876171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.309  [2024-12-09 04:15:56.876203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.309  [2024-12-09 04:15:56.881809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.309  [2024-12-09 04:15:56.881841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.309  [2024-12-09 04:15:56.881860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.568  [2024-12-09 04:15:56.886465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.568  [2024-12-09 04:15:56.886496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.568  [2024-12-09 04:15:56.886517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.568  [2024-12-09 04:15:56.891704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.568  [2024-12-09 04:15:56.891736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.568  [2024-12-09 04:15:56.891754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.568  [2024-12-09 04:15:56.896686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.568  [2024-12-09 04:15:56.896718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.568  [2024-12-09 04:15:56.896738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.568  [2024-12-09 04:15:56.901286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.568  [2024-12-09 04:15:56.901315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.568  [2024-12-09 04:15:56.901334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.568  [2024-12-09 04:15:56.905746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.569  [2024-12-09 04:15:56.905776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.569  [2024-12-09 04:15:56.905800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.569  [2024-12-09 04:15:56.910330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.569  [2024-12-09 04:15:56.910359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.569  [2024-12-09 04:15:56.910383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.569  [2024-12-09 04:15:56.914921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.569  [2024-12-09 04:15:56.914951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.569  [2024-12-09 04:15:56.914984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.569  [2024-12-09 04:15:56.919558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.569  [2024-12-09 04:15:56.919594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.569  [2024-12-09 04:15:56.919619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.569  [2024-12-09 04:15:56.924269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.569  [2024-12-09 04:15:56.924306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.569  [2024-12-09 04:15:56.924346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.569  [2024-12-09 04:15:56.928928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.569  [2024-12-09 04:15:56.928958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.569  [2024-12-09 04:15:56.928976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.569  [2024-12-09 04:15:56.933601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.569  [2024-12-09 04:15:56.933632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.569  [2024-12-09 04:15:56.933649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.569  [2024-12-09 04:15:56.938231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.569  [2024-12-09 04:15:56.938278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.569  [2024-12-09 04:15:56.938297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.569  [2024-12-09 04:15:56.942811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.569  [2024-12-09 04:15:56.942842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.569  [2024-12-09 04:15:56.942874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.569  [2024-12-09 04:15:56.947619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.569  [2024-12-09 04:15:56.947650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.569  [2024-12-09 04:15:56.947688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.569  [2024-12-09 04:15:56.952259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.569  [2024-12-09 04:15:56.952300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.569  [2024-12-09 04:15:56.952327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.569  [2024-12-09 04:15:56.956984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.569  [2024-12-09 04:15:56.957014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.569  [2024-12-09 04:15:56.957032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.569  [2024-12-09 04:15:56.961591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.569  [2024-12-09 04:15:56.961626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.569  [2024-12-09 04:15:56.961645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.569  [2024-12-09 04:15:56.966237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.569  [2024-12-09 04:15:56.966298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.569  [2024-12-09 04:15:56.966318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.569  [2024-12-09 04:15:56.970839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.569  [2024-12-09 04:15:56.970870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.569  [2024-12-09 04:15:56.970889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.569  [2024-12-09 04:15:56.975671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.569  [2024-12-09 04:15:56.975702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.569  [2024-12-09 04:15:56.975721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.569  [2024-12-09 04:15:56.980713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.569  [2024-12-09 04:15:56.980743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.569  [2024-12-09 04:15:56.980762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.569  [2024-12-09 04:15:56.985815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.569  [2024-12-09 04:15:56.985845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.569  [2024-12-09 04:15:56.985868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.569  [2024-12-09 04:15:56.991171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.569  [2024-12-09 04:15:56.991217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.569  [2024-12-09 04:15:56.991235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.569  [2024-12-09 04:15:56.996505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.569  [2024-12-09 04:15:56.996536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.569  [2024-12-09 04:15:56.996570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.569  [2024-12-09 04:15:57.001596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.569  [2024-12-09 04:15:57.001642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.569  [2024-12-09 04:15:57.001661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.569  [2024-12-09 04:15:57.007209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.569  [2024-12-09 04:15:57.007241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.569  [2024-12-09 04:15:57.007264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.569  [2024-12-09 04:15:57.012699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.569  [2024-12-09 04:15:57.012731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.569  [2024-12-09 04:15:57.012749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.569  [2024-12-09 04:15:57.019874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.569  [2024-12-09 04:15:57.019906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.569  [2024-12-09 04:15:57.019925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.569  [2024-12-09 04:15:57.027663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.569  [2024-12-09 04:15:57.027695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.569  [2024-12-09 04:15:57.027713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.569  [2024-12-09 04:15:57.034760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.569  [2024-12-09 04:15:57.034792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.569  [2024-12-09 04:15:57.034809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.569  [2024-12-09 04:15:57.039175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.569  [2024-12-09 04:15:57.039206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.569  [2024-12-09 04:15:57.039225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.570  [2024-12-09 04:15:57.046340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.570  [2024-12-09 04:15:57.046373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.570  [2024-12-09 04:15:57.046396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.570  [2024-12-09 04:15:57.052947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.570  [2024-12-09 04:15:57.052979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.570  [2024-12-09 04:15:57.052997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.570  [2024-12-09 04:15:57.058031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.570  [2024-12-09 04:15:57.058063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.570  [2024-12-09 04:15:57.058093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.570  [2024-12-09 04:15:57.062694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.570  [2024-12-09 04:15:57.062725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.570  [2024-12-09 04:15:57.062743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.570  [2024-12-09 04:15:57.067400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.570  [2024-12-09 04:15:57.067431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.570  [2024-12-09 04:15:57.067449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.570  [2024-12-09 04:15:57.072228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.570  [2024-12-09 04:15:57.072269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.570  [2024-12-09 04:15:57.072296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.570  [2024-12-09 04:15:57.076994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.570  [2024-12-09 04:15:57.077025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.570  [2024-12-09 04:15:57.077052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.570  [2024-12-09 04:15:57.082034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.570  [2024-12-09 04:15:57.082064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.570  [2024-12-09 04:15:57.082085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.570  [2024-12-09 04:15:57.087021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.570  [2024-12-09 04:15:57.087051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.570  [2024-12-09 04:15:57.087068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.570  [2024-12-09 04:15:57.091648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.570  [2024-12-09 04:15:57.091678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.570  [2024-12-09 04:15:57.091698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.570  [2024-12-09 04:15:57.096252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.570  [2024-12-09 04:15:57.096300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.570  [2024-12-09 04:15:57.096318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.570  [2024-12-09 04:15:57.100790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.570  [2024-12-09 04:15:57.100826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.570  [2024-12-09 04:15:57.100844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.570  [2024-12-09 04:15:57.105339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.570  [2024-12-09 04:15:57.105369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.570  [2024-12-09 04:15:57.105388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.570  [2024-12-09 04:15:57.110197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.570  [2024-12-09 04:15:57.110226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.570  [2024-12-09 04:15:57.110246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.570  [2024-12-09 04:15:57.114926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.570  [2024-12-09 04:15:57.114972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.570  [2024-12-09 04:15:57.114989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.570  [2024-12-09 04:15:57.120265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.570  [2024-12-09 04:15:57.120303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.570  [2024-12-09 04:15:57.120329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.570  [2024-12-09 04:15:57.124972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.570  [2024-12-09 04:15:57.125003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.570  [2024-12-09 04:15:57.125021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.570  [2024-12-09 04:15:57.129535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.570  [2024-12-09 04:15:57.129565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.570  [2024-12-09 04:15:57.129583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.570  [2024-12-09 04:15:57.134136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.570  [2024-12-09 04:15:57.134167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.570  [2024-12-09 04:15:57.134185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.570  [2024-12-09 04:15:57.138779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.570  [2024-12-09 04:15:57.138814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.570  [2024-12-09 04:15:57.138831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.828  [2024-12-09 04:15:57.143457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.828  [2024-12-09 04:15:57.143489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.828  [2024-12-09 04:15:57.143507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.828  [2024-12-09 04:15:57.148008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.828  [2024-12-09 04:15:57.148039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.829  [2024-12-09 04:15:57.148057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.829  [2024-12-09 04:15:57.152682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.829  [2024-12-09 04:15:57.152713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.829  [2024-12-09 04:15:57.152730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.829  [2024-12-09 04:15:57.157355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.829  [2024-12-09 04:15:57.157385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.829  [2024-12-09 04:15:57.157403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.829  [2024-12-09 04:15:57.162037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.829  [2024-12-09 04:15:57.162083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.829  [2024-12-09 04:15:57.162100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.829  [2024-12-09 04:15:57.167037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.829  [2024-12-09 04:15:57.167067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.829  [2024-12-09 04:15:57.167084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.829  [2024-12-09 04:15:57.171880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.829  [2024-12-09 04:15:57.171910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.829  [2024-12-09 04:15:57.171931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.829  [2024-12-09 04:15:57.176599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.829  [2024-12-09 04:15:57.176629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.829  [2024-12-09 04:15:57.176648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.829  [2024-12-09 04:15:57.182249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.829  [2024-12-09 04:15:57.182304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.829  [2024-12-09 04:15:57.182331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.829  [2024-12-09 04:15:57.189137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.829  [2024-12-09 04:15:57.189169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.829  [2024-12-09 04:15:57.189188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.829  [2024-12-09 04:15:57.196279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.829  [2024-12-09 04:15:57.196311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.829  [2024-12-09 04:15:57.196330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.829  [2024-12-09 04:15:57.202896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.829  [2024-12-09 04:15:57.202926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.829  [2024-12-09 04:15:57.202944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.829  [2024-12-09 04:15:57.208644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.829  [2024-12-09 04:15:57.208675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.829  [2024-12-09 04:15:57.208698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.829  [2024-12-09 04:15:57.214941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.829  [2024-12-09 04:15:57.214972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.829  [2024-12-09 04:15:57.214992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.829  [2024-12-09 04:15:57.218884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.829  [2024-12-09 04:15:57.218916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.829  [2024-12-09 04:15:57.218934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.829  [2024-12-09 04:15:57.222333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.829  [2024-12-09 04:15:57.222363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.829  [2024-12-09 04:15:57.222395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.829  [2024-12-09 04:15:57.226912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.829  [2024-12-09 04:15:57.226942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.829  [2024-12-09 04:15:57.226958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.829  [2024-12-09 04:15:57.231496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.829  [2024-12-09 04:15:57.231526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.829  [2024-12-09 04:15:57.231544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.829  [2024-12-09 04:15:57.235933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.829  [2024-12-09 04:15:57.235964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.829  [2024-12-09 04:15:57.235984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.829  [2024-12-09 04:15:57.241177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.829  [2024-12-09 04:15:57.241210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.829  [2024-12-09 04:15:57.241228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.829  [2024-12-09 04:15:57.246067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.829  [2024-12-09 04:15:57.246100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.829  [2024-12-09 04:15:57.246119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.829  [2024-12-09 04:15:57.250815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.829  [2024-12-09 04:15:57.250847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.829  [2024-12-09 04:15:57.250865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.829  [2024-12-09 04:15:57.255311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.829  [2024-12-09 04:15:57.255352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.829  [2024-12-09 04:15:57.255369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.829  [2024-12-09 04:15:57.259979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.829  [2024-12-09 04:15:57.260010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.829  [2024-12-09 04:15:57.260026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.829  [2024-12-09 04:15:57.265375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.829  [2024-12-09 04:15:57.265407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.829  [2024-12-09 04:15:57.265424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.829  [2024-12-09 04:15:57.271339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.829  [2024-12-09 04:15:57.271384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.829  [2024-12-09 04:15:57.271409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.829       5777.50 IOPS,   722.19 MiB/s
00:25:28.829  [2024-12-09 04:15:57.279869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.829  [2024-12-09 04:15:57.279902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.829  [2024-12-09 04:15:57.279920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.829  
00:25:28.829                                                                                                  Latency(us)
00:25:28.829  
00:25:28.829   Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:28.829  Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:25:28.829  	 nvme0n1             :       2.05    5657.87     707.23       0.00     0.00    2770.03     703.91   46797.56
00:25:28.829  
00:25:28.829   ===================================================================================================================
00:25:28.830  
00:25:28.830   Total                       :               5657.87     707.23       0.00     0.00    2770.03     703.91   46797.56
00:25:28.830  {
00:25:28.830    "results": [
00:25:28.830      {
00:25:28.830        "job": "nvme0n1",
00:25:28.830        "core_mask": "0x2",
00:25:28.830        "workload": "randread",
00:25:28.830        "status": "finished",
00:25:28.830        "queue_depth": 16,
00:25:28.830        "io_size": 131072,
00:25:28.830        "runtime": 2.045117,
00:25:28.830        "iops": 5657.867007119886,
00:25:28.830        "mibps": 707.2333758899857,
00:25:28.830        "io_failed": 0,
00:25:28.830        "io_timeout": 0,
00:25:28.830        "avg_latency_us": 2770.0314899637347,
00:25:28.830        "min_latency_us": 703.9051851851851,
00:25:28.830        "max_latency_us": 46797.55851851852
00:25:28.830      }
00:25:28.830    ],
00:25:28.830    "core_count": 1
00:25:28.830  }
00:25:28.830    04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:28.830    04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:28.830    04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:28.830  			| .driver_specific
00:25:28.830  			| .nvme_error
00:25:28.830  			| .status_code
00:25:28.830  			| .command_transient_transport_error'
00:25:28.830    04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:25:29.088   04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 374 > 0 ))
00:25:29.088   04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 337707
00:25:29.088   04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 337707 ']'
00:25:29.088   04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 337707
00:25:29.088    04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:25:29.088   04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:29.088    04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 337707
00:25:29.345   04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:25:29.345   04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:25:29.345   04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 337707'
00:25:29.345  killing process with pid 337707
00:25:29.345   04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 337707
00:25:29.345  Received shutdown signal, test time was about 2.000000 seconds
00:25:29.345  
00:25:29.345                                                                                                  Latency(us)
00:25:29.345  
00:25:29.345   Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:29.345  
00:25:29.345   ===================================================================================================================
00:25:29.345  
00:25:29.345   Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:25:29.345   04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 337707
00:25:29.345   04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:25:29.345   04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:25:29.345   04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:25:29.345   04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:25:29.345   04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:25:29.345   04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=338184
00:25:29.345   04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:25:29.345   04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 338184 /var/tmp/bperf.sock
00:25:29.345   04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 338184 ']'
00:25:29.345   04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:25:29.345   04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:29.345   04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:25:29.345  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:25:29.345   04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:29.345   04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:29.603  [2024-12-09 04:15:57.937938] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:25:29.603  [2024-12-09 04:15:57.938026] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid338184 ]
00:25:29.603  [2024-12-09 04:15:58.004620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:29.603  [2024-12-09 04:15:58.059308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:25:29.603   04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:29.603   04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:25:29.603   04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:29.603   04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:30.166   04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:25:30.166   04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.166   04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:30.166   04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.166   04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:30.166   04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:30.423  nvme0n1
00:25:30.423   04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:25:30.423   04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.423   04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:30.423   04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.423   04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:25:30.423   04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:30.680  Running I/O for 2 seconds...
00:25:30.680  [2024-12-09 04:15:59.068289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016eefae0
00:25:30.680  [2024-12-09 04:15:59.069707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:30.680  [2024-12-09 04:15:59.069750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:25:30.680  [2024-12-09 04:15:59.082815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ee2c28
00:25:30.680  [2024-12-09 04:15:59.084731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:25388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:30.680  [2024-12-09 04:15:59.084763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:25:30.680  [2024-12-09 04:15:59.091191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016eeb328
00:25:30.680  [2024-12-09 04:15:59.092310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:30.680  [2024-12-09 04:15:59.092341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:25:30.680  [2024-12-09 04:15:59.105752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016eff3c8
00:25:30.680  [2024-12-09 04:15:59.107392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:30.680  [2024-12-09 04:15:59.107425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:25:30.680  [2024-12-09 04:15:59.114284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:30.680  [2024-12-09 04:15:59.115083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:30.680  [2024-12-09 04:15:59.115116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:25:30.680  [2024-12-09 04:15:59.126413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ee8088
00:25:30.680  [2024-12-09 04:15:59.127206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:30.680  [2024-12-09 04:15:59.127237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:25:30.680  [2024-12-09 04:15:59.140888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016eefae0
00:25:30.680  [2024-12-09 04:15:59.142488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:30.680  [2024-12-09 04:15:59.142537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:25:30.680  [2024-12-09 04:15:59.152166] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef0bc0
00:25:30.680  [2024-12-09 04:15:59.153523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:30.680  [2024-12-09 04:15:59.153554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:25:30.680  [2024-12-09 04:15:59.164492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016edece0
00:25:30.680  [2024-12-09 04:15:59.166098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:30.680  [2024-12-09 04:15:59.166147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:25:30.680  [2024-12-09 04:15:59.175337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016efd640
00:25:30.680  [2024-12-09 04:15:59.177206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:30.680  [2024-12-09 04:15:59.177236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:25:30.680  [2024-12-09 04:15:59.187795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ee0ea0
00:25:30.680  [2024-12-09 04:15:59.188851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:30.680  [2024-12-09 04:15:59.188882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:25:30.680  [2024-12-09 04:15:59.200062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016eeee38
00:25:30.680  [2024-12-09 04:15:59.201541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:30.680  [2024-12-09 04:15:59.201582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:25:30.680  [2024-12-09 04:15:59.211964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ee1b48
00:25:30.680  [2024-12-09 04:15:59.213108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:14927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:30.680  [2024-12-09 04:15:59.213141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:25:30.680  [2024-12-09 04:15:59.222673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ee5658
00:25:30.680  [2024-12-09 04:15:59.224034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:30.680  [2024-12-09 04:15:59.224064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:25:30.680  [2024-12-09 04:15:59.235516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:30.680  [2024-12-09 04:15:59.235868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:30.680  [2024-12-09 04:15:59.235900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:30.680  [2024-12-09 04:15:59.249807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:30.680  [2024-12-09 04:15:59.250136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:30.680  [2024-12-09 04:15:59.250165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:30.937  [2024-12-09 04:15:59.263430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:30.937  [2024-12-09 04:15:59.263683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:30.937  [2024-12-09 04:15:59.263730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:30.937  [2024-12-09 04:15:59.277646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:30.937  [2024-12-09 04:15:59.277909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:30.937  [2024-12-09 04:15:59.277938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:30.937  [2024-12-09 04:15:59.291967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:30.937  [2024-12-09 04:15:59.292239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:30.937  [2024-12-09 04:15:59.292293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:30.937  [2024-12-09 04:15:59.306263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:30.937  [2024-12-09 04:15:59.306523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:30.937  [2024-12-09 04:15:59.306556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:30.937  [2024-12-09 04:15:59.320547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:30.937  [2024-12-09 04:15:59.320895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:30.937  [2024-12-09 04:15:59.320941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:30.937  [2024-12-09 04:15:59.334550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:30.937  [2024-12-09 04:15:59.334888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:30.937  [2024-12-09 04:15:59.334935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:30.937  [2024-12-09 04:15:59.348838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:30.937  [2024-12-09 04:15:59.349095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:30.937  [2024-12-09 04:15:59.349141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:30.938  [2024-12-09 04:15:59.363090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:30.938  [2024-12-09 04:15:59.363393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:30.938  [2024-12-09 04:15:59.363442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:30.938  [2024-12-09 04:15:59.377404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:30.938  [2024-12-09 04:15:59.377668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:30.938  [2024-12-09 04:15:59.377716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:30.938  [2024-12-09 04:15:59.391620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:30.938  [2024-12-09 04:15:59.391881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:30.938  [2024-12-09 04:15:59.391929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:30.938  [2024-12-09 04:15:59.405713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:30.938  [2024-12-09 04:15:59.406052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:30.938  [2024-12-09 04:15:59.406083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:30.938  [2024-12-09 04:15:59.419864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:30.938  [2024-12-09 04:15:59.420198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:30.938  [2024-12-09 04:15:59.420245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:30.938  [2024-12-09 04:15:59.434065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:30.938  [2024-12-09 04:15:59.434398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:30.938  [2024-12-09 04:15:59.434449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:30.938  [2024-12-09 04:15:59.448265] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:30.938  [2024-12-09 04:15:59.448620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:30.938  [2024-12-09 04:15:59.448650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:30.938  [2024-12-09 04:15:59.462393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:30.938  [2024-12-09 04:15:59.462652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:30.938  [2024-12-09 04:15:59.462704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:30.938  [2024-12-09 04:15:59.476502] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:30.938  [2024-12-09 04:15:59.476809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:30.938  [2024-12-09 04:15:59.476841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:30.938  [2024-12-09 04:15:59.490725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:30.938  [2024-12-09 04:15:59.490992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:30.938  [2024-12-09 04:15:59.491042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:30.938  [2024-12-09 04:15:59.504926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:30.938  [2024-12-09 04:15:59.505209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:30.938  [2024-12-09 04:15:59.505257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.194  [2024-12-09 04:15:59.518511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.194  [2024-12-09 04:15:59.518763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.194  [2024-12-09 04:15:59.518795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.194  [2024-12-09 04:15:59.532030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.194  [2024-12-09 04:15:59.532285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.194  [2024-12-09 04:15:59.532332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.194  [2024-12-09 04:15:59.546165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.194  [2024-12-09 04:15:59.546438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.194  [2024-12-09 04:15:59.546487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.194  [2024-12-09 04:15:59.560383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.194  [2024-12-09 04:15:59.560648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.194  [2024-12-09 04:15:59.560694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.194  [2024-12-09 04:15:59.574557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.194  [2024-12-09 04:15:59.574893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.194  [2024-12-09 04:15:59.574939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.194  [2024-12-09 04:15:59.588756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.194  [2024-12-09 04:15:59.589015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.194  [2024-12-09 04:15:59.589060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.194  [2024-12-09 04:15:59.603038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.194  [2024-12-09 04:15:59.603389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.194  [2024-12-09 04:15:59.603420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.194  [2024-12-09 04:15:59.617320] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.194  [2024-12-09 04:15:59.617642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.194  [2024-12-09 04:15:59.617673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.194  [2024-12-09 04:15:59.631570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.194  [2024-12-09 04:15:59.631907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.194  [2024-12-09 04:15:59.631939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.194  [2024-12-09 04:15:59.645766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.194  [2024-12-09 04:15:59.646041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.194  [2024-12-09 04:15:59.646087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.194  [2024-12-09 04:15:59.659926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.194  [2024-12-09 04:15:59.660238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.194  [2024-12-09 04:15:59.660268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.194  [2024-12-09 04:15:59.674024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.194  [2024-12-09 04:15:59.674348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.194  [2024-12-09 04:15:59.674379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.194  [2024-12-09 04:15:59.688228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.194  [2024-12-09 04:15:59.688503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.194  [2024-12-09 04:15:59.688550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.194  [2024-12-09 04:15:59.702499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.194  [2024-12-09 04:15:59.702785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.194  [2024-12-09 04:15:59.702830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.194  [2024-12-09 04:15:59.716628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.194  [2024-12-09 04:15:59.716996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.195  [2024-12-09 04:15:59.717025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.195  [2024-12-09 04:15:59.730884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.195  [2024-12-09 04:15:59.731233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.195  [2024-12-09 04:15:59.731265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.195  [2024-12-09 04:15:59.745041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.195  [2024-12-09 04:15:59.745309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.195  [2024-12-09 04:15:59.745354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.195  [2024-12-09 04:15:59.759205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.195  [2024-12-09 04:15:59.759580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.195  [2024-12-09 04:15:59.759611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.450  [2024-12-09 04:15:59.772981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.450  [2024-12-09 04:15:59.773313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.450  [2024-12-09 04:15:59.773341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.450  [2024-12-09 04:15:59.786855] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.450  [2024-12-09 04:15:59.787115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.450  [2024-12-09 04:15:59.787161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.450  [2024-12-09 04:15:59.800936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.450  [2024-12-09 04:15:59.801204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.450  [2024-12-09 04:15:59.801247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.450  [2024-12-09 04:15:59.815046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.450  [2024-12-09 04:15:59.815335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.450  [2024-12-09 04:15:59.815379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.450  [2024-12-09 04:15:59.829172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.450  [2024-12-09 04:15:59.829423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.450  [2024-12-09 04:15:59.829467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.450  [2024-12-09 04:15:59.843123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.450  [2024-12-09 04:15:59.843404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.450  [2024-12-09 04:15:59.843443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.450  [2024-12-09 04:15:59.857108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.450  [2024-12-09 04:15:59.857467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.450  [2024-12-09 04:15:59.857501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.450  [2024-12-09 04:15:59.871352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.450  [2024-12-09 04:15:59.871621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.450  [2024-12-09 04:15:59.871668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.450  [2024-12-09 04:15:59.885431] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.450  [2024-12-09 04:15:59.885770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.450  [2024-12-09 04:15:59.885800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.450  [2024-12-09 04:15:59.899704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.450  [2024-12-09 04:15:59.899989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:18999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.450  [2024-12-09 04:15:59.900035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.450  [2024-12-09 04:15:59.913937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.450  [2024-12-09 04:15:59.914221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.450  [2024-12-09 04:15:59.914267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.450  [2024-12-09 04:15:59.928016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.450  [2024-12-09 04:15:59.928360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.450  [2024-12-09 04:15:59.928389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.450  [2024-12-09 04:15:59.942244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.451  [2024-12-09 04:15:59.942572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.451  [2024-12-09 04:15:59.942619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.451  [2024-12-09 04:15:59.956437] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.451  [2024-12-09 04:15:59.956695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.451  [2024-12-09 04:15:59.956743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.451  [2024-12-09 04:15:59.970627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.451  [2024-12-09 04:15:59.970910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.451  [2024-12-09 04:15:59.970956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.451  [2024-12-09 04:15:59.984812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.451  [2024-12-09 04:15:59.985114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.451  [2024-12-09 04:15:59.985166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.451  [2024-12-09 04:15:59.998928] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.451  [2024-12-09 04:15:59.999213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:25249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.451  [2024-12-09 04:15:59.999260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.451  [2024-12-09 04:16:00.012948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.451  [2024-12-09 04:16:00.013190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.451  [2024-12-09 04:16:00.013230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.708  [2024-12-09 04:16:00.026548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.708  [2024-12-09 04:16:00.026755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.708  [2024-12-09 04:16:00.026786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.708  [2024-12-09 04:16:00.040092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.708  [2024-12-09 04:16:00.040314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.708  [2024-12-09 04:16:00.040344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.708      18549.00 IOPS,    72.46 MiB/s
00:25:31.708  [2024-12-09 04:16:00.054411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.708  [2024-12-09 04:16:00.054857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.708  [2024-12-09 04:16:00.054886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.708  [2024-12-09 04:16:00.069518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.708  [2024-12-09 04:16:00.069744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.708  [2024-12-09 04:16:00.069789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.708  [2024-12-09 04:16:00.083205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.708  [2024-12-09 04:16:00.083452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.708  [2024-12-09 04:16:00.083483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.708  [2024-12-09 04:16:00.097179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.708  [2024-12-09 04:16:00.097436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.708  [2024-12-09 04:16:00.097480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.708  [2024-12-09 04:16:00.111277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.708  [2024-12-09 04:16:00.111537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.708  [2024-12-09 04:16:00.111566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.708  [2024-12-09 04:16:00.125451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.708  [2024-12-09 04:16:00.125682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.708  [2024-12-09 04:16:00.125724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.708  [2024-12-09 04:16:00.139232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.708  [2024-12-09 04:16:00.139461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:18197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.708  [2024-12-09 04:16:00.139491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.708  [2024-12-09 04:16:00.153056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.708  [2024-12-09 04:16:00.153283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.708  [2024-12-09 04:16:00.153330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.708  [2024-12-09 04:16:00.166888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.708  [2024-12-09 04:16:00.167132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.708  [2024-12-09 04:16:00.167161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.708  [2024-12-09 04:16:00.180391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.708  [2024-12-09 04:16:00.180623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.708  [2024-12-09 04:16:00.180657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.708  [2024-12-09 04:16:00.193848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.708  [2024-12-09 04:16:00.194089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.708  [2024-12-09 04:16:00.194117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.708  [2024-12-09 04:16:00.207731] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.708  [2024-12-09 04:16:00.207953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.708  [2024-12-09 04:16:00.208001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.708  [2024-12-09 04:16:00.221559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.708  [2024-12-09 04:16:00.221816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:8176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.708  [2024-12-09 04:16:00.221850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.708  [2024-12-09 04:16:00.235405] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.708  [2024-12-09 04:16:00.235657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.708  [2024-12-09 04:16:00.235685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.708  [2024-12-09 04:16:00.249618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.708  [2024-12-09 04:16:00.249849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.708  [2024-12-09 04:16:00.249877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.708  [2024-12-09 04:16:00.263481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.708  [2024-12-09 04:16:00.263709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:25211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.708  [2024-12-09 04:16:00.263752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.708  [2024-12-09 04:16:00.277544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.708  [2024-12-09 04:16:00.277808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.708  [2024-12-09 04:16:00.277836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.966  [2024-12-09 04:16:00.290705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.966  [2024-12-09 04:16:00.290899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.966  [2024-12-09 04:16:00.290928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.966  [2024-12-09 04:16:00.303934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.966  [2024-12-09 04:16:00.304171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.966  [2024-12-09 04:16:00.304198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.966  [2024-12-09 04:16:00.317873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.966  [2024-12-09 04:16:00.318099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.966  [2024-12-09 04:16:00.318142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.966  [2024-12-09 04:16:00.331799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.966  [2024-12-09 04:16:00.332024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.966  [2024-12-09 04:16:00.332070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.966  [2024-12-09 04:16:00.345650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.966  [2024-12-09 04:16:00.345892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.966  [2024-12-09 04:16:00.345925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.966  [2024-12-09 04:16:00.359561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.966  [2024-12-09 04:16:00.359780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.966  [2024-12-09 04:16:00.359822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.966  [2024-12-09 04:16:00.373366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.966  [2024-12-09 04:16:00.373593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.966  [2024-12-09 04:16:00.373620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.966  [2024-12-09 04:16:00.387201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.966  [2024-12-09 04:16:00.387473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.966  [2024-12-09 04:16:00.387503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.966  [2024-12-09 04:16:00.400861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.966  [2024-12-09 04:16:00.401089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.966  [2024-12-09 04:16:00.401133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.966  [2024-12-09 04:16:00.414778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.966  [2024-12-09 04:16:00.415004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.966  [2024-12-09 04:16:00.415045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.966  [2024-12-09 04:16:00.428735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.966  [2024-12-09 04:16:00.428952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:18680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.966  [2024-12-09 04:16:00.428981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.966  [2024-12-09 04:16:00.442545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.966  [2024-12-09 04:16:00.442785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.966  [2024-12-09 04:16:00.442826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.966  [2024-12-09 04:16:00.456335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.966  [2024-12-09 04:16:00.456536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.966  [2024-12-09 04:16:00.456565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.966  [2024-12-09 04:16:00.470139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.966  [2024-12-09 04:16:00.470393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.966  [2024-12-09 04:16:00.470422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.966  [2024-12-09 04:16:00.483822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.966  [2024-12-09 04:16:00.484023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.966  [2024-12-09 04:16:00.484052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.966  [2024-12-09 04:16:00.497594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.966  [2024-12-09 04:16:00.497830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.966  [2024-12-09 04:16:00.497858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.966  [2024-12-09 04:16:00.511433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.966  [2024-12-09 04:16:00.511648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.966  [2024-12-09 04:16:00.511675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.966  [2024-12-09 04:16:00.525209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.966  [2024-12-09 04:16:00.525422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.966  [2024-12-09 04:16:00.525451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:31.966  [2024-12-09 04:16:00.538938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:31.966  [2024-12-09 04:16:00.539174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.966  [2024-12-09 04:16:00.539201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:32.224  [2024-12-09 04:16:00.552235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:32.224  [2024-12-09 04:16:00.552473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.224  [2024-12-09 04:16:00.552501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:32.224  [2024-12-09 04:16:00.565735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:32.224  [2024-12-09 04:16:00.565936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.224  [2024-12-09 04:16:00.565965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:32.224  [2024-12-09 04:16:00.579605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:32.224  [2024-12-09 04:16:00.579827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.224  [2024-12-09 04:16:00.579856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:32.224  [2024-12-09 04:16:00.593522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:32.224  [2024-12-09 04:16:00.593763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.224  [2024-12-09 04:16:00.593790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:32.224  [2024-12-09 04:16:00.607637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:32.224  [2024-12-09 04:16:00.607861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.224  [2024-12-09 04:16:00.607902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:32.224  [2024-12-09 04:16:00.621551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:32.224  [2024-12-09 04:16:00.621814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.224  [2024-12-09 04:16:00.621841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:32.224  [2024-12-09 04:16:00.635705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:32.224  [2024-12-09 04:16:00.635941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.224  [2024-12-09 04:16:00.635968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:32.224  [2024-12-09 04:16:00.649788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:32.224  [2024-12-09 04:16:00.650014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.224  [2024-12-09 04:16:00.650042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:32.224  [2024-12-09 04:16:00.663917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:32.224  [2024-12-09 04:16:00.664137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.224  [2024-12-09 04:16:00.664178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:32.224  [2024-12-09 04:16:00.678140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:32.224  [2024-12-09 04:16:00.678388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.224  [2024-12-09 04:16:00.678430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:32.224  [2024-12-09 04:16:00.692399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:32.224  [2024-12-09 04:16:00.692622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.225  [2024-12-09 04:16:00.692650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:32.225  [2024-12-09 04:16:00.706235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:32.225  [2024-12-09 04:16:00.706503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.225  [2024-12-09 04:16:00.706536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:32.225  [2024-12-09 04:16:00.720396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:32.225  [2024-12-09 04:16:00.720625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.225  [2024-12-09 04:16:00.720651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:32.225  [2024-12-09 04:16:00.734458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:32.225  [2024-12-09 04:16:00.734699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.225  [2024-12-09 04:16:00.734740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:32.225  [2024-12-09 04:16:00.748673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:32.225  [2024-12-09 04:16:00.748893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.225  [2024-12-09 04:16:00.748920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:32.225  [2024-12-09 04:16:00.762690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:32.225  [2024-12-09 04:16:00.762908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.225  [2024-12-09 04:16:00.762949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:32.225  [2024-12-09 04:16:00.776783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:32.225  [2024-12-09 04:16:00.777002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.225  [2024-12-09 04:16:00.777048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:32.225  [2024-12-09 04:16:00.790912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:32.225  [2024-12-09 04:16:00.791122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.225  [2024-12-09 04:16:00.791149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:32.482  [2024-12-09 04:16:00.804486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:32.482  [2024-12-09 04:16:00.804707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.482  [2024-12-09 04:16:00.804753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:32.483  [2024-12-09 04:16:00.818609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:32.483  [2024-12-09 04:16:00.818836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.483  [2024-12-09 04:16:00.818863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:32.483  [2024-12-09 04:16:00.832779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:32.483  [2024-12-09 04:16:00.833007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.483  [2024-12-09 04:16:00.833054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:32.483  [2024-12-09 04:16:00.847004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:32.483  [2024-12-09 04:16:00.847228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.483  [2024-12-09 04:16:00.847277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:32.483  [2024-12-09 04:16:00.861204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:32.483  [2024-12-09 04:16:00.861454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.483  [2024-12-09 04:16:00.861502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:32.483  [2024-12-09 04:16:00.874904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:32.483  [2024-12-09 04:16:00.875134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.483  [2024-12-09 04:16:00.875173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:32.483  [2024-12-09 04:16:00.888868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:32.483  [2024-12-09 04:16:00.889082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.483  [2024-12-09 04:16:00.889110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:32.483  [2024-12-09 04:16:00.902812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:32.483  [2024-12-09 04:16:00.903042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:15801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.483  [2024-12-09 04:16:00.903082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:32.483  [2024-12-09 04:16:00.916934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:32.483  [2024-12-09 04:16:00.917166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.483  [2024-12-09 04:16:00.917211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:32.483  [2024-12-09 04:16:00.930931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:32.483  [2024-12-09 04:16:00.931177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.483  [2024-12-09 04:16:00.931221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:32.483  [2024-12-09 04:16:00.945106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:32.483  [2024-12-09 04:16:00.945360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.483  [2024-12-09 04:16:00.945404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:32.483  [2024-12-09 04:16:00.959197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:32.483  [2024-12-09 04:16:00.959451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.483  [2024-12-09 04:16:00.959496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:32.483  [2024-12-09 04:16:00.973381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:32.483  [2024-12-09 04:16:00.973602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.483  [2024-12-09 04:16:00.973644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:32.483  [2024-12-09 04:16:00.987697] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:32.483  [2024-12-09 04:16:00.987921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.483  [2024-12-09 04:16:00.987949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:32.483  [2024-12-09 04:16:01.001840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:32.483  [2024-12-09 04:16:01.002062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.483  [2024-12-09 04:16:01.002106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:32.483  [2024-12-09 04:16:01.015911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:32.483  [2024-12-09 04:16:01.016153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.483  [2024-12-09 04:16:01.016194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:32.483  [2024-12-09 04:16:01.030074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:32.483  [2024-12-09 04:16:01.030304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.483  [2024-12-09 04:16:01.030333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:32.483  [2024-12-09 04:16:01.044243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:32.483  [2024-12-09 04:16:01.044480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.483  [2024-12-09 04:16:01.044508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:32.483      18438.50 IOPS,    72.03 MiB/s
00:25:32.741  [2024-12-09 04:16:01.058214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8
00:25:32.741  [2024-12-09 04:16:01.058420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.741  [2024-12-09 04:16:01.058449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:32.741  
00:25:32.741                                                                                                  Latency(us)
00:25:32.741  
00:25:32.741  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:32.741  Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:25:32.741  	 nvme0n1             :       2.01   18442.83      72.04       0.00     0.00    6924.60    2791.35   14951.92
00:25:32.741  
00:25:32.741  ===================================================================================================================
00:25:32.741  
00:25:32.741  Total                       :              18442.83      72.04       0.00     0.00    6924.60    2791.35   14951.92
00:25:32.741  {
00:25:32.741    "results": [
00:25:32.741      {
00:25:32.741        "job": "nvme0n1",
00:25:32.741        "core_mask": "0x2",
00:25:32.741        "workload": "randwrite",
00:25:32.741        "status": "finished",
00:25:32.741        "queue_depth": 128,
00:25:32.741        "io_size": 4096,
00:25:32.741        "runtime": 2.006471,
00:25:32.741        "iops": 18442.828229264214,
00:25:32.741        "mibps": 72.04229777056334,
00:25:32.741        "io_failed": 0,
00:25:32.741        "io_timeout": 0,
00:25:32.741        "avg_latency_us": 6924.598427880117,
00:25:32.741        "min_latency_us": 2791.348148148148,
00:25:32.741        "max_latency_us": 14951.917037037038
00:25:32.741      }
00:25:32.741    ],
00:25:32.741    "core_count": 1
00:25:32.741  }
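The bdevperf JSON above reports both raw counters and derived throughput; the MiB/s figure follows directly from iops times io_size. A minimal cross-check sketch, using only the values printed in the "results" block above:

```python
# Cross-check bdevperf's derived throughput against its raw counters.
# All three values are copied from the "results" JSON printed above.
io_size = 4096            # bytes per I/O ("io_size")
runtime = 2.006471        # seconds ("runtime")
iops = 18442.828229264214 # completions per second ("iops")

# I/Os per second * bytes per I/O, converted to MiB/s (1 MiB = 1024*1024 bytes).
mibps = iops * io_size / (1024 * 1024)

# Approximate total I/O count over the run, for scale.
total_ios = iops * runtime

print(f"{mibps:.2f} MiB/s over ~{total_ios:.0f} I/Os")
```

The computed value matches the reported `"mibps": 72.04229777056334` field, confirming the derivation.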
00:25:32.741    04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:32.741    04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:32.741    04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:25:32.741    04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:32.741  			| .driver_specific
00:25:32.741  			| .nvme_error
00:25:32.741  			| .status_code
00:25:32.741  			| .command_transient_transport_error'
00:25:32.999   04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 145 > 0 ))
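The jq filter above pulls the transient-transport-error counter out of the `bdev_get_iostat` response so `digest.sh@71` can assert it is nonzero. The same extraction can be sketched in Python; the sample JSON shape here is an assumption reconstructed from the jq path, not captured output from this run:

```python
import json

# Hypothetical bdev_get_iostat response, shaped to match the jq path used above:
# .bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error
sample = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 145
          }
        }
      }
    }
  ]
}
""")

# Walk the same path the jq filter walks.
count = (sample["bdevs"][0]["driver_specific"]
               ["nvme_error"]["status_code"]
               ["command_transient_transport_error"])

# Mirrors the shell check `(( 145 > 0 ))` on the line above.
assert count > 0
print(count)
```

The value 145 is taken from the `(( 145 > 0 ))` trace line; every other field name besides the jq path components is illustrative.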
00:25:32.999   04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 338184
00:25:32.999   04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 338184 ']'
00:25:32.999   04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 338184
00:25:32.999    04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:25:32.999   04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:32.999    04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 338184
00:25:32.999   04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:25:32.999   04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:25:32.999   04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 338184'
00:25:32.999  killing process with pid 338184
00:25:32.999   04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 338184
00:25:32.999  Received shutdown signal, test time was about 2.000000 seconds
00:25:32.999  
00:25:32.999                                                                                                  Latency(us)
00:25:32.999  
00:25:32.999  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:32.999  
00:25:32.999  ===================================================================================================================
00:25:32.999  
00:25:32.999  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:25:32.999   04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 338184
00:25:33.257   04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:25:33.257   04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:25:33.257   04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:25:33.257   04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:25:33.257   04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:25:33.257   04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=338599
00:25:33.257   04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:25:33.257   04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 338599 /var/tmp/bperf.sock
00:25:33.257   04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 338599 ']'
00:25:33.257   04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:25:33.257   04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:33.257   04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:25:33.257  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:25:33.257   04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:33.257   04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:33.257  [2024-12-09 04:16:01.660861] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:25:33.257  [2024-12-09 04:16:01.660936] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid338599 ]
00:25:33.257  I/O size of 131072 is greater than zero copy threshold (65536).
00:25:33.257  Zero copy mechanism will not be used.
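The two notices above reflect bdevperf's zero-copy gate: an I/O larger than the threshold (here 131072 > 65536) falls back to the copy path. A minimal sketch of that comparison, with the threshold and I/O size taken from the log; the function name is illustrative, not SPDK's, and the behavior at exactly the threshold is an assumption:

```python
ZERO_COPY_THRESHOLD = 65536  # bytes, from the notice above

def uses_zero_copy(io_size: int) -> bool:
    """Mirror the logged check: I/Os strictly greater than the
    threshold skip the zero-copy mechanism (boundary case assumed)."""
    return io_size <= ZERO_COPY_THRESHOLD

# The 131072-byte writes in this run exceed the threshold, so the
# copy path is used, matching the notice above.
print(uses_zero_copy(131072))
```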
00:25:33.257  [2024-12-09 04:16:01.726668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:33.257  [2024-12-09 04:16:01.781598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:25:33.515   04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:33.515   04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:25:33.515   04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:33.515   04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:33.773   04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:25:33.773   04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:33.773   04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:33.773   04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:33.773   04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:33.773   04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:34.031  nvme0n1
00:25:34.031   04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:25:34.031   04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:34.031   04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:34.031   04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:34.031   04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:25:34.031   04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:34.290  I/O size of 131072 is greater than zero copy threshold (65536).
00:25:34.290  Zero copy mechanism will not be used.
00:25:34.290  Running I/O for 2 seconds...
00:25:34.290  [2024-12-09 04:16:02.645498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.290  [2024-12-09 04:16:02.645617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.290  [2024-12-09 04:16:02.645669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:34.290  [2024-12-09 04:16:02.653145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.290  [2024-12-09 04:16:02.653267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.290  [2024-12-09 04:16:02.653307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:34.290  [2024-12-09 04:16:02.660763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.290  [2024-12-09 04:16:02.660909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.290  [2024-12-09 04:16:02.660955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:34.290  [2024-12-09 04:16:02.667470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.290  [2024-12-09 04:16:02.667616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.290  [2024-12-09 04:16:02.667644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:34.290  [2024-12-09 04:16:02.674215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.290  [2024-12-09 04:16:02.674351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.290  [2024-12-09 04:16:02.674381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:34.290  [2024-12-09 04:16:02.680182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.290  [2024-12-09 04:16:02.680296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.290  [2024-12-09 04:16:02.680336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:34.290  [2024-12-09 04:16:02.686060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.290  [2024-12-09 04:16:02.686203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.290  [2024-12-09 04:16:02.686247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:34.290  [2024-12-09 04:16:02.692114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.290  [2024-12-09 04:16:02.692221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.290  [2024-12-09 04:16:02.692251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:34.290  [2024-12-09 04:16:02.697964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.290  [2024-12-09 04:16:02.698103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.290  [2024-12-09 04:16:02.698133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:34.290  [2024-12-09 04:16:02.704240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.290  [2024-12-09 04:16:02.704400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.290  [2024-12-09 04:16:02.704429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:34.290  [2024-12-09 04:16:02.711031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.290  [2024-12-09 04:16:02.711153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.290  [2024-12-09 04:16:02.711183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:34.290  [2024-12-09 04:16:02.717524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.290  [2024-12-09 04:16:02.717664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.290  [2024-12-09 04:16:02.717694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:34.290  [2024-12-09 04:16:02.723626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.290  [2024-12-09 04:16:02.723830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.290  [2024-12-09 04:16:02.723859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:34.290  [2024-12-09 04:16:02.729961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.290  [2024-12-09 04:16:02.730127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.290  [2024-12-09 04:16:02.730156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:34.290  [2024-12-09 04:16:02.736284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.290  [2024-12-09 04:16:02.736446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.290  [2024-12-09 04:16:02.736476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:34.290  [2024-12-09 04:16:02.742721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.290  [2024-12-09 04:16:02.742906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.290  [2024-12-09 04:16:02.742949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:34.290  [2024-12-09 04:16:02.749798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.290  [2024-12-09 04:16:02.749895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.290  [2024-12-09 04:16:02.749950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:34.290  [2024-12-09 04:16:02.757522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.290  [2024-12-09 04:16:02.757698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.290  [2024-12-09 04:16:02.757726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:34.290  [2024-12-09 04:16:02.764829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.291  [2024-12-09 04:16:02.764945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.291  [2024-12-09 04:16:02.764976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:34.291  [2024-12-09 04:16:02.771307] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.291  [2024-12-09 04:16:02.771419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.291  [2024-12-09 04:16:02.771448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:34.291  [2024-12-09 04:16:02.777429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.291  [2024-12-09 04:16:02.777574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.291  [2024-12-09 04:16:02.777604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:34.291  [2024-12-09 04:16:02.783481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.291  [2024-12-09 04:16:02.783634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.291  [2024-12-09 04:16:02.783681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:34.291  [2024-12-09 04:16:02.789619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.291  [2024-12-09 04:16:02.789812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.291  [2024-12-09 04:16:02.789841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:34.291  [2024-12-09 04:16:02.796416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.291  [2024-12-09 04:16:02.796625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.291  [2024-12-09 04:16:02.796656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:34.291  [2024-12-09 04:16:02.803113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.291  [2024-12-09 04:16:02.803348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.291  [2024-12-09 04:16:02.803378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:34.291  [2024-12-09 04:16:02.810593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.291  [2024-12-09 04:16:02.810721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.291  [2024-12-09 04:16:02.810751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:34.291  [2024-12-09 04:16:02.817874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.291  [2024-12-09 04:16:02.817963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.291  [2024-12-09 04:16:02.817990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:34.291  [2024-12-09 04:16:02.824444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.291  [2024-12-09 04:16:02.824569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.291  [2024-12-09 04:16:02.824598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:34.291  [2024-12-09 04:16:02.830383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.291  [2024-12-09 04:16:02.830503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.291  [2024-12-09 04:16:02.830533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:34.291  [2024-12-09 04:16:02.836416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.291  [2024-12-09 04:16:02.836537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.291  [2024-12-09 04:16:02.836565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:34.291  [2024-12-09 04:16:02.842328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.291  [2024-12-09 04:16:02.842475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.291  [2024-12-09 04:16:02.842505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:34.291  [2024-12-09 04:16:02.849102] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.291  [2024-12-09 04:16:02.849326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.291  [2024-12-09 04:16:02.849356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:34.291  [2024-12-09 04:16:02.856564] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.291  [2024-12-09 04:16:02.856707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.291  [2024-12-09 04:16:02.856737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:34.291  [2024-12-09 04:16:02.862594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.291  [2024-12-09 04:16:02.862714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.291  [2024-12-09 04:16:02.862744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:34.550  [2024-12-09 04:16:02.868597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.550  [2024-12-09 04:16:02.868751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.550  [2024-12-09 04:16:02.868781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:34.550  [2024-12-09 04:16:02.874622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.550  [2024-12-09 04:16:02.874764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.550  [2024-12-09 04:16:02.874793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:34.550  [2024-12-09 04:16:02.880810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.550  [2024-12-09 04:16:02.880936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.550  [2024-12-09 04:16:02.880981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:34.550  [2024-12-09 04:16:02.887781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.550  [2024-12-09 04:16:02.887990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.550  [2024-12-09 04:16:02.888018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:34.550  [2024-12-09 04:16:02.895106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.550  [2024-12-09 04:16:02.895263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.550  [2024-12-09 04:16:02.895326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:34.550  [2024-12-09 04:16:02.902232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.550  [2024-12-09 04:16:02.902375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.550  [2024-12-09 04:16:02.902423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:34.550  [2024-12-09 04:16:02.909347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.550  [2024-12-09 04:16:02.909439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.550  [2024-12-09 04:16:02.909468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:34.550  [2024-12-09 04:16:02.916418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.550  [2024-12-09 04:16:02.916510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.550  [2024-12-09 04:16:02.916539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:34.550  [2024-12-09 04:16:02.923317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.550  [2024-12-09 04:16:02.923428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.550  [2024-12-09 04:16:02.923469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:34.550  [2024-12-09 04:16:02.930253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.550  [2024-12-09 04:16:02.930592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.550  [2024-12-09 04:16:02.930619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:34.550  [2024-12-09 04:16:02.937893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.550  [2024-12-09 04:16:02.938018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.550  [2024-12-09 04:16:02.938046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:34.550  [2024-12-09 04:16:02.945412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.550  [2024-12-09 04:16:02.945598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.550  [2024-12-09 04:16:02.945641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:34.550  [2024-12-09 04:16:02.952247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.550  [2024-12-09 04:16:02.952410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.550  [2024-12-09 04:16:02.952439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:34.550  [2024-12-09 04:16:02.958166] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.550  [2024-12-09 04:16:02.958293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.550  [2024-12-09 04:16:02.958322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:34.550  [2024-12-09 04:16:02.964081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.550  [2024-12-09 04:16:02.964215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.550  [2024-12-09 04:16:02.964243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:34.550  [2024-12-09 04:16:02.970131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.550  [2024-12-09 04:16:02.970242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.550  [2024-12-09 04:16:02.970297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:34.550  [2024-12-09 04:16:02.976042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.550  [2024-12-09 04:16:02.976186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.550  [2024-12-09 04:16:02.976214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:34.550  [2024-12-09 04:16:02.982198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.550  [2024-12-09 04:16:02.982401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.550  [2024-12-09 04:16:02.982432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:34.550  [2024-12-09 04:16:02.988628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.550  [2024-12-09 04:16:02.988803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.550  [2024-12-09 04:16:02.988833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:34.550  [2024-12-09 04:16:02.995077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.550  [2024-12-09 04:16:02.995217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.551  [2024-12-09 04:16:02.995245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:34.551  [2024-12-09 04:16:03.001460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.551  [2024-12-09 04:16:03.001637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.551  [2024-12-09 04:16:03.001665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:34.551  [2024-12-09 04:16:03.007965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.551  [2024-12-09 04:16:03.008136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.551  [2024-12-09 04:16:03.008164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:34.551  [2024-12-09 04:16:03.014443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.551  [2024-12-09 04:16:03.014610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.551  [2024-12-09 04:16:03.014639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:34.551  [2024-12-09 04:16:03.020821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.551  [2024-12-09 04:16:03.021004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.551  [2024-12-09 04:16:03.021046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:34.551  [2024-12-09 04:16:03.027386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.551  [2024-12-09 04:16:03.027512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.551  [2024-12-09 04:16:03.027541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:34.551  [2024-12-09 04:16:03.033747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.551  [2024-12-09 04:16:03.033918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.551  [2024-12-09 04:16:03.033945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:34.551  [2024-12-09 04:16:03.040200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.551  [2024-12-09 04:16:03.040314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.551  [2024-12-09 04:16:03.040351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:34.551  [2024-12-09 04:16:03.046027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.551  [2024-12-09 04:16:03.046202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.551  [2024-12-09 04:16:03.046231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:34.551  [2024-12-09 04:16:03.052348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.551  [2024-12-09 04:16:03.052489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.551  [2024-12-09 04:16:03.052518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:34.551  [2024-12-09 04:16:03.059018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.551  [2024-12-09 04:16:03.059142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.551  [2024-12-09 04:16:03.059171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:34.551  [2024-12-09 04:16:03.065296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.551  [2024-12-09 04:16:03.065404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.551  [2024-12-09 04:16:03.065439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:34.551  [2024-12-09 04:16:03.071150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.551  [2024-12-09 04:16:03.071328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.551  [2024-12-09 04:16:03.071359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:34.551  [2024-12-09 04:16:03.077070] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.551  [2024-12-09 04:16:03.077184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.551  [2024-12-09 04:16:03.077213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:34.551  [2024-12-09 04:16:03.083781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.551  [2024-12-09 04:16:03.083858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.551  [2024-12-09 04:16:03.083886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:34.551  [2024-12-09 04:16:03.090224] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.551  [2024-12-09 04:16:03.090321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.551  [2024-12-09 04:16:03.090369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:34.551  [2024-12-09 04:16:03.096184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.551  [2024-12-09 04:16:03.096319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.551  [2024-12-09 04:16:03.096348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:34.551  [2024-12-09 04:16:03.101975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.551  [2024-12-09 04:16:03.102080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.551  [2024-12-09 04:16:03.102106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:34.551  [2024-12-09 04:16:03.108057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.551  [2024-12-09 04:16:03.108148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.551  [2024-12-09 04:16:03.108176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:34.551  [2024-12-09 04:16:03.114327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.551  [2024-12-09 04:16:03.114427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.551  [2024-12-09 04:16:03.114461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:34.551  [2024-12-09 04:16:03.120116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.551  [2024-12-09 04:16:03.120216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.551  [2024-12-09 04:16:03.120245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:34.810  [2024-12-09 04:16:03.126018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.810  [2024-12-09 04:16:03.126125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.810  [2024-12-09 04:16:03.126177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:34.810  [2024-12-09 04:16:03.132457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.810  [2024-12-09 04:16:03.132549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.810  [2024-12-09 04:16:03.132578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:34.810  [2024-12-09 04:16:03.138863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.810  [2024-12-09 04:16:03.138940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.810  [2024-12-09 04:16:03.138967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:34.810  [2024-12-09 04:16:03.145286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.810  [2024-12-09 04:16:03.145377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.810  [2024-12-09 04:16:03.145405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:34.810  [2024-12-09 04:16:03.151429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.810  [2024-12-09 04:16:03.151690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.810  [2024-12-09 04:16:03.151719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:34.810  [2024-12-09 04:16:03.157537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.810  [2024-12-09 04:16:03.157908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.810  [2024-12-09 04:16:03.157937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:34.810  [2024-12-09 04:16:03.163915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.810  [2024-12-09 04:16:03.164238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.810  [2024-12-09 04:16:03.164294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:34.810  [2024-12-09 04:16:03.169569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.810  [2024-12-09 04:16:03.169905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.810  [2024-12-09 04:16:03.169934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:34.810  [2024-12-09 04:16:03.175122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.810  [2024-12-09 04:16:03.175452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.810  [2024-12-09 04:16:03.175482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:34.810  [2024-12-09 04:16:03.180789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.810  [2024-12-09 04:16:03.181123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.810  [2024-12-09 04:16:03.181157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:34.810  [2024-12-09 04:16:03.186439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.810  [2024-12-09 04:16:03.186757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.810  [2024-12-09 04:16:03.186785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:34.810  [2024-12-09 04:16:03.192096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.810  [2024-12-09 04:16:03.192438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.810  [2024-12-09 04:16:03.192469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:34.810  [2024-12-09 04:16:03.197846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.810  [2024-12-09 04:16:03.198159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.810  [2024-12-09 04:16:03.198187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:34.810  [2024-12-09 04:16:03.203419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.810  [2024-12-09 04:16:03.203762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.810  [2024-12-09 04:16:03.203791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:34.810  [2024-12-09 04:16:03.209124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.810  [2024-12-09 04:16:03.209465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.810  [2024-12-09 04:16:03.209495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:34.810  [2024-12-09 04:16:03.214722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.810  [2024-12-09 04:16:03.215018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.811  [2024-12-09 04:16:03.215046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:34.811  [2024-12-09 04:16:03.220377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.811  [2024-12-09 04:16:03.220741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.811  [2024-12-09 04:16:03.220771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:34.811  [2024-12-09 04:16:03.226831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.811  [2024-12-09 04:16:03.227118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.811  [2024-12-09 04:16:03.227146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:34.811  [2024-12-09 04:16:03.232975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.811  [2024-12-09 04:16:03.233327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.811  [2024-12-09 04:16:03.233357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:34.811  [2024-12-09 04:16:03.239285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.811  [2024-12-09 04:16:03.239633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.811  [2024-12-09 04:16:03.239676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:34.811  [2024-12-09 04:16:03.245464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.811  [2024-12-09 04:16:03.245784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.811  [2024-12-09 04:16:03.245820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:34.811  [2024-12-09 04:16:03.251779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.811  [2024-12-09 04:16:03.252082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.811  [2024-12-09 04:16:03.252124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:34.811  [2024-12-09 04:16:03.258043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.811  [2024-12-09 04:16:03.258373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.811  [2024-12-09 04:16:03.258403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:34.811  [2024-12-09 04:16:03.264225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.811  [2024-12-09 04:16:03.264554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.811  [2024-12-09 04:16:03.264603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:34.811  [2024-12-09 04:16:03.270025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.811  [2024-12-09 04:16:03.270393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.811  [2024-12-09 04:16:03.270422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:34.811  [2024-12-09 04:16:03.275932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.811  [2024-12-09 04:16:03.276291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.811  [2024-12-09 04:16:03.276334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:34.811  [2024-12-09 04:16:03.282373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.811  [2024-12-09 04:16:03.282698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.811  [2024-12-09 04:16:03.282727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:34.811  [2024-12-09 04:16:03.288491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.811  [2024-12-09 04:16:03.288795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.811  [2024-12-09 04:16:03.288823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:34.811  [2024-12-09 04:16:03.294626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.811  [2024-12-09 04:16:03.294946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.811  [2024-12-09 04:16:03.294989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:34.811  [2024-12-09 04:16:03.300841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.811  [2024-12-09 04:16:03.301128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.811  [2024-12-09 04:16:03.301178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:34.811  [2024-12-09 04:16:03.306917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.811  [2024-12-09 04:16:03.307214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.811  [2024-12-09 04:16:03.307243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:34.811  [2024-12-09 04:16:03.313089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.811  [2024-12-09 04:16:03.313406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.811  [2024-12-09 04:16:03.313435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:34.811  [2024-12-09 04:16:03.319284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.811  [2024-12-09 04:16:03.319594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.811  [2024-12-09 04:16:03.319623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:34.811  [2024-12-09 04:16:03.325429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.811  [2024-12-09 04:16:03.325753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.811  [2024-12-09 04:16:03.325781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:34.811  [2024-12-09 04:16:03.331493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.811  [2024-12-09 04:16:03.331814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.811  [2024-12-09 04:16:03.331842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:34.811  [2024-12-09 04:16:03.337748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.811  [2024-12-09 04:16:03.338053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.811  [2024-12-09 04:16:03.338081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:34.811  [2024-12-09 04:16:03.343747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.811  [2024-12-09 04:16:03.344067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.811  [2024-12-09 04:16:03.344095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:34.811  [2024-12-09 04:16:03.350011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.811  [2024-12-09 04:16:03.350317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.811  [2024-12-09 04:16:03.350346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:34.811  [2024-12-09 04:16:03.356638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.811  [2024-12-09 04:16:03.357031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.811  [2024-12-09 04:16:03.357058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:34.811  [2024-12-09 04:16:03.363774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.811  [2024-12-09 04:16:03.364169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.811  [2024-12-09 04:16:03.364198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:34.811  [2024-12-09 04:16:03.370236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.811  [2024-12-09 04:16:03.370549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.811  [2024-12-09 04:16:03.370593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:34.811  [2024-12-09 04:16:03.376257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.811  [2024-12-09 04:16:03.376581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.811  [2024-12-09 04:16:03.376624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:34.811  [2024-12-09 04:16:03.382446] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:34.811  [2024-12-09 04:16:03.382840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.811  [2024-12-09 04:16:03.382869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.070  [2024-12-09 04:16:03.388541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.070  [2024-12-09 04:16:03.388830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.070  [2024-12-09 04:16:03.388874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.070  [2024-12-09 04:16:03.394572] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.070  [2024-12-09 04:16:03.394891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.070  [2024-12-09 04:16:03.394920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:35.070  [2024-12-09 04:16:03.400236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.070  [2024-12-09 04:16:03.400559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.070  [2024-12-09 04:16:03.400590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:35.070  [2024-12-09 04:16:03.406438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.070  [2024-12-09 04:16:03.406845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.070  [2024-12-09 04:16:03.406882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.070  [2024-12-09 04:16:03.413406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.070  [2024-12-09 04:16:03.413770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.070  [2024-12-09 04:16:03.413800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.070  [2024-12-09 04:16:03.420470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.070  [2024-12-09 04:16:03.420804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.070  [2024-12-09 04:16:03.420838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:35.070  [2024-12-09 04:16:03.427615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.070  [2024-12-09 04:16:03.427972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.070  [2024-12-09 04:16:03.428001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:35.070  [2024-12-09 04:16:03.435443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.070  [2024-12-09 04:16:03.435768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.070  [2024-12-09 04:16:03.435796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.070  [2024-12-09 04:16:03.442889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.070  [2024-12-09 04:16:03.443300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.070  [2024-12-09 04:16:03.443346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.070  [2024-12-09 04:16:03.448862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.070  [2024-12-09 04:16:03.449158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.070  [2024-12-09 04:16:03.449202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:35.070  [2024-12-09 04:16:03.455003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.070  [2024-12-09 04:16:03.455324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.070  [2024-12-09 04:16:03.455368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:35.070  [2024-12-09 04:16:03.461295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.071  [2024-12-09 04:16:03.461604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.071  [2024-12-09 04:16:03.461633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.071  [2024-12-09 04:16:03.467792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.071  [2024-12-09 04:16:03.468103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.071  [2024-12-09 04:16:03.468154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.071  [2024-12-09 04:16:03.474285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.071  [2024-12-09 04:16:03.474685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.071  [2024-12-09 04:16:03.474713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:35.071  [2024-12-09 04:16:03.480694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.071  [2024-12-09 04:16:03.481076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.071  [2024-12-09 04:16:03.481103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:35.071  [2024-12-09 04:16:03.487009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.071  [2024-12-09 04:16:03.487377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.071  [2024-12-09 04:16:03.487407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.071  [2024-12-09 04:16:03.493464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.071  [2024-12-09 04:16:03.493792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.071  [2024-12-09 04:16:03.493820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.071  [2024-12-09 04:16:03.499550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.071  [2024-12-09 04:16:03.499942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.071  [2024-12-09 04:16:03.499970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:35.071  [2024-12-09 04:16:03.506122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.071  [2024-12-09 04:16:03.506492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.071  [2024-12-09 04:16:03.506521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:35.071  [2024-12-09 04:16:03.512862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.071  [2024-12-09 04:16:03.513146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.071  [2024-12-09 04:16:03.513175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.071  [2024-12-09 04:16:03.519773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.071  [2024-12-09 04:16:03.520142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.071  [2024-12-09 04:16:03.520170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.071  [2024-12-09 04:16:03.526522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.071  [2024-12-09 04:16:03.526794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.071  [2024-12-09 04:16:03.526826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:35.071  [2024-12-09 04:16:03.533603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.071  [2024-12-09 04:16:03.533918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.071  [2024-12-09 04:16:03.533945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:35.071  [2024-12-09 04:16:03.540590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.071  [2024-12-09 04:16:03.540851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.071  [2024-12-09 04:16:03.540896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.071  [2024-12-09 04:16:03.547464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.071  [2024-12-09 04:16:03.547771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.071  [2024-12-09 04:16:03.547799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.071  [2024-12-09 04:16:03.554226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.071  [2024-12-09 04:16:03.554564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.071  [2024-12-09 04:16:03.554608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:35.071  [2024-12-09 04:16:03.561034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.071  [2024-12-09 04:16:03.561415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.071  [2024-12-09 04:16:03.561445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:35.071  [2024-12-09 04:16:03.568194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.071  [2024-12-09 04:16:03.568552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.071  [2024-12-09 04:16:03.568596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.071  [2024-12-09 04:16:03.575116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.071  [2024-12-09 04:16:03.575425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.071  [2024-12-09 04:16:03.575455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.071  [2024-12-09 04:16:03.581957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.071  [2024-12-09 04:16:03.582224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.071  [2024-12-09 04:16:03.582282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:35.071  [2024-12-09 04:16:03.588728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.071  [2024-12-09 04:16:03.588989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.071  [2024-12-09 04:16:03.589017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:35.071  [2024-12-09 04:16:03.595553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.071  [2024-12-09 04:16:03.595900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.071  [2024-12-09 04:16:03.595928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.071  [2024-12-09 04:16:03.602445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.071  [2024-12-09 04:16:03.602711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.071  [2024-12-09 04:16:03.602739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.071  [2024-12-09 04:16:03.609104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.071  [2024-12-09 04:16:03.609498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.071  [2024-12-09 04:16:03.609527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:35.072  [2024-12-09 04:16:03.616105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.072  [2024-12-09 04:16:03.616482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.072  [2024-12-09 04:16:03.616512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:35.072  [2024-12-09 04:16:03.623222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.072  [2024-12-09 04:16:03.623495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.072  [2024-12-09 04:16:03.623525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.072  [2024-12-09 04:16:03.630137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.072  [2024-12-09 04:16:03.630484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.072  [2024-12-09 04:16:03.630513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.072  [2024-12-09 04:16:03.637087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.072  [2024-12-09 04:16:03.637463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.072  [2024-12-09 04:16:03.637493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:35.330       4785.00 IOPS,   598.12 MiB/s
00:25:35.330  [2024-12-09 04:16:03.645424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.330  [2024-12-09 04:16:03.645778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.330  [2024-12-09 04:16:03.645808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:35.330  [2024-12-09 04:16:03.650748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.330  [2024-12-09 04:16:03.650983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.330  [2024-12-09 04:16:03.651010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.330  [2024-12-09 04:16:03.655868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.330  [2024-12-09 04:16:03.656111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.330  [2024-12-09 04:16:03.656140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.330  [2024-12-09 04:16:03.660946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.330  [2024-12-09 04:16:03.661180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.330  [2024-12-09 04:16:03.661208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:35.330  [2024-12-09 04:16:03.666036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.330  [2024-12-09 04:16:03.666309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.330  [2024-12-09 04:16:03.666338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:35.330  [2024-12-09 04:16:03.671203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.330  [2024-12-09 04:16:03.671490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.330  [2024-12-09 04:16:03.671524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.330  [2024-12-09 04:16:03.676708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.330  [2024-12-09 04:16:03.676951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.330  [2024-12-09 04:16:03.676984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.330  [2024-12-09 04:16:03.682457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.330  [2024-12-09 04:16:03.682723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.331  [2024-12-09 04:16:03.682752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:35.331  [2024-12-09 04:16:03.688212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.331  [2024-12-09 04:16:03.688461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.331  [2024-12-09 04:16:03.688490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:35.331  [2024-12-09 04:16:03.693869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.331  [2024-12-09 04:16:03.694103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.331  [2024-12-09 04:16:03.694136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.331  [2024-12-09 04:16:03.699781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.331  [2024-12-09 04:16:03.700012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.331  [2024-12-09 04:16:03.700041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.331  [2024-12-09 04:16:03.705476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.331  [2024-12-09 04:16:03.705698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.331  [2024-12-09 04:16:03.705727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:35.331  [2024-12-09 04:16:03.711330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.331  [2024-12-09 04:16:03.711564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.331  [2024-12-09 04:16:03.711594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:35.331  [2024-12-09 04:16:03.716952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.331  [2024-12-09 04:16:03.717167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.331  [2024-12-09 04:16:03.717195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.331  [2024-12-09 04:16:03.722418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.331  [2024-12-09 04:16:03.722622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.331  [2024-12-09 04:16:03.722655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.331  [2024-12-09 04:16:03.727897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.331  [2024-12-09 04:16:03.728098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.331  [2024-12-09 04:16:03.728126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:35.331  [2024-12-09 04:16:03.733482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.331  [2024-12-09 04:16:03.733688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.331  [2024-12-09 04:16:03.733738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:35.331  [2024-12-09 04:16:03.739230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.331  [2024-12-09 04:16:03.739533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.331  [2024-12-09 04:16:03.739587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.331  [2024-12-09 04:16:03.745406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.331  [2024-12-09 04:16:03.745664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.331  [2024-12-09 04:16:03.745691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.331  [2024-12-09 04:16:03.751907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.331  [2024-12-09 04:16:03.752092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.331  [2024-12-09 04:16:03.752119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:35.331  [2024-12-09 04:16:03.758131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.331  [2024-12-09 04:16:03.758472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.331  [2024-12-09 04:16:03.758501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:35.331  [2024-12-09 04:16:03.764682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.331  [2024-12-09 04:16:03.764936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.331  [2024-12-09 04:16:03.764964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.331  [2024-12-09 04:16:03.771429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.331  [2024-12-09 04:16:03.771726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.331  [2024-12-09 04:16:03.771754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.331  [2024-12-09 04:16:03.776748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.331  [2024-12-09 04:16:03.777018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.331  [2024-12-09 04:16:03.777046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:35.331  [2024-12-09 04:16:03.782344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.331  [2024-12-09 04:16:03.782643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.331  [2024-12-09 04:16:03.782676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:35.331  [2024-12-09 04:16:03.787957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.331  [2024-12-09 04:16:03.788198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.331  [2024-12-09 04:16:03.788227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.331  [2024-12-09 04:16:03.793601] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.331  [2024-12-09 04:16:03.793937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.331  [2024-12-09 04:16:03.793967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.331  [2024-12-09 04:16:03.799172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.331  [2024-12-09 04:16:03.799510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.331  [2024-12-09 04:16:03.799544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:35.331  [2024-12-09 04:16:03.804809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.331  [2024-12-09 04:16:03.805099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.331  [2024-12-09 04:16:03.805127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:35.331  [2024-12-09 04:16:03.810309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.331  [2024-12-09 04:16:03.810574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.331  [2024-12-09 04:16:03.810617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.331  [2024-12-09 04:16:03.815822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.331  [2024-12-09 04:16:03.816163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.331  [2024-12-09 04:16:03.816191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.331  [2024-12-09 04:16:03.821408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.331  [2024-12-09 04:16:03.821777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.331  [2024-12-09 04:16:03.821805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:35.331  [2024-12-09 04:16:03.827368] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.331  [2024-12-09 04:16:03.827611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.331  [2024-12-09 04:16:03.827638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:35.331  [2024-12-09 04:16:03.832824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.331  [2024-12-09 04:16:03.833206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.331  [2024-12-09 04:16:03.833233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.332  [2024-12-09 04:16:03.838617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.332  [2024-12-09 04:16:03.838891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.332  [2024-12-09 04:16:03.838922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.332  [2024-12-09 04:16:03.844207] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.332  [2024-12-09 04:16:03.844492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.332  [2024-12-09 04:16:03.844520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:35.332  [2024-12-09 04:16:03.849296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.332  [2024-12-09 04:16:03.849547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.332  [2024-12-09 04:16:03.849593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:35.332  [2024-12-09 04:16:03.854969] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.332  [2024-12-09 04:16:03.855246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.332  [2024-12-09 04:16:03.855301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.332  [2024-12-09 04:16:03.860503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.332  [2024-12-09 04:16:03.860728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.332  [2024-12-09 04:16:03.860755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.332  [2024-12-09 04:16:03.866225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.332  [2024-12-09 04:16:03.866496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.332  [2024-12-09 04:16:03.866525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:35.332  [2024-12-09 04:16:03.871719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.332  [2024-12-09 04:16:03.872005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.332  [2024-12-09 04:16:03.872032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:35.332  [2024-12-09 04:16:03.877490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.332  [2024-12-09 04:16:03.877764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.332  [2024-12-09 04:16:03.877792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.332  [2024-12-09 04:16:03.882909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.332  [2024-12-09 04:16:03.883207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.332  [2024-12-09 04:16:03.883235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.332  [2024-12-09 04:16:03.888407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.332  [2024-12-09 04:16:03.888628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.332  [2024-12-09 04:16:03.888663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:35.332  [2024-12-09 04:16:03.894063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.332  [2024-12-09 04:16:03.894403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.332  [2024-12-09 04:16:03.894432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:35.332  [2024-12-09 04:16:03.899682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.332  [2024-12-09 04:16:03.899853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.332  [2024-12-09 04:16:03.899881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.591  [2024-12-09 04:16:03.905303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.591  [2024-12-09 04:16:03.905549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.591  [2024-12-09 04:16:03.905578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.591  [2024-12-09 04:16:03.910818] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.591  [2024-12-09 04:16:03.911037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.591  [2024-12-09 04:16:03.911067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:35.591  [2024-12-09 04:16:03.916552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.591  [2024-12-09 04:16:03.916790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.591  [2024-12-09 04:16:03.916819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:35.591  [2024-12-09 04:16:03.922002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.591  [2024-12-09 04:16:03.922225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.591  [2024-12-09 04:16:03.922258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.591  [2024-12-09 04:16:03.927765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.591  [2024-12-09 04:16:03.927990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.591  [2024-12-09 04:16:03.928017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.591  [2024-12-09 04:16:03.933617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.591  [2024-12-09 04:16:03.933828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.591  [2024-12-09 04:16:03.933857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:35.591  [2024-12-09 04:16:03.938924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.591  [2024-12-09 04:16:03.939168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.591  [2024-12-09 04:16:03.939196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:35.591  [2024-12-09 04:16:03.944045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.591  [2024-12-09 04:16:03.944269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.591  [2024-12-09 04:16:03.944308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.591  [2024-12-09 04:16:03.949467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.591  [2024-12-09 04:16:03.949703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.591  [2024-12-09 04:16:03.949731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.591  [2024-12-09 04:16:03.956258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.591  [2024-12-09 04:16:03.956603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.591  [2024-12-09 04:16:03.956632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:35.591  [2024-12-09 04:16:03.961845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.591  [2024-12-09 04:16:03.962062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.591  [2024-12-09 04:16:03.962092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:35.591  [2024-12-09 04:16:03.967347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.591  [2024-12-09 04:16:03.967713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.591  [2024-12-09 04:16:03.967742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.591  [2024-12-09 04:16:03.973141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.591  [2024-12-09 04:16:03.973379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.591  [2024-12-09 04:16:03.973409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.591  [2024-12-09 04:16:03.978792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.591  [2024-12-09 04:16:03.979127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.591  [2024-12-09 04:16:03.979156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:35.591  [2024-12-09 04:16:03.984521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.591  [2024-12-09 04:16:03.984833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.591  [2024-12-09 04:16:03.984862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:35.591  [2024-12-09 04:16:03.990058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.591  [2024-12-09 04:16:03.990428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.592  [2024-12-09 04:16:03.990457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.592  [2024-12-09 04:16:03.995977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.592  [2024-12-09 04:16:03.996250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.592  [2024-12-09 04:16:03.996293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.592  [2024-12-09 04:16:04.001584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.592  [2024-12-09 04:16:04.001863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.592  [2024-12-09 04:16:04.001891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:35.592  [2024-12-09 04:16:04.007096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.592  [2024-12-09 04:16:04.007493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.592  [2024-12-09 04:16:04.007523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:35.592  [2024-12-09 04:16:04.012893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.592  [2024-12-09 04:16:04.013183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.592  [2024-12-09 04:16:04.013212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.592  [2024-12-09 04:16:04.018510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.592  [2024-12-09 04:16:04.018753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.592  [2024-12-09 04:16:04.018787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.592  [2024-12-09 04:16:04.024980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.592  [2024-12-09 04:16:04.025192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.592  [2024-12-09 04:16:04.025220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:35.592  [2024-12-09 04:16:04.030155] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.592  [2024-12-09 04:16:04.030397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.592  [2024-12-09 04:16:04.030425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:35.592  [2024-12-09 04:16:04.035544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.592  [2024-12-09 04:16:04.035853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.592  [2024-12-09 04:16:04.035889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.592  [2024-12-09 04:16:04.041120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.592  [2024-12-09 04:16:04.041468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.592  [2024-12-09 04:16:04.041496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.592  [2024-12-09 04:16:04.046728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.592  [2024-12-09 04:16:04.046946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.592  [2024-12-09 04:16:04.046974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:35.592  [2024-12-09 04:16:04.052315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.592  [2024-12-09 04:16:04.052532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.592  [2024-12-09 04:16:04.052560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:35.592  [2024-12-09 04:16:04.058040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.592  [2024-12-09 04:16:04.058376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.592  [2024-12-09 04:16:04.058405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.592  [2024-12-09 04:16:04.063744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.592  [2024-12-09 04:16:04.063940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.592  [2024-12-09 04:16:04.063968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.592  [2024-12-09 04:16:04.068816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.592  [2024-12-09 04:16:04.069036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.592  [2024-12-09 04:16:04.069065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:35.592  [2024-12-09 04:16:04.073912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.592  [2024-12-09 04:16:04.074124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.592  [2024-12-09 04:16:04.074153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:35.592  [2024-12-09 04:16:04.079026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.592  [2024-12-09 04:16:04.079288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.592  [2024-12-09 04:16:04.079317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.592  [2024-12-09 04:16:04.084115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.592  [2024-12-09 04:16:04.084372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.592  [2024-12-09 04:16:04.084402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.592  [2024-12-09 04:16:04.089136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.592  [2024-12-09 04:16:04.089387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.592  [2024-12-09 04:16:04.089418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:35.592  [2024-12-09 04:16:04.094186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.592  [2024-12-09 04:16:04.094414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.592  [2024-12-09 04:16:04.094444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:35.592  [2024-12-09 04:16:04.099982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.592  [2024-12-09 04:16:04.100202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.593  [2024-12-09 04:16:04.100229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.593  [2024-12-09 04:16:04.105625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.593  [2024-12-09 04:16:04.105839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.593  [2024-12-09 04:16:04.105868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.593  [2024-12-09 04:16:04.111013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.593  [2024-12-09 04:16:04.111240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.593  [2024-12-09 04:16:04.111269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:35.593  [2024-12-09 04:16:04.116505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.593  [2024-12-09 04:16:04.116748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.593  [2024-12-09 04:16:04.116777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:35.593  [2024-12-09 04:16:04.122092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.593  [2024-12-09 04:16:04.122294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.593  [2024-12-09 04:16:04.122323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.593  [2024-12-09 04:16:04.127574] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.593  [2024-12-09 04:16:04.127781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.593  [2024-12-09 04:16:04.127809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.593  [2024-12-09 04:16:04.133213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.593  [2024-12-09 04:16:04.133428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.593  [2024-12-09 04:16:04.133457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:35.593  [2024-12-09 04:16:04.139481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.593  [2024-12-09 04:16:04.139751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.593  [2024-12-09 04:16:04.139779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:35.593  [2024-12-09 04:16:04.146120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.593  [2024-12-09 04:16:04.146347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.593  [2024-12-09 04:16:04.146377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.593  [2024-12-09 04:16:04.151315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.593  [2024-12-09 04:16:04.151518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.593  [2024-12-09 04:16:04.151553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.593  [2024-12-09 04:16:04.156413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.593  [2024-12-09 04:16:04.156631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.593  [2024-12-09 04:16:04.156659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:35.593  [2024-12-09 04:16:04.161510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.593  [2024-12-09 04:16:04.161732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.593  [2024-12-09 04:16:04.161761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:35.852  [2024-12-09 04:16:04.166538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.852  [2024-12-09 04:16:04.166752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.852  [2024-12-09 04:16:04.166801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.852  [2024-12-09 04:16:04.171737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.852  [2024-12-09 04:16:04.171929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.852  [2024-12-09 04:16:04.171963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.852  [2024-12-09 04:16:04.176861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.852  [2024-12-09 04:16:04.177081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.852  [2024-12-09 04:16:04.177122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:35.852  [2024-12-09 04:16:04.182001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.852  [2024-12-09 04:16:04.182221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.852  [2024-12-09 04:16:04.182250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:35.852  [2024-12-09 04:16:04.187096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.852  [2024-12-09 04:16:04.187324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.852  [2024-12-09 04:16:04.187353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.852  [2024-12-09 04:16:04.192523] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.852  [2024-12-09 04:16:04.192876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.852  [2024-12-09 04:16:04.192909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.852  [2024-12-09 04:16:04.198042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.852  [2024-12-09 04:16:04.198346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.852  [2024-12-09 04:16:04.198376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:35.852  [2024-12-09 04:16:04.203904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.852  [2024-12-09 04:16:04.204130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.852  [2024-12-09 04:16:04.204158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:35.852  [2024-12-09 04:16:04.210300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.852  [2024-12-09 04:16:04.210518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.852  [2024-12-09 04:16:04.210548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.852  [2024-12-09 04:16:04.215892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.852  [2024-12-09 04:16:04.216261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.852  [2024-12-09 04:16:04.216298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.852  [2024-12-09 04:16:04.221480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.852  [2024-12-09 04:16:04.221724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.852  [2024-12-09 04:16:04.221752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:35.852  [2024-12-09 04:16:04.226954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.852  [2024-12-09 04:16:04.227168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.852  [2024-12-09 04:16:04.227202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:35.852  [2024-12-09 04:16:04.232651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.852  [2024-12-09 04:16:04.232953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.852  [2024-12-09 04:16:04.232986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.853  [2024-12-09 04:16:04.238444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.853  [2024-12-09 04:16:04.238668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.853  [2024-12-09 04:16:04.238697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.853  [2024-12-09 04:16:04.243889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.853  [2024-12-09 04:16:04.244133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.853  [2024-12-09 04:16:04.244162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:35.853  [2024-12-09 04:16:04.250173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.853  [2024-12-09 04:16:04.250504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.853  [2024-12-09 04:16:04.250543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:35.853  [2024-12-09 04:16:04.255996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.853  [2024-12-09 04:16:04.256369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.853  [2024-12-09 04:16:04.256398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.853  [2024-12-09 04:16:04.261751] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.853  [2024-12-09 04:16:04.262065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.853  [2024-12-09 04:16:04.262096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.853  [2024-12-09 04:16:04.267302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.853  [2024-12-09 04:16:04.267526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.853  [2024-12-09 04:16:04.267554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:35.853  [2024-12-09 04:16:04.272879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.853  [2024-12-09 04:16:04.273112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.853  [2024-12-09 04:16:04.273141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:35.853  [2024-12-09 04:16:04.278435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.853  [2024-12-09 04:16:04.278699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.853  [2024-12-09 04:16:04.278728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.853  [2024-12-09 04:16:04.284186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.853  [2024-12-09 04:16:04.284398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.853  [2024-12-09 04:16:04.284427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.853  [2024-12-09 04:16:04.289862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.853  [2024-12-09 04:16:04.290113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.853  [2024-12-09 04:16:04.290141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:35.853  [2024-12-09 04:16:04.295635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.853  [2024-12-09 04:16:04.295924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.853  [2024-12-09 04:16:04.295953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:35.853  [2024-12-09 04:16:04.301195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.853  [2024-12-09 04:16:04.301440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.853  [2024-12-09 04:16:04.301470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.853  [2024-12-09 04:16:04.306926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.853  [2024-12-09 04:16:04.307140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.853  [2024-12-09 04:16:04.307184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.853  [2024-12-09 04:16:04.312550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.853  [2024-12-09 04:16:04.312848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.853  [2024-12-09 04:16:04.312883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:35.853  [2024-12-09 04:16:04.318117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.853  [2024-12-09 04:16:04.318315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.853  [2024-12-09 04:16:04.318346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:35.853  [2024-12-09 04:16:04.323724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.853  [2024-12-09 04:16:04.324027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.853  [2024-12-09 04:16:04.324079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:35.853  [2024-12-09 04:16:04.329220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:35.853  [2024-12-09 04:16:04.329487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.853  [2024-12-09 04:16:04.329517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:35.853  [... 55 further identical triplets omitted: tcp.c:2241:data_crc32_calc_done *ERROR* Data digest error on tqpair=(0x2100170), followed by the WRITE command print and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion; only the timestamp, lba, cid, and sqhd fields differ ...]
00:25:36.115  [2024-12-09 04:16:04.642264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8
00:25:36.115  [2024-12-09 04:16:04.644033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:36.115  [2024-12-09 04:16:04.644064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:36.115       5161.00 IOPS,   645.12 MiB/s
00:25:36.115                                                                                                  Latency(us)
00:25:36.115  
[2024-12-09T03:16:04.691Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:36.115  Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:25:36.115  	 nvme0n1             :       2.00    5158.88     644.86       0.00     0.00    3093.23    2366.58   12233.39
00:25:36.115  
[2024-12-09T03:16:04.691Z]  ===================================================================================================================
00:25:36.115  
[2024-12-09T03:16:04.691Z]  Total                       :               5158.88     644.86       0.00     0.00    3093.23    2366.58   12233.39
00:25:36.115  {
00:25:36.115    "results": [
00:25:36.115      {
00:25:36.115        "job": "nvme0n1",
00:25:36.115        "core_mask": "0x2",
00:25:36.115        "workload": "randwrite",
00:25:36.115        "status": "finished",
00:25:36.115        "queue_depth": 16,
00:25:36.115        "io_size": 131072,
00:25:36.115        "runtime": 2.003923,
00:25:36.115        "iops": 5158.88085520252,
00:25:36.115        "mibps": 644.860106900315,
00:25:36.115        "io_failed": 0,
00:25:36.115        "io_timeout": 0,
00:25:36.115        "avg_latency_us": 3093.232355280411,
00:25:36.115        "min_latency_us": 2366.5777777777776,
00:25:36.115        "max_latency_us": 12233.386666666667
00:25:36.115      }
00:25:36.115    ],
00:25:36.115    "core_count": 1
00:25:36.115  }
00:25:36.115    04:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:36.115    04:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:36.115    04:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:36.115  			| .driver_specific
00:25:36.115  			| .nvme_error
00:25:36.115  			| .status_code
00:25:36.115  			| .command_transient_transport_error'
00:25:36.116    04:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:25:36.374   04:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 334 > 0 ))
00:25:36.374   04:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 338599
00:25:36.374   04:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 338599 ']'
00:25:36.374   04:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 338599
00:25:36.374    04:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:25:36.374   04:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:36.374    04:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 338599
00:25:36.632   04:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:25:36.632   04:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:25:36.632   04:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 338599'
00:25:36.632  killing process with pid 338599
00:25:36.632   04:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 338599
00:25:36.632  Received shutdown signal, test time was about 2.000000 seconds
00:25:36.632  
00:25:36.632                                                                                                  Latency(us)
00:25:36.632  
[2024-12-09T03:16:05.208Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:36.632  
[2024-12-09T03:16:05.208Z]  ===================================================================================================================
00:25:36.632  
[2024-12-09T03:16:05.208Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:25:36.632   04:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 338599
00:25:36.890   04:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 337208
00:25:36.890   04:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 337208 ']'
00:25:36.890   04:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 337208
00:25:36.890    04:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:25:36.890   04:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:36.890    04:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 337208
00:25:36.890   04:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:36.890   04:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:36.890   04:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 337208'
00:25:36.890  killing process with pid 337208
00:25:36.890   04:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 337208
00:25:36.890   04:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 337208
00:25:37.162  
00:25:37.162  real	0m15.661s
00:25:37.162  user	0m31.332s
00:25:37.162  sys	0m4.378s
00:25:37.162   04:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:37.162   04:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:37.162  ************************************
00:25:37.162  END TEST nvmf_digest_error
00:25:37.162  ************************************
00:25:37.162   04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:25:37.162   04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:25:37.162   04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:37.162   04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
00:25:37.162   04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:37.162   04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e
00:25:37.162   04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:37.162   04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:37.162  rmmod nvme_tcp
00:25:37.162  rmmod nvme_fabrics
00:25:37.162  rmmod nvme_keyring
00:25:37.162   04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:37.162   04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e
00:25:37.162   04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0
00:25:37.162   04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 337208 ']'
00:25:37.162   04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 337208
00:25:37.162   04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 337208 ']'
00:25:37.162   04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 337208
00:25:37.162  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (337208) - No such process
00:25:37.162   04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 337208 is not found'
00:25:37.162  Process with pid 337208 is not found
00:25:37.162   04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:25:37.162   04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:25:37.162   04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:25:37.162   04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr
00:25:37.162   04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save
00:25:37.162   04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:25:37.162   04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore
00:25:37.162   04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:37.162   04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns
00:25:37.162   04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:37.162   04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:37.162    04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:39.250   04:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:25:39.250  
00:25:39.250  real	0m35.899s
00:25:39.250  user	1m3.314s
00:25:39.250  sys	0m10.500s
00:25:39.250   04:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:39.250   04:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:25:39.250  ************************************
00:25:39.250  END TEST nvmf_digest
00:25:39.250  ************************************
00:25:39.250   04:16:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]]
00:25:39.250   04:16:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]]
00:25:39.250   04:16:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]]
00:25:39.250   04:16:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:25:39.250   04:16:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:25:39.250   04:16:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:39.250   04:16:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.250  ************************************
00:25:39.250  START TEST nvmf_bdevperf
00:25:39.250  ************************************
00:25:39.250   04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:25:39.250  * Looking for test storage...
00:25:39.250  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:25:39.250    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:25:39.250     04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version
00:25:39.250     04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-:
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-:
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<'
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 ))
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:39.555     04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1
00:25:39.555     04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1
00:25:39.555     04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:39.555     04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1
00:25:39.555     04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2
00:25:39.555     04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2
00:25:39.555     04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:39.555     04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:25:39.555  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:39.555  		--rc genhtml_branch_coverage=1
00:25:39.555  		--rc genhtml_function_coverage=1
00:25:39.555  		--rc genhtml_legend=1
00:25:39.555  		--rc geninfo_all_blocks=1
00:25:39.555  		--rc geninfo_unexecuted_blocks=1
00:25:39.555  		
00:25:39.555  		'
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:25:39.555  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:39.555  		--rc genhtml_branch_coverage=1
00:25:39.555  		--rc genhtml_function_coverage=1
00:25:39.555  		--rc genhtml_legend=1
00:25:39.555  		--rc geninfo_all_blocks=1
00:25:39.555  		--rc geninfo_unexecuted_blocks=1
00:25:39.555  		
00:25:39.555  		'
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:25:39.555  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:39.555  		--rc genhtml_branch_coverage=1
00:25:39.555  		--rc genhtml_function_coverage=1
00:25:39.555  		--rc genhtml_legend=1
00:25:39.555  		--rc geninfo_all_blocks=1
00:25:39.555  		--rc geninfo_unexecuted_blocks=1
00:25:39.555  		
00:25:39.555  		'
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:25:39.555  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:39.555  		--rc genhtml_branch_coverage=1
00:25:39.555  		--rc genhtml_function_coverage=1
00:25:39.555  		--rc genhtml_legend=1
00:25:39.555  		--rc geninfo_all_blocks=1
00:25:39.555  		--rc geninfo_unexecuted_blocks=1
00:25:39.555  		
00:25:39.555  		'
00:25:39.555   04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:25:39.555     04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:25:39.555     04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:25:39.555    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:25:39.555     04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob
00:25:39.555     04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:39.555     04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:39.555     04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:39.555      04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:39.556      04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:39.556      04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:39.556      04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH
00:25:39.556      04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:39.556    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0
00:25:39.556    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:25:39.556    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:25:39.556    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:25:39.556    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:39.556    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:39.556    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:25:39.556  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:25:39.556    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:25:39.556    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:25:39.556    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0
00:25:39.556   04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64
00:25:39.556   04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:25:39.556   04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit
00:25:39.556   04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:25:39.556   04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:25:39.556   04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs
00:25:39.556   04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no
00:25:39.556   04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns
00:25:39.556   04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:39.556   04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:39.556    04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:39.556   04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:25:39.556   04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:25:39.556   04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable
00:25:39.556   04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=()
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=()
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=()
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=()
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=()
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=()
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=()
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:25:41.486  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:25:41.486  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]]
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:25:41.486  Found net devices under 0000:0a:00.0: cvl_0_0
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:41.486   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]]
00:25:41.487   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:25:41.487   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:41.487   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:25:41.487  Found net devices under 0000:0a:00.1: cvl_0_1
00:25:41.487   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:25:41.487   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:25:41.487   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes
00:25:41.487   04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:25:41.487   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:25:41.487   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:25:41.487   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:25:41.487   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:25:41.487   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:25:41.487   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:25:41.487   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:25:41.487   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:25:41.487   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:25:41.487   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:25:41.487   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:25:41.487   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:25:41.487   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:41.487   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:25:41.487   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:25:41.487   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:25:41.487   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:41.744   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:41.744   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:41.744   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:25:41.744   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:41.744   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:41.744   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:41.744   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:25:41.744   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:25:41.744  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:41.744  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms
00:25:41.744  
00:25:41.744  --- 10.0.0.2 ping statistics ---
00:25:41.744  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:41.744  rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms
00:25:41.744   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:41.744  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:41.744  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms
00:25:41.744  
00:25:41.744  --- 10.0.0.1 ping statistics ---
00:25:41.744  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:41.744  rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms
00:25:41.744   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:41.744   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0
00:25:41.744   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:25:41.744   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:41.744   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:25:41.744   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:25:41.744   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:41.744   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:25:41.744   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:25:41.744   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init
00:25:41.744   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:25:41.744   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:25:41.744   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:41.744   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:41.744   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=341089
00:25:41.744   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:25:41.744   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 341089
00:25:41.744   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 341089 ']'
00:25:41.744   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:41.744   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:41.744   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:41.744  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:41.744   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:41.744   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:41.745  [2024-12-09 04:16:10.271804] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:25:41.745  [2024-12-09 04:16:10.271903] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:42.002  [2024-12-09 04:16:10.347203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:25:42.002  [2024-12-09 04:16:10.406849] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:42.002  [2024-12-09 04:16:10.406914] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:42.002  [2024-12-09 04:16:10.406927] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:42.002  [2024-12-09 04:16:10.406938] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:42.002  [2024-12-09 04:16:10.406948] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:42.002  [2024-12-09 04:16:10.408435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:25:42.002  [2024-12-09 04:16:10.408459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:25:42.002  [2024-12-09 04:16:10.408463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:25:42.002   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:42.002   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:25:42.002   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:25:42.002   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:25:42.002   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:42.002   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:42.002   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:25:42.002   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.002   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:42.002  [2024-12-09 04:16:10.553998] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:42.002   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.002   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:25:42.002   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.002   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:42.260  Malloc0
00:25:42.260   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.260   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:42.260   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.260   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:42.260   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.260   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:42.260   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.260   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:42.260   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.260   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:42.260   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.260   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:42.260  [2024-12-09 04:16:10.621425] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:42.260   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.260   04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:25:42.260    04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:25:42.260    04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
00:25:42.260    04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
00:25:42.260    04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:25:42.260    04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:25:42.260  {
00:25:42.260    "params": {
00:25:42.260      "name": "Nvme$subsystem",
00:25:42.260      "trtype": "$TEST_TRANSPORT",
00:25:42.260      "traddr": "$NVMF_FIRST_TARGET_IP",
00:25:42.260      "adrfam": "ipv4",
00:25:42.260      "trsvcid": "$NVMF_PORT",
00:25:42.260      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:25:42.260      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:25:42.260      "hdgst": ${hdgst:-false},
00:25:42.260      "ddgst": ${ddgst:-false}
00:25:42.260    },
00:25:42.260    "method": "bdev_nvme_attach_controller"
00:25:42.260  }
00:25:42.260  EOF
00:25:42.260  )")
00:25:42.260     04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat
00:25:42.260    04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq .
00:25:42.260     04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=,
00:25:42.260     04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:25:42.260    "params": {
00:25:42.260      "name": "Nvme1",
00:25:42.260      "trtype": "tcp",
00:25:42.260      "traddr": "10.0.0.2",
00:25:42.260      "adrfam": "ipv4",
00:25:42.260      "trsvcid": "4420",
00:25:42.260      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:25:42.260      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:25:42.260      "hdgst": false,
00:25:42.260      "ddgst": false
00:25:42.260    },
00:25:42.260    "method": "bdev_nvme_attach_controller"
00:25:42.260  }'
00:25:42.260  [2024-12-09 04:16:10.671204] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:25:42.260  [2024-12-09 04:16:10.671330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid341123 ]
00:25:42.260  [2024-12-09 04:16:10.739864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:42.260  [2024-12-09 04:16:10.801490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:42.826  Running I/O for 1 seconds...
00:25:43.759       8374.00 IOPS,    32.71 MiB/s
00:25:43.759                                                                                                  Latency(us)
00:25:43.759  
[2024-12-09T03:16:12.335Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:43.759  Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:43.759  	 Verification LBA range: start 0x0 length 0x4000
00:25:43.759  	 Nvme1n1             :       1.05    8117.89      31.71       0.00     0.00   15113.63    3422.44   43302.31
00:25:43.759  
[2024-12-09T03:16:12.335Z]  ===================================================================================================================
00:25:43.759  
[2024-12-09T03:16:12.335Z]  Total                       :               8117.89      31.71       0.00     0.00   15113.63    3422.44   43302.31
00:25:44.017   04:16:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=341384
00:25:44.017   04:16:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:25:44.017   04:16:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:25:44.017    04:16:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:25:44.017    04:16:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
00:25:44.017    04:16:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
00:25:44.017    04:16:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:25:44.017    04:16:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:25:44.017  {
00:25:44.017    "params": {
00:25:44.017      "name": "Nvme$subsystem",
00:25:44.017      "trtype": "$TEST_TRANSPORT",
00:25:44.017      "traddr": "$NVMF_FIRST_TARGET_IP",
00:25:44.017      "adrfam": "ipv4",
00:25:44.017      "trsvcid": "$NVMF_PORT",
00:25:44.017      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:25:44.017      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:25:44.017      "hdgst": ${hdgst:-false},
00:25:44.017      "ddgst": ${ddgst:-false}
00:25:44.017    },
00:25:44.017    "method": "bdev_nvme_attach_controller"
00:25:44.017  }
00:25:44.017  EOF
00:25:44.017  )")
00:25:44.017     04:16:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat
00:25:44.017    04:16:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq .
00:25:44.017     04:16:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=,
00:25:44.017     04:16:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:25:44.017    "params": {
00:25:44.017      "name": "Nvme1",
00:25:44.017      "trtype": "tcp",
00:25:44.017      "traddr": "10.0.0.2",
00:25:44.017      "adrfam": "ipv4",
00:25:44.017      "trsvcid": "4420",
00:25:44.017      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:25:44.017      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:25:44.017      "hdgst": false,
00:25:44.017      "ddgst": false
00:25:44.017    },
00:25:44.017    "method": "bdev_nvme_attach_controller"
00:25:44.017  }'
00:25:44.017  [2024-12-09 04:16:12.471769] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:25:44.017  [2024-12-09 04:16:12.471838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid341384 ]
00:25:44.018  [2024-12-09 04:16:12.539210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:44.275  [2024-12-09 04:16:12.598428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:44.275  Running I/O for 15 seconds...
00:25:46.582       8059.00 IOPS,    31.48 MiB/s
[2024-12-09T03:16:15.725Z]      8167.00 IOPS,    31.90 MiB/s
[2024-12-09T03:16:15.725Z]  04:16:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 341089
00:25:47.149   04:16:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:25:47.149  [2024-12-09 04:16:15.436000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.149  [2024-12-09 04:16:15.436061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.149  [2024-12-09 04:16:15.436106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:38544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.149  [2024-12-09 04:16:15.436123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.149  [2024-12-09 04:16:15.436142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:38552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.149  [2024-12-09 04:16:15.436157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.149  [2024-12-09 04:16:15.436173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:38560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.149  [2024-12-09 04:16:15.436188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.149  [2024-12-09 04:16:15.436204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:38568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.149  [2024-12-09 04:16:15.436219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.149  [2024-12-09 04:16:15.436235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:38576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.149  [2024-12-09 04:16:15.436249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.149  [2024-12-09 04:16:15.436290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:38584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.149  [2024-12-09 04:16:15.436305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.149  [2024-12-09 04:16:15.436323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.149  [2024-12-09 04:16:15.436347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.149  [2024-12-09 04:16:15.436364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:39304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.149  [2024-12-09 04:16:15.436378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.149  [2024-12-09 04:16:15.436396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.149  [2024-12-09 04:16:15.436410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.149  [2024-12-09 04:16:15.436426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:39320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.149  [2024-12-09 04:16:15.436441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.149  [2024-12-09 04:16:15.436457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:39328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.149  [2024-12-09 04:16:15.436472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150  [2024-12-09 04:16:15.436487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150  [2024-12-09 04:16:15.436502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150  [2024-12-09 04:16:15.436518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150  [2024-12-09 04:16:15.436532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150  [2024-12-09 04:16:15.436547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150  [2024-12-09 04:16:15.436585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150  [2024-12-09 04:16:15.436601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150  [2024-12-09 04:16:15.436614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150  [2024-12-09 04:16:15.436628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150  [2024-12-09 04:16:15.436656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150  [2024-12-09 04:16:15.436671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150  [2024-12-09 04:16:15.436684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150  [2024-12-09 04:16:15.436699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150  [2024-12-09 04:16:15.436711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150  [2024-12-09 04:16:15.436726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150  [2024-12-09 04:16:15.436738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150  [2024-12-09 04:16:15.436756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150  [2024-12-09 04:16:15.436770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150  [2024-12-09 04:16:15.436784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150  [2024-12-09 04:16:15.436797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150  [2024-12-09 04:16:15.436827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150  [2024-12-09 04:16:15.436840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150  [2024-12-09 04:16:15.436853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150  [2024-12-09 04:16:15.436865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150  [2024-12-09 04:16:15.436879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150  [2024-12-09 04:16:15.436892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150  [2024-12-09 04:16:15.436905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150  [2024-12-09 04:16:15.436917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150  [2024-12-09 04:16:15.436946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150  [2024-12-09 04:16:15.436957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150  [2024-12-09 04:16:15.436970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150  [2024-12-09 04:16:15.436983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150  [2024-12-09 04:16:15.436996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150  [2024-12-09 04:16:15.437008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150  [2024-12-09 04:16:15.437021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150  [2024-12-09 04:16:15.437032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150  [2024-12-09 04:16:15.437046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150  [2024-12-09 04:16:15.437058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150  [2024-12-09 04:16:15.437071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150  [2024-12-09 04:16:15.437083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150  [2024-12-09 04:16:15.437096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150  [2024-12-09 04:16:15.437109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150  [2024-12-09 04:16:15.437126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:38600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.150  [2024-12-09 04:16:15.437139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150  [2024-12-09 04:16:15.437153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:38608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.150  [2024-12-09 04:16:15.437165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150  [2024-12-09 04:16:15.437178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:38616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.150  [2024-12-09 04:16:15.437190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150  [2024-12-09 04:16:15.437203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.150  [2024-12-09 04:16:15.437215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150  [2024-12-09 04:16:15.437228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.150  [2024-12-09 04:16:15.437240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150  [2024-12-09 04:16:15.437268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:38640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.150  [2024-12-09 04:16:15.437291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150  [2024-12-09 04:16:15.437308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:38648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.150  [2024-12-09 04:16:15.437321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150  [2024-12-09 04:16:15.437337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:38656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.150  [2024-12-09 04:16:15.437350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150  [2024-12-09 04:16:15.437366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.150  [2024-12-09 04:16:15.437379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150  [2024-12-09 04:16:15.437394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:38672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.150  [2024-12-09 04:16:15.437407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150  [2024-12-09 04:16:15.437422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:38680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.150  [2024-12-09 04:16:15.437436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150  [2024-12-09 04:16:15.437451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:38688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.150  [2024-12-09 04:16:15.437464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150  [2024-12-09 04:16:15.437479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.150  [2024-12-09 04:16:15.437497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150  [2024-12-09 04:16:15.437513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:38704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.150  [2024-12-09 04:16:15.437526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150  [2024-12-09 04:16:15.437541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:38712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.150  [2024-12-09 04:16:15.437555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150  [2024-12-09 04:16:15.437584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:38720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.150  [2024-12-09 04:16:15.437597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151  [2024-12-09 04:16:15.437612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:38728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151  [2024-12-09 04:16:15.437624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151  [2024-12-09 04:16:15.437653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:38736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151  [2024-12-09 04:16:15.437666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151  [2024-12-09 04:16:15.437679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:38744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151  [2024-12-09 04:16:15.437691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151  [2024-12-09 04:16:15.437704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151  [2024-12-09 04:16:15.437716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151  [2024-12-09 04:16:15.437730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:38760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151  [2024-12-09 04:16:15.437742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151  [2024-12-09 04:16:15.437755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:38768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151  [2024-12-09 04:16:15.437767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151  [2024-12-09 04:16:15.437781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:38776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151  [2024-12-09 04:16:15.437793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151  [2024-12-09 04:16:15.437806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:38784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151  [2024-12-09 04:16:15.437818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151  [2024-12-09 04:16:15.437831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:38792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151  [2024-12-09 04:16:15.437843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151  [2024-12-09 04:16:15.437859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:38800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151  [2024-12-09 04:16:15.437871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151  [2024-12-09 04:16:15.437885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151  [2024-12-09 04:16:15.437897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151  [2024-12-09 04:16:15.437911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:38816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151  [2024-12-09 04:16:15.437922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151  [2024-12-09 04:16:15.437936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:38824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151  [2024-12-09 04:16:15.437947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151  [2024-12-09 04:16:15.437961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:38832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151  [2024-12-09 04:16:15.437972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151  [2024-12-09 04:16:15.437986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:38840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151  [2024-12-09 04:16:15.437997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151  [2024-12-09 04:16:15.438010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:38848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151  [2024-12-09 04:16:15.438023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151  [2024-12-09 04:16:15.438036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:38856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151  [2024-12-09 04:16:15.438048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151  [2024-12-09 04:16:15.438062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:38864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151  [2024-12-09 04:16:15.438074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151  [2024-12-09 04:16:15.438088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:38872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151  [2024-12-09 04:16:15.438099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151  [2024-12-09 04:16:15.438113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151  [2024-12-09 04:16:15.438125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151  [2024-12-09 04:16:15.438138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:38888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151  [2024-12-09 04:16:15.438150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151  [2024-12-09 04:16:15.438163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:38896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151  [2024-12-09 04:16:15.438178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151  [2024-12-09 04:16:15.438192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151  [2024-12-09 04:16:15.438204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151  [2024-12-09 04:16:15.438217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:38912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151  [2024-12-09 04:16:15.438229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151  [2024-12-09 04:16:15.438243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:38920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151  [2024-12-09 04:16:15.438270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151  [2024-12-09 04:16:15.438297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:38928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151  [2024-12-09 04:16:15.438311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151  [2024-12-09 04:16:15.438327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:38936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151  [2024-12-09 04:16:15.438340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151  [2024-12-09 04:16:15.438355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:38944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151  [2024-12-09 04:16:15.438369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151  [2024-12-09 04:16:15.438384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:38952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151  [2024-12-09 04:16:15.438397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151  [2024-12-09 04:16:15.438413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:38960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151  [2024-12-09 04:16:15.438426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151  [2024-12-09 04:16:15.438441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:38968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151  [2024-12-09 04:16:15.438454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151  [2024-12-09 04:16:15.438470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:38976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151  [2024-12-09 04:16:15.438483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151  [2024-12-09 04:16:15.438499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.151  [2024-12-09 04:16:15.438513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151  [2024-12-09 04:16:15.438528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.151  [2024-12-09 04:16:15.438542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151  [2024-12-09 04:16:15.438587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.151  [2024-12-09 04:16:15.438601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151  [2024-12-09 04:16:15.438615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.151  [2024-12-09 04:16:15.438650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151  [2024-12-09 04:16:15.438664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.151  [2024-12-09 04:16:15.438676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151  [2024-12-09 04:16:15.438689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.151  [2024-12-09 04:16:15.438701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152  [2024-12-09 04:16:15.438715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:39552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.152  [2024-12-09 04:16:15.438727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152  [2024-12-09 04:16:15.438740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152  [2024-12-09 04:16:15.438752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152  [2024-12-09 04:16:15.438765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:38992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152  [2024-12-09 04:16:15.438778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152  [2024-12-09 04:16:15.438791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:39000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152  [2024-12-09 04:16:15.438803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152  [2024-12-09 04:16:15.438816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:39008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152  [2024-12-09 04:16:15.438828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152  [2024-12-09 04:16:15.438842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152  [2024-12-09 04:16:15.438854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152  [2024-12-09 04:16:15.438868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:39024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152  [2024-12-09 04:16:15.438880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152  [2024-12-09 04:16:15.438893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:39032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152  [2024-12-09 04:16:15.438905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152  [2024-12-09 04:16:15.438918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:39040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152  [2024-12-09 04:16:15.438930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152  [2024-12-09 04:16:15.438947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:39048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152  [2024-12-09 04:16:15.438959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152  [2024-12-09 04:16:15.438978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152  [2024-12-09 04:16:15.438991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152  [2024-12-09 04:16:15.439004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:39064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152  [2024-12-09 04:16:15.439017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152  [2024-12-09 04:16:15.439030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:39072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152  [2024-12-09 04:16:15.439042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152  [2024-12-09 04:16:15.439055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152  [2024-12-09 04:16:15.439067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152  [2024-12-09 04:16:15.439081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:39088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152  [2024-12-09 04:16:15.439093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152  [2024-12-09 04:16:15.439106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:39096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152  [2024-12-09 04:16:15.439118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152  [2024-12-09 04:16:15.439131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:39104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152  [2024-12-09 04:16:15.439143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152  [2024-12-09 04:16:15.439157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:39112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152  [2024-12-09 04:16:15.439169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152  [2024-12-09 04:16:15.439182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:39120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152  [2024-12-09 04:16:15.439194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152  [2024-12-09 04:16:15.439207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:39128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152  [2024-12-09 04:16:15.439219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152  [2024-12-09 04:16:15.439233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:39136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152  [2024-12-09 04:16:15.439245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152  [2024-12-09 04:16:15.439281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:39144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152  [2024-12-09 04:16:15.439301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152  [2024-12-09 04:16:15.439317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:39152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152  [2024-12-09 04:16:15.439331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152  [2024-12-09 04:16:15.439346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152  [2024-12-09 04:16:15.439360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152  [2024-12-09 04:16:15.439375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:39168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152  [2024-12-09 04:16:15.439388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152  [2024-12-09 04:16:15.439403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:39176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152  [2024-12-09 04:16:15.439417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152  [2024-12-09 04:16:15.439433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:39184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152  [2024-12-09 04:16:15.439446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152  [2024-12-09 04:16:15.439461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:39192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152  [2024-12-09 04:16:15.439475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152  [2024-12-09 04:16:15.439489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:39200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152  [2024-12-09 04:16:15.439502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152  [2024-12-09 04:16:15.439517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:39208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152  [2024-12-09 04:16:15.439531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152  [2024-12-09 04:16:15.439546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:39216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152  [2024-12-09 04:16:15.439574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152  [2024-12-09 04:16:15.439588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:39224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152  [2024-12-09 04:16:15.439600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152  [2024-12-09 04:16:15.439613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:39232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152  [2024-12-09 04:16:15.439640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152  [2024-12-09 04:16:15.439653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:39240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152  [2024-12-09 04:16:15.439665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152  [2024-12-09 04:16:15.439681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152  [2024-12-09 04:16:15.439693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152  [2024-12-09 04:16:15.439707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152  [2024-12-09 04:16:15.439718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152  [2024-12-09 04:16:15.439731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:39264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152  [2024-12-09 04:16:15.439743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152  [2024-12-09 04:16:15.439756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:39272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152  [2024-12-09 04:16:15.439767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152  [2024-12-09 04:16:15.439780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:39280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.153  [2024-12-09 04:16:15.439792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.153  [2024-12-09 04:16:15.439805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:39288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.153  [2024-12-09 04:16:15.439816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.153  [2024-12-09 04:16:15.439829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e673a0 is same with the state(6) to be set
00:25:47.153  [2024-12-09 04:16:15.439843] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:47.153  [2024-12-09 04:16:15.439853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:47.153  [2024-12-09 04:16:15.439863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39296 len:8 PRP1 0x0 PRP2 0x0
00:25:47.153  [2024-12-09 04:16:15.439874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.153  [2024-12-09 04:16:15.439998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:47.153  [2024-12-09 04:16:15.440019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.153  [2024-12-09 04:16:15.440033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:47.153  [2024-12-09 04:16:15.440046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.153  [2024-12-09 04:16:15.440074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:47.153  [2024-12-09 04:16:15.440087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.153  [2024-12-09 04:16:15.440100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:47.153  [2024-12-09 04:16:15.440113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.153  [2024-12-09 04:16:15.440125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.153  [2024-12-09 04:16:15.443401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.153  [2024-12-09 04:16:15.443443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.153  [2024-12-09 04:16:15.444090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.153  [2024-12-09 04:16:15.444120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.153  [2024-12-09 04:16:15.444136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.153  [2024-12-09 04:16:15.444392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.153  [2024-12-09 04:16:15.444621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.153  [2024-12-09 04:16:15.444655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.153  [2024-12-09 04:16:15.444670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.153  [2024-12-09 04:16:15.444684] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.153  [2024-12-09 04:16:15.456797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.153  [2024-12-09 04:16:15.457172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.153  [2024-12-09 04:16:15.457216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.153  [2024-12-09 04:16:15.457232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.153  [2024-12-09 04:16:15.457505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.153  [2024-12-09 04:16:15.457738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.153  [2024-12-09 04:16:15.457757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.153  [2024-12-09 04:16:15.457769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.153  [2024-12-09 04:16:15.457780] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.153  [2024-12-09 04:16:15.469929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.153  [2024-12-09 04:16:15.470338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.153  [2024-12-09 04:16:15.470366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.153  [2024-12-09 04:16:15.470382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.153  [2024-12-09 04:16:15.470610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.153  [2024-12-09 04:16:15.470821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.153  [2024-12-09 04:16:15.470839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.153  [2024-12-09 04:16:15.470851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.153  [2024-12-09 04:16:15.470862] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.153  [2024-12-09 04:16:15.483156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.153  [2024-12-09 04:16:15.483516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.153  [2024-12-09 04:16:15.483544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.153  [2024-12-09 04:16:15.483560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.153  [2024-12-09 04:16:15.483793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.153  [2024-12-09 04:16:15.484004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.153  [2024-12-09 04:16:15.484023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.153  [2024-12-09 04:16:15.484035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.153  [2024-12-09 04:16:15.484046] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.153  [2024-12-09 04:16:15.496349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.153  [2024-12-09 04:16:15.496800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.153  [2024-12-09 04:16:15.496842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.153  [2024-12-09 04:16:15.496858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.153  [2024-12-09 04:16:15.497101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.153  [2024-12-09 04:16:15.497356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.153  [2024-12-09 04:16:15.497377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.153  [2024-12-09 04:16:15.497391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.153  [2024-12-09 04:16:15.497403] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.153  [2024-12-09 04:16:15.509467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.153  [2024-12-09 04:16:15.509851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.153  [2024-12-09 04:16:15.509892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.153  [2024-12-09 04:16:15.509907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.153  [2024-12-09 04:16:15.510158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.153  [2024-12-09 04:16:15.510381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.153  [2024-12-09 04:16:15.510401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.153  [2024-12-09 04:16:15.510413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.153  [2024-12-09 04:16:15.510425] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.153  [2024-12-09 04:16:15.522647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.153  [2024-12-09 04:16:15.523015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.153  [2024-12-09 04:16:15.523057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.153  [2024-12-09 04:16:15.523072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.153  [2024-12-09 04:16:15.523356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.153  [2024-12-09 04:16:15.523565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.153  [2024-12-09 04:16:15.523584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.153  [2024-12-09 04:16:15.523597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.153  [2024-12-09 04:16:15.523609] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.153  [2024-12-09 04:16:15.535745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.153  [2024-12-09 04:16:15.536246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.154  [2024-12-09 04:16:15.536296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.154  [2024-12-09 04:16:15.536314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.154  [2024-12-09 04:16:15.536556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.154  [2024-12-09 04:16:15.536784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.154  [2024-12-09 04:16:15.536802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.154  [2024-12-09 04:16:15.536814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.154  [2024-12-09 04:16:15.536825] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.154  [2024-12-09 04:16:15.548773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.154  [2024-12-09 04:16:15.549159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.154  [2024-12-09 04:16:15.549201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.154  [2024-12-09 04:16:15.549217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.154  [2024-12-09 04:16:15.549478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.154  [2024-12-09 04:16:15.549710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.154  [2024-12-09 04:16:15.549729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.154  [2024-12-09 04:16:15.549741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.154  [2024-12-09 04:16:15.549752] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.154  [2024-12-09 04:16:15.561848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.154  [2024-12-09 04:16:15.562184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.154  [2024-12-09 04:16:15.562212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.154  [2024-12-09 04:16:15.562227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.154  [2024-12-09 04:16:15.562497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.154  [2024-12-09 04:16:15.562713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.154  [2024-12-09 04:16:15.562737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.154  [2024-12-09 04:16:15.562750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.154  [2024-12-09 04:16:15.562761] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.154  [2024-12-09 04:16:15.575097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.154  [2024-12-09 04:16:15.575467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.154  [2024-12-09 04:16:15.575510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.154  [2024-12-09 04:16:15.575526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.154  [2024-12-09 04:16:15.575777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.154  [2024-12-09 04:16:15.575987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.154  [2024-12-09 04:16:15.576005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.154  [2024-12-09 04:16:15.576017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.154  [2024-12-09 04:16:15.576029] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.154  [2024-12-09 04:16:15.588332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.154  [2024-12-09 04:16:15.588730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.154  [2024-12-09 04:16:15.588771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.154  [2024-12-09 04:16:15.588786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.154  [2024-12-09 04:16:15.589038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.154  [2024-12-09 04:16:15.589249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.154  [2024-12-09 04:16:15.589291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.154  [2024-12-09 04:16:15.589304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.154  [2024-12-09 04:16:15.589331] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.154  [2024-12-09 04:16:15.601414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.154  [2024-12-09 04:16:15.601782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.154  [2024-12-09 04:16:15.601823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.154  [2024-12-09 04:16:15.601839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.154  [2024-12-09 04:16:15.602090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.154  [2024-12-09 04:16:15.602326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.154  [2024-12-09 04:16:15.602360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.154  [2024-12-09 04:16:15.602374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.154  [2024-12-09 04:16:15.602391] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.154  [2024-12-09 04:16:15.614586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.154  [2024-12-09 04:16:15.614955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.154  [2024-12-09 04:16:15.614998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.154  [2024-12-09 04:16:15.615014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.154  [2024-12-09 04:16:15.615280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.154  [2024-12-09 04:16:15.615503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.154  [2024-12-09 04:16:15.615523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.154  [2024-12-09 04:16:15.615536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.154  [2024-12-09 04:16:15.615548] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.154  [2024-12-09 04:16:15.627965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.154  [2024-12-09 04:16:15.628276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.154  [2024-12-09 04:16:15.628318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.154  [2024-12-09 04:16:15.628334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.154  [2024-12-09 04:16:15.628559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.154  [2024-12-09 04:16:15.628781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.154  [2024-12-09 04:16:15.628800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.154  [2024-12-09 04:16:15.628812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.154  [2024-12-09 04:16:15.628824] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.154  [2024-12-09 04:16:15.641063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.154  [2024-12-09 04:16:15.641497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.154  [2024-12-09 04:16:15.641539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.154  [2024-12-09 04:16:15.641557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.154  [2024-12-09 04:16:15.641799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.154  [2024-12-09 04:16:15.642010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.154  [2024-12-09 04:16:15.642028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.155  [2024-12-09 04:16:15.642039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.155  [2024-12-09 04:16:15.642050] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.155  [2024-12-09 04:16:15.654212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.155  [2024-12-09 04:16:15.654607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.155  [2024-12-09 04:16:15.654650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.155  [2024-12-09 04:16:15.654666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.155  [2024-12-09 04:16:15.654928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.155  [2024-12-09 04:16:15.655124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.155  [2024-12-09 04:16:15.655143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.155  [2024-12-09 04:16:15.655156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.155  [2024-12-09 04:16:15.655168] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.155  [2024-12-09 04:16:15.667550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.155  [2024-12-09 04:16:15.667894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.155  [2024-12-09 04:16:15.667921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.155  [2024-12-09 04:16:15.667936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.155  [2024-12-09 04:16:15.668157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.155  [2024-12-09 04:16:15.668401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.155  [2024-12-09 04:16:15.668422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.155  [2024-12-09 04:16:15.668434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.155  [2024-12-09 04:16:15.668446] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.155  [2024-12-09 04:16:15.681101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.155  [2024-12-09 04:16:15.681530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.155  [2024-12-09 04:16:15.681559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.155  [2024-12-09 04:16:15.681575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.155  [2024-12-09 04:16:15.681838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.155  [2024-12-09 04:16:15.682057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.155  [2024-12-09 04:16:15.682077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.155  [2024-12-09 04:16:15.682090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.155  [2024-12-09 04:16:15.682103] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.155  [2024-12-09 04:16:15.694418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.155  [2024-12-09 04:16:15.694830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.155  [2024-12-09 04:16:15.694857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.155  [2024-12-09 04:16:15.694873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.155  [2024-12-09 04:16:15.695110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.155  [2024-12-09 04:16:15.695355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.155  [2024-12-09 04:16:15.695377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.155  [2024-12-09 04:16:15.695391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.155  [2024-12-09 04:16:15.695404] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.155  [2024-12-09 04:16:15.708199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.155  [2024-12-09 04:16:15.708564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.155  [2024-12-09 04:16:15.708617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.155  [2024-12-09 04:16:15.708633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.155  [2024-12-09 04:16:15.708869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.155  [2024-12-09 04:16:15.709064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.155  [2024-12-09 04:16:15.709083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.155  [2024-12-09 04:16:15.709095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.155  [2024-12-09 04:16:15.709106] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.414  [2024-12-09 04:16:15.722073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.414  [2024-12-09 04:16:15.722446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.414  [2024-12-09 04:16:15.722475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.414  [2024-12-09 04:16:15.722492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.414  [2024-12-09 04:16:15.722727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.414  [2024-12-09 04:16:15.722946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.414  [2024-12-09 04:16:15.722965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.414  [2024-12-09 04:16:15.722977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.414  [2024-12-09 04:16:15.722989] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.414  [2024-12-09 04:16:15.735571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.414  [2024-12-09 04:16:15.735957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.414  [2024-12-09 04:16:15.735986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.414  [2024-12-09 04:16:15.736002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.414  [2024-12-09 04:16:15.736247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.414  [2024-12-09 04:16:15.736480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.414  [2024-12-09 04:16:15.736505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.414  [2024-12-09 04:16:15.736519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.414  [2024-12-09 04:16:15.736531] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.414  [2024-12-09 04:16:15.748749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.414  [2024-12-09 04:16:15.749111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.414  [2024-12-09 04:16:15.749138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.414  [2024-12-09 04:16:15.749153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.414  [2024-12-09 04:16:15.749378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.414  [2024-12-09 04:16:15.749596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.414  [2024-12-09 04:16:15.749615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.414  [2024-12-09 04:16:15.749626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.414  [2024-12-09 04:16:15.749638] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.414  [2024-12-09 04:16:15.761927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.414  [2024-12-09 04:16:15.762300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.414  [2024-12-09 04:16:15.762348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.414  [2024-12-09 04:16:15.762363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.414  [2024-12-09 04:16:15.762600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.414  [2024-12-09 04:16:15.762812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.414  [2024-12-09 04:16:15.762831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.414  [2024-12-09 04:16:15.762843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.414  [2024-12-09 04:16:15.762854] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.414  [2024-12-09 04:16:15.775172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.414  [2024-12-09 04:16:15.775604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.414  [2024-12-09 04:16:15.775630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.414  [2024-12-09 04:16:15.775646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.414  [2024-12-09 04:16:15.775898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.414  [2024-12-09 04:16:15.776094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.414  [2024-12-09 04:16:15.776112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.414  [2024-12-09 04:16:15.776124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.414  [2024-12-09 04:16:15.776140] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.414  [2024-12-09 04:16:15.788495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.414  [2024-12-09 04:16:15.788945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.414  [2024-12-09 04:16:15.788987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.414  [2024-12-09 04:16:15.789002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.414  [2024-12-09 04:16:15.789258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.414  [2024-12-09 04:16:15.789489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.414  [2024-12-09 04:16:15.789509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.414  [2024-12-09 04:16:15.789522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.414  [2024-12-09 04:16:15.789534] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.414  [2024-12-09 04:16:15.801759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.414  [2024-12-09 04:16:15.802168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.414  [2024-12-09 04:16:15.802233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.414  [2024-12-09 04:16:15.802248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.414  [2024-12-09 04:16:15.802509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.414  [2024-12-09 04:16:15.802743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.414  [2024-12-09 04:16:15.802762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.414  [2024-12-09 04:16:15.802773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.414  [2024-12-09 04:16:15.802785] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.414  [2024-12-09 04:16:15.814782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.414  [2024-12-09 04:16:15.815152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.414  [2024-12-09 04:16:15.815195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.414  [2024-12-09 04:16:15.815211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.414  [2024-12-09 04:16:15.815479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.414  [2024-12-09 04:16:15.815712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.414  [2024-12-09 04:16:15.815731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.414  [2024-12-09 04:16:15.815743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.414  [2024-12-09 04:16:15.815754] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.415  [2024-12-09 04:16:15.827945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.415  [2024-12-09 04:16:15.828345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.415  [2024-12-09 04:16:15.828373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.415  [2024-12-09 04:16:15.828389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.415  [2024-12-09 04:16:15.828614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.415  [2024-12-09 04:16:15.828844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.415  [2024-12-09 04:16:15.828862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.415  [2024-12-09 04:16:15.828874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.415  [2024-12-09 04:16:15.828886] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.415  [2024-12-09 04:16:15.841046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.415  [2024-12-09 04:16:15.841448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.415  [2024-12-09 04:16:15.841475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.415  [2024-12-09 04:16:15.841491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.415  [2024-12-09 04:16:15.841718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.415  [2024-12-09 04:16:15.841935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.415  [2024-12-09 04:16:15.841954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.415  [2024-12-09 04:16:15.841965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.415  [2024-12-09 04:16:15.841976] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.415       7067.00 IOPS,    27.61 MiB/s
[2024-12-09T03:16:15.991Z] [2024-12-09 04:16:15.854106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.415  [2024-12-09 04:16:15.854480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.415  [2024-12-09 04:16:15.854508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.415  [2024-12-09 04:16:15.854524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.415  [2024-12-09 04:16:15.854762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.415  [2024-12-09 04:16:15.854973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.415  [2024-12-09 04:16:15.854991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.415  [2024-12-09 04:16:15.855004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.415  [2024-12-09 04:16:15.855014] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.415  [2024-12-09 04:16:15.867259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.415  [2024-12-09 04:16:15.867633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.415  [2024-12-09 04:16:15.867662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.415  [2024-12-09 04:16:15.867678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.415  [2024-12-09 04:16:15.867927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.415  [2024-12-09 04:16:15.868138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.415  [2024-12-09 04:16:15.868156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.415  [2024-12-09 04:16:15.868169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.415  [2024-12-09 04:16:15.868180] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.415  [2024-12-09 04:16:15.880538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.415  [2024-12-09 04:16:15.881016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.415  [2024-12-09 04:16:15.881057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.415  [2024-12-09 04:16:15.881073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.415  [2024-12-09 04:16:15.881329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.415  [2024-12-09 04:16:15.881544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.415  [2024-12-09 04:16:15.881564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.415  [2024-12-09 04:16:15.881576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.415  [2024-12-09 04:16:15.881588] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.415  [2024-12-09 04:16:15.893790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.415  [2024-12-09 04:16:15.894151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.415  [2024-12-09 04:16:15.894191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.415  [2024-12-09 04:16:15.894207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.415  [2024-12-09 04:16:15.894478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.415  [2024-12-09 04:16:15.894694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.415  [2024-12-09 04:16:15.894713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.415  [2024-12-09 04:16:15.894725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.415  [2024-12-09 04:16:15.894736] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.415  [2024-12-09 04:16:15.906879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.415  [2024-12-09 04:16:15.907246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.415  [2024-12-09 04:16:15.907280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.415  [2024-12-09 04:16:15.907298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.415  [2024-12-09 04:16:15.907536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.415  [2024-12-09 04:16:15.907748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.415  [2024-12-09 04:16:15.907771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.415  [2024-12-09 04:16:15.907784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.415  [2024-12-09 04:16:15.907795] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.415  [2024-12-09 04:16:15.920018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.415  [2024-12-09 04:16:15.920395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.415  [2024-12-09 04:16:15.920439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.415  [2024-12-09 04:16:15.920455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.415  [2024-12-09 04:16:15.920725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.415  [2024-12-09 04:16:15.920921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.415  [2024-12-09 04:16:15.920939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.415  [2024-12-09 04:16:15.920952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.415  [2024-12-09 04:16:15.920963] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.415  [2024-12-09 04:16:15.933236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.415  [2024-12-09 04:16:15.933649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.415  [2024-12-09 04:16:15.933691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.415  [2024-12-09 04:16:15.933708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.415  [2024-12-09 04:16:15.933946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.415  [2024-12-09 04:16:15.934141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.415  [2024-12-09 04:16:15.934159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.415  [2024-12-09 04:16:15.934171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.415  [2024-12-09 04:16:15.934183] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.415  [2024-12-09 04:16:15.946496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.415  [2024-12-09 04:16:15.946897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.415  [2024-12-09 04:16:15.946925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.415  [2024-12-09 04:16:15.946942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.415  [2024-12-09 04:16:15.947176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.416  [2024-12-09 04:16:15.947435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.416  [2024-12-09 04:16:15.947457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.416  [2024-12-09 04:16:15.947470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.416  [2024-12-09 04:16:15.947489] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.416  [2024-12-09 04:16:15.959796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.416  [2024-12-09 04:16:15.960162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.416  [2024-12-09 04:16:15.960204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.416  [2024-12-09 04:16:15.960220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.416  [2024-12-09 04:16:15.960477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.416  [2024-12-09 04:16:15.960712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.416  [2024-12-09 04:16:15.960730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.416  [2024-12-09 04:16:15.960742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.416  [2024-12-09 04:16:15.960753] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.416  [2024-12-09 04:16:15.972968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.416  [2024-12-09 04:16:15.973358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.416  [2024-12-09 04:16:15.973401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.416  [2024-12-09 04:16:15.973416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.416  [2024-12-09 04:16:15.973670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.416  [2024-12-09 04:16:15.973881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.416  [2024-12-09 04:16:15.973899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.416  [2024-12-09 04:16:15.973911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.416  [2024-12-09 04:16:15.973922] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.416  [2024-12-09 04:16:15.986461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.416  [2024-12-09 04:16:15.986908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.416  [2024-12-09 04:16:15.986936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.416  [2024-12-09 04:16:15.986952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.416  [2024-12-09 04:16:15.987185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.416  [2024-12-09 04:16:15.987432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.416  [2024-12-09 04:16:15.987453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.416  [2024-12-09 04:16:15.987466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.416  [2024-12-09 04:16:15.987478] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.674  [2024-12-09 04:16:15.999585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.674  [2024-12-09 04:16:16.000006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.674  [2024-12-09 04:16:16.000033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.674  [2024-12-09 04:16:16.000048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.674  [2024-12-09 04:16:16.000295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.674  [2024-12-09 04:16:16.000512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.674  [2024-12-09 04:16:16.000531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.674  [2024-12-09 04:16:16.000544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.674  [2024-12-09 04:16:16.000556] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.674  [2024-12-09 04:16:16.012750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.674  [2024-12-09 04:16:16.013167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.674  [2024-12-09 04:16:16.013228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.674  [2024-12-09 04:16:16.013244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.675  [2024-12-09 04:16:16.013511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.675  [2024-12-09 04:16:16.013740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.675  [2024-12-09 04:16:16.013758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.675  [2024-12-09 04:16:16.013770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.675  [2024-12-09 04:16:16.013782] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.675  [2024-12-09 04:16:16.026002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.675  [2024-12-09 04:16:16.026376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.675  [2024-12-09 04:16:16.026404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.675  [2024-12-09 04:16:16.026420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.675  [2024-12-09 04:16:16.026658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.675  [2024-12-09 04:16:16.026870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.675  [2024-12-09 04:16:16.026889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.675  [2024-12-09 04:16:16.026901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.675  [2024-12-09 04:16:16.026912] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.675  [2024-12-09 04:16:16.039118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.675  [2024-12-09 04:16:16.039471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.675  [2024-12-09 04:16:16.039499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.675  [2024-12-09 04:16:16.039515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.675  [2024-12-09 04:16:16.039760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.675  [2024-12-09 04:16:16.039958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.675  [2024-12-09 04:16:16.039976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.675  [2024-12-09 04:16:16.039988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.675  [2024-12-09 04:16:16.039999] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.675  [2024-12-09 04:16:16.052185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.675  [2024-12-09 04:16:16.052577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.675  [2024-12-09 04:16:16.052619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.675  [2024-12-09 04:16:16.052635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.675  [2024-12-09 04:16:16.052872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.675  [2024-12-09 04:16:16.053082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.675  [2024-12-09 04:16:16.053101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.675  [2024-12-09 04:16:16.053113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.675  [2024-12-09 04:16:16.053125] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.675  [2024-12-09 04:16:16.065350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.675  [2024-12-09 04:16:16.065713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.675  [2024-12-09 04:16:16.065740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.675  [2024-12-09 04:16:16.065755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.675  [2024-12-09 04:16:16.065993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.675  [2024-12-09 04:16:16.066205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.675  [2024-12-09 04:16:16.066223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.675  [2024-12-09 04:16:16.066235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.675  [2024-12-09 04:16:16.066247] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.675  [2024-12-09 04:16:16.078509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.675  [2024-12-09 04:16:16.078811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.675  [2024-12-09 04:16:16.078852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.675  [2024-12-09 04:16:16.078868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.675  [2024-12-09 04:16:16.079086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.675  [2024-12-09 04:16:16.079328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.675  [2024-12-09 04:16:16.079353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.675  [2024-12-09 04:16:16.079366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.675  [2024-12-09 04:16:16.079378] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.675  [2024-12-09 04:16:16.091587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.675  [2024-12-09 04:16:16.091971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.675  [2024-12-09 04:16:16.092012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.675  [2024-12-09 04:16:16.092028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.675  [2024-12-09 04:16:16.092256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.675  [2024-12-09 04:16:16.092495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.675  [2024-12-09 04:16:16.092515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.675  [2024-12-09 04:16:16.092527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.675  [2024-12-09 04:16:16.092539] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.675  [2024-12-09 04:16:16.104765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.675  [2024-12-09 04:16:16.105260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.675  [2024-12-09 04:16:16.105310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.675  [2024-12-09 04:16:16.105327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.675  [2024-12-09 04:16:16.105596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.675  [2024-12-09 04:16:16.105792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.675  [2024-12-09 04:16:16.105810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.675  [2024-12-09 04:16:16.105822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.675  [2024-12-09 04:16:16.105833] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.675  [2024-12-09 04:16:16.117832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.675  [2024-12-09 04:16:16.118151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.675  [2024-12-09 04:16:16.118177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.675  [2024-12-09 04:16:16.118192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.675  [2024-12-09 04:16:16.118440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.675  [2024-12-09 04:16:16.118660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.675  [2024-12-09 04:16:16.118679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.675  [2024-12-09 04:16:16.118691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.675  [2024-12-09 04:16:16.118706] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.675  [2024-12-09 04:16:16.131166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.675  [2024-12-09 04:16:16.131546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.675  [2024-12-09 04:16:16.131574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.675  [2024-12-09 04:16:16.131590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.675  [2024-12-09 04:16:16.131839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.675  [2024-12-09 04:16:16.132040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.675  [2024-12-09 04:16:16.132059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.675  [2024-12-09 04:16:16.132071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.675  [2024-12-09 04:16:16.132083] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.675  [2024-12-09 04:16:16.144358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.675  [2024-12-09 04:16:16.144745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.675  [2024-12-09 04:16:16.144787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.676  [2024-12-09 04:16:16.144803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.676  [2024-12-09 04:16:16.145074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.676  [2024-12-09 04:16:16.145298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.676  [2024-12-09 04:16:16.145333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.676  [2024-12-09 04:16:16.145347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.676  [2024-12-09 04:16:16.145359] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.676  [2024-12-09 04:16:16.157452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.676  [2024-12-09 04:16:16.157822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.676  [2024-12-09 04:16:16.157864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.676  [2024-12-09 04:16:16.157879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.676  [2024-12-09 04:16:16.158150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.676  [2024-12-09 04:16:16.158374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.676  [2024-12-09 04:16:16.158394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.676  [2024-12-09 04:16:16.158407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.676  [2024-12-09 04:16:16.158419] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.676  [2024-12-09 04:16:16.170596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.676  [2024-12-09 04:16:16.170962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.676  [2024-12-09 04:16:16.170989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.676  [2024-12-09 04:16:16.171019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.676  [2024-12-09 04:16:16.171264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.676  [2024-12-09 04:16:16.171504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.676  [2024-12-09 04:16:16.171525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.676  [2024-12-09 04:16:16.171538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.676  [2024-12-09 04:16:16.171550] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.676  [2024-12-09 04:16:16.183719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.676  [2024-12-09 04:16:16.184212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.676  [2024-12-09 04:16:16.184252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.676  [2024-12-09 04:16:16.184269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.676  [2024-12-09 04:16:16.184538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.676  [2024-12-09 04:16:16.184770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.676  [2024-12-09 04:16:16.184788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.676  [2024-12-09 04:16:16.184800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.676  [2024-12-09 04:16:16.184812] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.676  [2024-12-09 04:16:16.196756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.676  [2024-12-09 04:16:16.197201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.676  [2024-12-09 04:16:16.197229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.676  [2024-12-09 04:16:16.197245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.676  [2024-12-09 04:16:16.197472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.676  [2024-12-09 04:16:16.197694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.676  [2024-12-09 04:16:16.197715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.676  [2024-12-09 04:16:16.197728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.676  [2024-12-09 04:16:16.197741] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.676  [2024-12-09 04:16:16.209916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.676  [2024-12-09 04:16:16.210283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.676  [2024-12-09 04:16:16.210311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.676  [2024-12-09 04:16:16.210327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.676  [2024-12-09 04:16:16.210577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.676  [2024-12-09 04:16:16.210773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.676  [2024-12-09 04:16:16.210791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.676  [2024-12-09 04:16:16.210803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.676  [2024-12-09 04:16:16.210814] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.676  [2024-12-09 04:16:16.223184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.676  [2024-12-09 04:16:16.223642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.676  [2024-12-09 04:16:16.223684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.676  [2024-12-09 04:16:16.223700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.676  [2024-12-09 04:16:16.223955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.676  [2024-12-09 04:16:16.224166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.676  [2024-12-09 04:16:16.224184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.676  [2024-12-09 04:16:16.224196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.676  [2024-12-09 04:16:16.224207] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.676  [2024-12-09 04:16:16.236450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.676  [2024-12-09 04:16:16.236854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.676  [2024-12-09 04:16:16.236897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.676  [2024-12-09 04:16:16.236913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.676  [2024-12-09 04:16:16.237138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.676  [2024-12-09 04:16:16.237385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.676  [2024-12-09 04:16:16.237406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.676  [2024-12-09 04:16:16.237419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.676  [2024-12-09 04:16:16.237431] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.935  [2024-12-09 04:16:16.250049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.935  [2024-12-09 04:16:16.250495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.935  [2024-12-09 04:16:16.250524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.935  [2024-12-09 04:16:16.250541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.935  [2024-12-09 04:16:16.250783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.935  [2024-12-09 04:16:16.250994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.935  [2024-12-09 04:16:16.251018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.935  [2024-12-09 04:16:16.251031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.935  [2024-12-09 04:16:16.251042] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.935  [2024-12-09 04:16:16.263276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.935  [2024-12-09 04:16:16.263620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.935  [2024-12-09 04:16:16.263648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.935  [2024-12-09 04:16:16.263664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.935  [2024-12-09 04:16:16.263893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.935  [2024-12-09 04:16:16.264106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.935  [2024-12-09 04:16:16.264124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.935  [2024-12-09 04:16:16.264136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.935  [2024-12-09 04:16:16.264148] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.935  [2024-12-09 04:16:16.276423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.935  [2024-12-09 04:16:16.276843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.935  [2024-12-09 04:16:16.276884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.935  [2024-12-09 04:16:16.276901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.935  [2024-12-09 04:16:16.277145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.935  [2024-12-09 04:16:16.277375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.935  [2024-12-09 04:16:16.277396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.935  [2024-12-09 04:16:16.277409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.935  [2024-12-09 04:16:16.277421] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.935  [2024-12-09 04:16:16.289698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.935  [2024-12-09 04:16:16.290192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.935  [2024-12-09 04:16:16.290219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.935  [2024-12-09 04:16:16.290251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.935  [2024-12-09 04:16:16.290490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.936  [2024-12-09 04:16:16.290722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.936  [2024-12-09 04:16:16.290740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.936  [2024-12-09 04:16:16.290752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.936  [2024-12-09 04:16:16.290768] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.936  [2024-12-09 04:16:16.302932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.936  [2024-12-09 04:16:16.303387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.936  [2024-12-09 04:16:16.303429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.936  [2024-12-09 04:16:16.303446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.936  [2024-12-09 04:16:16.303696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.936  [2024-12-09 04:16:16.303892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.936  [2024-12-09 04:16:16.303911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.936  [2024-12-09 04:16:16.303923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.936  [2024-12-09 04:16:16.303935] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.936  [2024-12-09 04:16:16.316230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.936  [2024-12-09 04:16:16.316619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.936  [2024-12-09 04:16:16.316662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.936  [2024-12-09 04:16:16.316677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.936  [2024-12-09 04:16:16.316932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.936  [2024-12-09 04:16:16.317142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.936  [2024-12-09 04:16:16.317161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.936  [2024-12-09 04:16:16.317173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.936  [2024-12-09 04:16:16.317185] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.936  [2024-12-09 04:16:16.329432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.936  [2024-12-09 04:16:16.329813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.936  [2024-12-09 04:16:16.329854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.936  [2024-12-09 04:16:16.329870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.936  [2024-12-09 04:16:16.330095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.936  [2024-12-09 04:16:16.330335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.936  [2024-12-09 04:16:16.330369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.936  [2024-12-09 04:16:16.330383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.936  [2024-12-09 04:16:16.330395] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.936  [2024-12-09 04:16:16.342715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.936  [2024-12-09 04:16:16.343092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.936  [2024-12-09 04:16:16.343135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.936  [2024-12-09 04:16:16.343151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.936  [2024-12-09 04:16:16.343420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.936  [2024-12-09 04:16:16.343643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.936  [2024-12-09 04:16:16.343662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.936  [2024-12-09 04:16:16.343675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.936  [2024-12-09 04:16:16.343686] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.936  [2024-12-09 04:16:16.355886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.936  [2024-12-09 04:16:16.356312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.936  [2024-12-09 04:16:16.356340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.936  [2024-12-09 04:16:16.356354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.936  [2024-12-09 04:16:16.356606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.936  [2024-12-09 04:16:16.356817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.936  [2024-12-09 04:16:16.356835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.936  [2024-12-09 04:16:16.356847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.936  [2024-12-09 04:16:16.356858] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.936  [2024-12-09 04:16:16.368926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.936  [2024-12-09 04:16:16.369468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.936  [2024-12-09 04:16:16.369496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.936  [2024-12-09 04:16:16.369512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.936  [2024-12-09 04:16:16.369752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.936  [2024-12-09 04:16:16.369964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.936  [2024-12-09 04:16:16.369983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.936  [2024-12-09 04:16:16.369995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.936  [2024-12-09 04:16:16.370006] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.936  [2024-12-09 04:16:16.382155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.936  [2024-12-09 04:16:16.382552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.936  [2024-12-09 04:16:16.382596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.936  [2024-12-09 04:16:16.382612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.936  [2024-12-09 04:16:16.382881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.936  [2024-12-09 04:16:16.383077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.936  [2024-12-09 04:16:16.383096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.936  [2024-12-09 04:16:16.383108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.936  [2024-12-09 04:16:16.383119] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.936  [2024-12-09 04:16:16.395264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.936  [2024-12-09 04:16:16.395699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.936  [2024-12-09 04:16:16.395726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.936  [2024-12-09 04:16:16.395742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.936  [2024-12-09 04:16:16.395982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.936  [2024-12-09 04:16:16.396193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.936  [2024-12-09 04:16:16.396212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.936  [2024-12-09 04:16:16.396224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.936  [2024-12-09 04:16:16.396235] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.936  [2024-12-09 04:16:16.408512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.936  [2024-12-09 04:16:16.408848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.936  [2024-12-09 04:16:16.408876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.936  [2024-12-09 04:16:16.408891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.936  [2024-12-09 04:16:16.409111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.936  [2024-12-09 04:16:16.409350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.936  [2024-12-09 04:16:16.409371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.936  [2024-12-09 04:16:16.409384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.936  [2024-12-09 04:16:16.409396] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.936  [2024-12-09 04:16:16.421846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.936  [2024-12-09 04:16:16.422252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.936  [2024-12-09 04:16:16.422327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.936  [2024-12-09 04:16:16.422344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.937  [2024-12-09 04:16:16.422606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.937  [2024-12-09 04:16:16.422820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.937  [2024-12-09 04:16:16.422844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.937  [2024-12-09 04:16:16.422857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.937  [2024-12-09 04:16:16.422868] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.937  [2024-12-09 04:16:16.435036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.937  [2024-12-09 04:16:16.435385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.937  [2024-12-09 04:16:16.435414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.937  [2024-12-09 04:16:16.435430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.937  [2024-12-09 04:16:16.435662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.937  [2024-12-09 04:16:16.435873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.937  [2024-12-09 04:16:16.435892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.937  [2024-12-09 04:16:16.435904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.937  [2024-12-09 04:16:16.435915] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.937  [2024-12-09 04:16:16.448191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.937  [2024-12-09 04:16:16.448621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.937  [2024-12-09 04:16:16.448681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.937  [2024-12-09 04:16:16.448697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.937  [2024-12-09 04:16:16.448941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.937  [2024-12-09 04:16:16.449181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.937  [2024-12-09 04:16:16.449202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.937  [2024-12-09 04:16:16.449216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.937  [2024-12-09 04:16:16.449229] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.937  [2024-12-09 04:16:16.461525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.937  [2024-12-09 04:16:16.461896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.937  [2024-12-09 04:16:16.461939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.937  [2024-12-09 04:16:16.461955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.937  [2024-12-09 04:16:16.462225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.937  [2024-12-09 04:16:16.462454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.937  [2024-12-09 04:16:16.462475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.937  [2024-12-09 04:16:16.462488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.937  [2024-12-09 04:16:16.462505] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.937  [2024-12-09 04:16:16.474819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.937  [2024-12-09 04:16:16.475186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.937  [2024-12-09 04:16:16.475227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.937  [2024-12-09 04:16:16.475243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.937  [2024-12-09 04:16:16.475527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.937  [2024-12-09 04:16:16.475743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.937  [2024-12-09 04:16:16.475762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.937  [2024-12-09 04:16:16.475774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.937  [2024-12-09 04:16:16.475785] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.937  [2024-12-09 04:16:16.487950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.937  [2024-12-09 04:16:16.488409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.937  [2024-12-09 04:16:16.488438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.937  [2024-12-09 04:16:16.488454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.937  [2024-12-09 04:16:16.488713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.937  [2024-12-09 04:16:16.488909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.937  [2024-12-09 04:16:16.488928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.937  [2024-12-09 04:16:16.488940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.937  [2024-12-09 04:16:16.488951] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.937  [2024-12-09 04:16:16.501195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.937  [2024-12-09 04:16:16.501549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.937  [2024-12-09 04:16:16.501576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.937  [2024-12-09 04:16:16.501592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.937  [2024-12-09 04:16:16.501817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.937  [2024-12-09 04:16:16.502029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.937  [2024-12-09 04:16:16.502048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.937  [2024-12-09 04:16:16.502060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.937  [2024-12-09 04:16:16.502071] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.196  [2024-12-09 04:16:16.514724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.196  [2024-12-09 04:16:16.515109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.196  [2024-12-09 04:16:16.515138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.196  [2024-12-09 04:16:16.515154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.196  [2024-12-09 04:16:16.515383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.196  [2024-12-09 04:16:16.515662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.196  [2024-12-09 04:16:16.515682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.196  [2024-12-09 04:16:16.515695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.196  [2024-12-09 04:16:16.515707] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.196  [2024-12-09 04:16:16.527930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.196  [2024-12-09 04:16:16.528300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.196  [2024-12-09 04:16:16.528342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.196  [2024-12-09 04:16:16.528358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.196  [2024-12-09 04:16:16.528627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.196  [2024-12-09 04:16:16.528823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.197  [2024-12-09 04:16:16.528842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.197  [2024-12-09 04:16:16.528854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.197  [2024-12-09 04:16:16.528865] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.197  [2024-12-09 04:16:16.541195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.197  [2024-12-09 04:16:16.541622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.197  [2024-12-09 04:16:16.541649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.197  [2024-12-09 04:16:16.541664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.197  [2024-12-09 04:16:16.541904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.197  [2024-12-09 04:16:16.542116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.197  [2024-12-09 04:16:16.542134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.197  [2024-12-09 04:16:16.542146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.197  [2024-12-09 04:16:16.542157] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.197  [2024-12-09 04:16:16.554314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.197  [2024-12-09 04:16:16.554667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.197  [2024-12-09 04:16:16.554732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.197  [2024-12-09 04:16:16.554747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.197  [2024-12-09 04:16:16.554981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.197  [2024-12-09 04:16:16.555177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.197  [2024-12-09 04:16:16.555196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.197  [2024-12-09 04:16:16.555207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.197  [2024-12-09 04:16:16.555218] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.197  [2024-12-09 04:16:16.567551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.197  [2024-12-09 04:16:16.567905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.197  [2024-12-09 04:16:16.567947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.197  [2024-12-09 04:16:16.567962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.197  [2024-12-09 04:16:16.568213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.197  [2024-12-09 04:16:16.568475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.197  [2024-12-09 04:16:16.568497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.197  [2024-12-09 04:16:16.568511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.197  [2024-12-09 04:16:16.568523] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.197  [2024-12-09 04:16:16.580733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.197  [2024-12-09 04:16:16.581226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.197  [2024-12-09 04:16:16.581286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.197  [2024-12-09 04:16:16.581303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.197  [2024-12-09 04:16:16.581534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.197  [2024-12-09 04:16:16.581745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.197  [2024-12-09 04:16:16.581764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.197  [2024-12-09 04:16:16.581775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.197  [2024-12-09 04:16:16.581787] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.197  [2024-12-09 04:16:16.593791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.197  [2024-12-09 04:16:16.594269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.197  [2024-12-09 04:16:16.594327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.197  [2024-12-09 04:16:16.594343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.197  [2024-12-09 04:16:16.594607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.197  [2024-12-09 04:16:16.594803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.197  [2024-12-09 04:16:16.594826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.197  [2024-12-09 04:16:16.594839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.197  [2024-12-09 04:16:16.594850] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.197  [2024-12-09 04:16:16.606894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.197  [2024-12-09 04:16:16.607366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.197  [2024-12-09 04:16:16.607395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.197  [2024-12-09 04:16:16.607410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.197  [2024-12-09 04:16:16.607659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.197  [2024-12-09 04:16:16.607855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.197  [2024-12-09 04:16:16.607874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.197  [2024-12-09 04:16:16.607886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.197  [2024-12-09 04:16:16.607897] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.197  [2024-12-09 04:16:16.619962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.197  [2024-12-09 04:16:16.620332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.197  [2024-12-09 04:16:16.620375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.197  [2024-12-09 04:16:16.620391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.197  [2024-12-09 04:16:16.620641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.197  [2024-12-09 04:16:16.620854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.197  [2024-12-09 04:16:16.620872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.197  [2024-12-09 04:16:16.620885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.197  [2024-12-09 04:16:16.620896] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.197  [2024-12-09 04:16:16.633347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.197  [2024-12-09 04:16:16.633685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.197  [2024-12-09 04:16:16.633727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.198  [2024-12-09 04:16:16.633743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.198  [2024-12-09 04:16:16.633969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.198  [2024-12-09 04:16:16.634187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.198  [2024-12-09 04:16:16.634206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.198  [2024-12-09 04:16:16.634218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.198  [2024-12-09 04:16:16.634238] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.198  [2024-12-09 04:16:16.646444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.198  [2024-12-09 04:16:16.646811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.198  [2024-12-09 04:16:16.646853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.198  [2024-12-09 04:16:16.646870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.198  [2024-12-09 04:16:16.647139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.198  [2024-12-09 04:16:16.647380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.198  [2024-12-09 04:16:16.647408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.198  [2024-12-09 04:16:16.647421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.198  [2024-12-09 04:16:16.647433] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.198  [2024-12-09 04:16:16.659580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.198  [2024-12-09 04:16:16.659916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.198  [2024-12-09 04:16:16.659944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.198  [2024-12-09 04:16:16.659959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.198  [2024-12-09 04:16:16.660184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.198  [2024-12-09 04:16:16.660444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.198  [2024-12-09 04:16:16.660464] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.198  [2024-12-09 04:16:16.660477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.198  [2024-12-09 04:16:16.660489] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.198  [2024-12-09 04:16:16.672666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.198  [2024-12-09 04:16:16.673072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.198  [2024-12-09 04:16:16.673100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.198  [2024-12-09 04:16:16.673116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.198  [2024-12-09 04:16:16.673349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.198  [2024-12-09 04:16:16.673557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.198  [2024-12-09 04:16:16.673576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.198  [2024-12-09 04:16:16.673589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.198  [2024-12-09 04:16:16.673601] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.198  [2024-12-09 04:16:16.685851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.198  [2024-12-09 04:16:16.686346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.198  [2024-12-09 04:16:16.686388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.198  [2024-12-09 04:16:16.686405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.198  [2024-12-09 04:16:16.686657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.198  [2024-12-09 04:16:16.686852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.198  [2024-12-09 04:16:16.686870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.198  [2024-12-09 04:16:16.686882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.198  [2024-12-09 04:16:16.686893] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.198  [2024-12-09 04:16:16.699218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.198  [2024-12-09 04:16:16.699583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.198  [2024-12-09 04:16:16.699612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.198  [2024-12-09 04:16:16.699628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.198  [2024-12-09 04:16:16.699889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.198  [2024-12-09 04:16:16.700115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.198  [2024-12-09 04:16:16.700135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.198  [2024-12-09 04:16:16.700148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.198  [2024-12-09 04:16:16.700176] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.198  [2024-12-09 04:16:16.712523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.198  [2024-12-09 04:16:16.712828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.198  [2024-12-09 04:16:16.712868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.198  [2024-12-09 04:16:16.712883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.198  [2024-12-09 04:16:16.713086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.198  [2024-12-09 04:16:16.713356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.198  [2024-12-09 04:16:16.713377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.198  [2024-12-09 04:16:16.713389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.198  [2024-12-09 04:16:16.713401] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.198  [2024-12-09 04:16:16.725700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.198  [2024-12-09 04:16:16.726066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.198  [2024-12-09 04:16:16.726108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.198  [2024-12-09 04:16:16.726124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.198  [2024-12-09 04:16:16.726399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.198  [2024-12-09 04:16:16.726608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.198  [2024-12-09 04:16:16.726642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.198  [2024-12-09 04:16:16.726655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.199  [2024-12-09 04:16:16.726667] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.199  [2024-12-09 04:16:16.738824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.199  [2024-12-09 04:16:16.739319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.199  [2024-12-09 04:16:16.739362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.199  [2024-12-09 04:16:16.739379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.199  [2024-12-09 04:16:16.739620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.199  [2024-12-09 04:16:16.739837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.199  [2024-12-09 04:16:16.739856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.199  [2024-12-09 04:16:16.739868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.199  [2024-12-09 04:16:16.739880] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.199  [2024-12-09 04:16:16.751908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.199  [2024-12-09 04:16:16.752223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.199  [2024-12-09 04:16:16.752264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.199  [2024-12-09 04:16:16.752290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.199  [2024-12-09 04:16:16.752538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.199  [2024-12-09 04:16:16.752767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.199  [2024-12-09 04:16:16.752786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.199  [2024-12-09 04:16:16.752798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.199  [2024-12-09 04:16:16.752810] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.199  [2024-12-09 04:16:16.765078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.199  [2024-12-09 04:16:16.765473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.199  [2024-12-09 04:16:16.765524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.199  [2024-12-09 04:16:16.765540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.199  [2024-12-09 04:16:16.765766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.199  [2024-12-09 04:16:16.765978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.199  [2024-12-09 04:16:16.766002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.199  [2024-12-09 04:16:16.766015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.199  [2024-12-09 04:16:16.766026] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.458  [2024-12-09 04:16:16.778721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.458  [2024-12-09 04:16:16.779160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.458  [2024-12-09 04:16:16.779202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.458  [2024-12-09 04:16:16.779219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.458  [2024-12-09 04:16:16.779478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.458  [2024-12-09 04:16:16.779694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.458  [2024-12-09 04:16:16.779712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.458  [2024-12-09 04:16:16.779724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.458  [2024-12-09 04:16:16.779736] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.458  [2024-12-09 04:16:16.792018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.458  [2024-12-09 04:16:16.792383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.458  [2024-12-09 04:16:16.792412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.458  [2024-12-09 04:16:16.792428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.458  [2024-12-09 04:16:16.792672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.458  [2024-12-09 04:16:16.792884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.458  [2024-12-09 04:16:16.792902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.458  [2024-12-09 04:16:16.792914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.458  [2024-12-09 04:16:16.792925] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.458  [2024-12-09 04:16:16.805649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.458  [2024-12-09 04:16:16.806065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.458  [2024-12-09 04:16:16.806124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.459  [2024-12-09 04:16:16.806139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.459  [2024-12-09 04:16:16.806396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.459  [2024-12-09 04:16:16.806633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.459  [2024-12-09 04:16:16.806651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.459  [2024-12-09 04:16:16.806663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.459  [2024-12-09 04:16:16.806679] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.459  [2024-12-09 04:16:16.818890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.459  [2024-12-09 04:16:16.819230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.459  [2024-12-09 04:16:16.819257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.459  [2024-12-09 04:16:16.819296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.459  [2024-12-09 04:16:16.819544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.459  [2024-12-09 04:16:16.819773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.459  [2024-12-09 04:16:16.819793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.459  [2024-12-09 04:16:16.819805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.459  [2024-12-09 04:16:16.819817] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.459  [2024-12-09 04:16:16.832207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.459  [2024-12-09 04:16:16.832714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.459  [2024-12-09 04:16:16.832765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.459  [2024-12-09 04:16:16.832780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.459  [2024-12-09 04:16:16.833044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.459  [2024-12-09 04:16:16.833249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.459  [2024-12-09 04:16:16.833294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.459  [2024-12-09 04:16:16.833308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.459  [2024-12-09 04:16:16.833334] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.459  [2024-12-09 04:16:16.845538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.459  [2024-12-09 04:16:16.845977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.459  [2024-12-09 04:16:16.846044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.459  [2024-12-09 04:16:16.846060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.459  [2024-12-09 04:16:16.846303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.459  [2024-12-09 04:16:16.846519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.459  [2024-12-09 04:16:16.846538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.459  [2024-12-09 04:16:16.846550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.459  [2024-12-09 04:16:16.846562] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.459       5300.25 IOPS,    20.70 MiB/s
00:25:48.459  [2024-12-09 04:16:16.858863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.459  [2024-12-09 04:16:16.859234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.459  [2024-12-09 04:16:16.859285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.459  [2024-12-09 04:16:16.859303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.459  [2024-12-09 04:16:16.859541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.459  [2024-12-09 04:16:16.859755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.459  [2024-12-09 04:16:16.859773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.459  [2024-12-09 04:16:16.859785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.459  [2024-12-09 04:16:16.859796] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.459  [2024-12-09 04:16:16.872204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.459  [2024-12-09 04:16:16.872718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.459  [2024-12-09 04:16:16.872770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.459  [2024-12-09 04:16:16.872786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.459  [2024-12-09 04:16:16.873058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.459  [2024-12-09 04:16:16.873268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.459  [2024-12-09 04:16:16.873312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.459  [2024-12-09 04:16:16.873325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.459  [2024-12-09 04:16:16.873336] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.459  [2024-12-09 04:16:16.885501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.459  [2024-12-09 04:16:16.885959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.459  [2024-12-09 04:16:16.886010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.459  [2024-12-09 04:16:16.886025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.459  [2024-12-09 04:16:16.886301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.459  [2024-12-09 04:16:16.886517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.459  [2024-12-09 04:16:16.886536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.459  [2024-12-09 04:16:16.886549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.459  [2024-12-09 04:16:16.886561] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.459  [2024-12-09 04:16:16.899050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.459  [2024-12-09 04:16:16.899411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.459  [2024-12-09 04:16:16.899439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.459  [2024-12-09 04:16:16.899461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.459  [2024-12-09 04:16:16.899693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.460  [2024-12-09 04:16:16.899916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.460  [2024-12-09 04:16:16.899936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.460  [2024-12-09 04:16:16.899948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.460  [2024-12-09 04:16:16.899960] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.460  [2024-12-09 04:16:16.912658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.460  [2024-12-09 04:16:16.913077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.460  [2024-12-09 04:16:16.913129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.460  [2024-12-09 04:16:16.913145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.460  [2024-12-09 04:16:16.913397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.460  [2024-12-09 04:16:16.913630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.460  [2024-12-09 04:16:16.913649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.460  [2024-12-09 04:16:16.913660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.460  [2024-12-09 04:16:16.913671] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.460  [2024-12-09 04:16:16.926075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.460  [2024-12-09 04:16:16.926422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.460  [2024-12-09 04:16:16.926472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.460  [2024-12-09 04:16:16.926489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.460  [2024-12-09 04:16:16.926747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.460  [2024-12-09 04:16:16.926943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.460  [2024-12-09 04:16:16.926961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.460  [2024-12-09 04:16:16.926973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.460  [2024-12-09 04:16:16.926984] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.460  [2024-12-09 04:16:16.939376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.460  [2024-12-09 04:16:16.939867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.460  [2024-12-09 04:16:16.939895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.460  [2024-12-09 04:16:16.939911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.460  [2024-12-09 04:16:16.940154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.460  [2024-12-09 04:16:16.940405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.460  [2024-12-09 04:16:16.940431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.460  [2024-12-09 04:16:16.940446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.460  [2024-12-09 04:16:16.940458] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.460  [2024-12-09 04:16:16.952677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.460  [2024-12-09 04:16:16.953037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.460  [2024-12-09 04:16:16.953066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.460  [2024-12-09 04:16:16.953082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.460  [2024-12-09 04:16:16.953311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.460  [2024-12-09 04:16:16.953534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.460  [2024-12-09 04:16:16.953554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.460  [2024-12-09 04:16:16.953568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.460  [2024-12-09 04:16:16.953581] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.460  [2024-12-09 04:16:16.966085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.460  [2024-12-09 04:16:16.966445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.460  [2024-12-09 04:16:16.966473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.460  [2024-12-09 04:16:16.966488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.460  [2024-12-09 04:16:16.966727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.460  [2024-12-09 04:16:16.966939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.460  [2024-12-09 04:16:16.966957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.460  [2024-12-09 04:16:16.966969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.460  [2024-12-09 04:16:16.966980] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.460  [2024-12-09 04:16:16.979170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.460  [2024-12-09 04:16:16.979696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.460  [2024-12-09 04:16:16.979723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.460  [2024-12-09 04:16:16.979754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.460  [2024-12-09 04:16:16.980007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.460  [2024-12-09 04:16:16.980218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.460  [2024-12-09 04:16:16.980236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.460  [2024-12-09 04:16:16.980262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.460  [2024-12-09 04:16:16.980288] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.460  [2024-12-09 04:16:16.992353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.460  [2024-12-09 04:16:16.992735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.460  [2024-12-09 04:16:16.992762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.460  [2024-12-09 04:16:16.992792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.460  [2024-12-09 04:16:16.993016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.460  [2024-12-09 04:16:16.993228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.460  [2024-12-09 04:16:16.993246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.460  [2024-12-09 04:16:16.993258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.461  [2024-12-09 04:16:16.993270] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.461  [2024-12-09 04:16:17.005477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.461  [2024-12-09 04:16:17.005814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.461  [2024-12-09 04:16:17.005841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.461  [2024-12-09 04:16:17.005856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.461  [2024-12-09 04:16:17.006080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.461  [2024-12-09 04:16:17.006317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.461  [2024-12-09 04:16:17.006337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.461  [2024-12-09 04:16:17.006364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.461  [2024-12-09 04:16:17.006376] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.461  [2024-12-09 04:16:17.018701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.461  [2024-12-09 04:16:17.019034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.461  [2024-12-09 04:16:17.019063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.461  [2024-12-09 04:16:17.019078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.461  [2024-12-09 04:16:17.019308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.461  [2024-12-09 04:16:17.019531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.461  [2024-12-09 04:16:17.019551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.461  [2024-12-09 04:16:17.019564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.461  [2024-12-09 04:16:17.019576] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.461  [2024-12-09 04:16:17.032445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.461  [2024-12-09 04:16:17.032801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.461  [2024-12-09 04:16:17.032829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.461  [2024-12-09 04:16:17.032845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.461  [2024-12-09 04:16:17.033078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.720  [2024-12-09 04:16:17.033349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.720  [2024-12-09 04:16:17.033371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.720  [2024-12-09 04:16:17.033400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.720  [2024-12-09 04:16:17.033412] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.720  [2024-12-09 04:16:17.045667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.720  [2024-12-09 04:16:17.045988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.720  [2024-12-09 04:16:17.046015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.720  [2024-12-09 04:16:17.046030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.720  [2024-12-09 04:16:17.046248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.720  [2024-12-09 04:16:17.046477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.720  [2024-12-09 04:16:17.046498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.720  [2024-12-09 04:16:17.046511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.720  [2024-12-09 04:16:17.046522] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.720  [2024-12-09 04:16:17.058813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.720  [2024-12-09 04:16:17.059180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.720  [2024-12-09 04:16:17.059221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.720  [2024-12-09 04:16:17.059237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.720  [2024-12-09 04:16:17.059518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.720  [2024-12-09 04:16:17.059731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.720  [2024-12-09 04:16:17.059750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.720  [2024-12-09 04:16:17.059762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.720  [2024-12-09 04:16:17.059774] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.720  [2024-12-09 04:16:17.071932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.720  [2024-12-09 04:16:17.072303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.720  [2024-12-09 04:16:17.072344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.720  [2024-12-09 04:16:17.072364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.720  [2024-12-09 04:16:17.072609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.720  [2024-12-09 04:16:17.072805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.720  [2024-12-09 04:16:17.072823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.720  [2024-12-09 04:16:17.072835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.720  [2024-12-09 04:16:17.072846] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.720  [2024-12-09 04:16:17.085094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.720  [2024-12-09 04:16:17.085436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.720  [2024-12-09 04:16:17.085464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.721  [2024-12-09 04:16:17.085479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.721  [2024-12-09 04:16:17.085706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.721  [2024-12-09 04:16:17.085918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.721  [2024-12-09 04:16:17.085936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.721  [2024-12-09 04:16:17.085948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.721  [2024-12-09 04:16:17.085960] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.721  [2024-12-09 04:16:17.098251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.721  [2024-12-09 04:16:17.098622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.721  [2024-12-09 04:16:17.098649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.721  [2024-12-09 04:16:17.098665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.721  [2024-12-09 04:16:17.098904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.721  [2024-12-09 04:16:17.099133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.721  [2024-12-09 04:16:17.099152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.721  [2024-12-09 04:16:17.099164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.721  [2024-12-09 04:16:17.099176] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.721  [2024-12-09 04:16:17.111372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.721  [2024-12-09 04:16:17.111737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.721  [2024-12-09 04:16:17.111764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.721  [2024-12-09 04:16:17.111779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.721  [2024-12-09 04:16:17.112017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.721  [2024-12-09 04:16:17.112220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.721  [2024-12-09 04:16:17.112243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.721  [2024-12-09 04:16:17.112256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.721  [2024-12-09 04:16:17.112267] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.721  [2024-12-09 04:16:17.124672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.721  [2024-12-09 04:16:17.125066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.721  [2024-12-09 04:16:17.125094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.721  [2024-12-09 04:16:17.125109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.721  [2024-12-09 04:16:17.125353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.721  [2024-12-09 04:16:17.125568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.721  [2024-12-09 04:16:17.125588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.721  [2024-12-09 04:16:17.125601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.721  [2024-12-09 04:16:17.125613] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.721  [2024-12-09 04:16:17.137864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.721  [2024-12-09 04:16:17.138291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.721  [2024-12-09 04:16:17.138336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.721  [2024-12-09 04:16:17.138352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.721  [2024-12-09 04:16:17.138597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.721  [2024-12-09 04:16:17.138809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.721  [2024-12-09 04:16:17.138828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.721  [2024-12-09 04:16:17.138839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.721  [2024-12-09 04:16:17.138851] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.721  [2024-12-09 04:16:17.151089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.721  [2024-12-09 04:16:17.151461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.721  [2024-12-09 04:16:17.151504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.721  [2024-12-09 04:16:17.151520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.721  [2024-12-09 04:16:17.151789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.721  [2024-12-09 04:16:17.151985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.721  [2024-12-09 04:16:17.152004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.721  [2024-12-09 04:16:17.152015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.721  [2024-12-09 04:16:17.152031] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.721  [2024-12-09 04:16:17.164213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.721  [2024-12-09 04:16:17.164734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.721  [2024-12-09 04:16:17.164777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.721  [2024-12-09 04:16:17.164793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.721  [2024-12-09 04:16:17.165061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.721  [2024-12-09 04:16:17.165257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.721  [2024-12-09 04:16:17.165299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.721  [2024-12-09 04:16:17.165314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.721  [2024-12-09 04:16:17.165325] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.721  [2024-12-09 04:16:17.177345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.721  [2024-12-09 04:16:17.177735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.721  [2024-12-09 04:16:17.177776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.721  [2024-12-09 04:16:17.177792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.721  [2024-12-09 04:16:17.178017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.721  [2024-12-09 04:16:17.178228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.722  [2024-12-09 04:16:17.178247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.722  [2024-12-09 04:16:17.178259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.722  [2024-12-09 04:16:17.178279] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.722  [2024-12-09 04:16:17.190467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.722  [2024-12-09 04:16:17.190963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722  [2024-12-09 04:16:17.190989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.722  [2024-12-09 04:16:17.191021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.722  [2024-12-09 04:16:17.191298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.722  [2024-12-09 04:16:17.191520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.722  [2024-12-09 04:16:17.191540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.722  [2024-12-09 04:16:17.191553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.722  [2024-12-09 04:16:17.191565] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.722  [2024-12-09 04:16:17.203648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.722  [2024-12-09 04:16:17.204034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722  [2024-12-09 04:16:17.204062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.722  [2024-12-09 04:16:17.204078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.722  [2024-12-09 04:16:17.204336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.722  [2024-12-09 04:16:17.204559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.722  [2024-12-09 04:16:17.204579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.722  [2024-12-09 04:16:17.204593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.722  [2024-12-09 04:16:17.204606] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.722  [2024-12-09 04:16:17.216988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.722  [2024-12-09 04:16:17.217377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722  [2024-12-09 04:16:17.217405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.722  [2024-12-09 04:16:17.217420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.722  [2024-12-09 04:16:17.217646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.722  [2024-12-09 04:16:17.217842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.722  [2024-12-09 04:16:17.217860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.722  [2024-12-09 04:16:17.217872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.722  [2024-12-09 04:16:17.217883] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.722  [2024-12-09 04:16:17.230175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.722  [2024-12-09 04:16:17.230697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722  [2024-12-09 04:16:17.230740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.722  [2024-12-09 04:16:17.230756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.722  [2024-12-09 04:16:17.231005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.722  [2024-12-09 04:16:17.231200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.722  [2024-12-09 04:16:17.231218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.722  [2024-12-09 04:16:17.231230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.722  [2024-12-09 04:16:17.231241] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.722  [2024-12-09 04:16:17.243325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.722  [2024-12-09 04:16:17.243734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722  [2024-12-09 04:16:17.243776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.722  [2024-12-09 04:16:17.243797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.722  [2024-12-09 04:16:17.244039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.722  [2024-12-09 04:16:17.244234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.722  [2024-12-09 04:16:17.244253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.722  [2024-12-09 04:16:17.244265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.722  [2024-12-09 04:16:17.244301] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.722  [2024-12-09 04:16:17.256454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.722  [2024-12-09 04:16:17.256945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722  [2024-12-09 04:16:17.256986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.722  [2024-12-09 04:16:17.257002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.722  [2024-12-09 04:16:17.257256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.722  [2024-12-09 04:16:17.257497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.722  [2024-12-09 04:16:17.257516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.722  [2024-12-09 04:16:17.257528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.722  [2024-12-09 04:16:17.257540] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.722  [2024-12-09 04:16:17.269517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.722  [2024-12-09 04:16:17.269848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722  [2024-12-09 04:16:17.269876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.722  [2024-12-09 04:16:17.269891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.722  [2024-12-09 04:16:17.270115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.722  [2024-12-09 04:16:17.270353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.722  [2024-12-09 04:16:17.270373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.722  [2024-12-09 04:16:17.270386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.722  [2024-12-09 04:16:17.270398] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.722  [2024-12-09 04:16:17.282577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.722  [2024-12-09 04:16:17.282889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722  [2024-12-09 04:16:17.282930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.722  [2024-12-09 04:16:17.282945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.722  [2024-12-09 04:16:17.283163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.722  [2024-12-09 04:16:17.283421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.723  [2024-12-09 04:16:17.283449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.723  [2024-12-09 04:16:17.283462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.723  [2024-12-09 04:16:17.283474] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.982  [2024-12-09 04:16:17.296422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.982  [2024-12-09 04:16:17.296800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.982  [2024-12-09 04:16:17.296827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.982  [2024-12-09 04:16:17.296843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.982  [2024-12-09 04:16:17.297068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.982  [2024-12-09 04:16:17.297306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.982  [2024-12-09 04:16:17.297342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.982  [2024-12-09 04:16:17.297355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.982  [2024-12-09 04:16:17.297367] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.982  [2024-12-09 04:16:17.309594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.982  [2024-12-09 04:16:17.310084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.982  [2024-12-09 04:16:17.310125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.982  [2024-12-09 04:16:17.310141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.982  [2024-12-09 04:16:17.310417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.982  [2024-12-09 04:16:17.310626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.982  [2024-12-09 04:16:17.310646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.982  [2024-12-09 04:16:17.310659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.982  [2024-12-09 04:16:17.310671] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.982  [2024-12-09 04:16:17.322672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.982  [2024-12-09 04:16:17.323164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.982  [2024-12-09 04:16:17.323205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.982  [2024-12-09 04:16:17.323222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.982  [2024-12-09 04:16:17.323474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.982  [2024-12-09 04:16:17.323705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.982  [2024-12-09 04:16:17.323724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.982  [2024-12-09 04:16:17.323736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.982  [2024-12-09 04:16:17.323752] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.982  [2024-12-09 04:16:17.335904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.982  [2024-12-09 04:16:17.336277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.982  [2024-12-09 04:16:17.336306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.982  [2024-12-09 04:16:17.336322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.982  [2024-12-09 04:16:17.336567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.982  [2024-12-09 04:16:17.336778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.982  [2024-12-09 04:16:17.336797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.982  [2024-12-09 04:16:17.336809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.982  [2024-12-09 04:16:17.336820] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.982  [2024-12-09 04:16:17.349052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.982  [2024-12-09 04:16:17.349465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.982  [2024-12-09 04:16:17.349506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.982  [2024-12-09 04:16:17.349522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.982  [2024-12-09 04:16:17.349761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.982  [2024-12-09 04:16:17.349956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.982  [2024-12-09 04:16:17.349975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.982  [2024-12-09 04:16:17.349986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.982  [2024-12-09 04:16:17.349997] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.982  [2024-12-09 04:16:17.362130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.982  [2024-12-09 04:16:17.362625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.982  [2024-12-09 04:16:17.362666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.982  [2024-12-09 04:16:17.362682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.982  [2024-12-09 04:16:17.362929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.982  [2024-12-09 04:16:17.363125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.982  [2024-12-09 04:16:17.363143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.983  [2024-12-09 04:16:17.363155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.983  [2024-12-09 04:16:17.363166] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.983  [2024-12-09 04:16:17.375331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.983  [2024-12-09 04:16:17.375742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.983  [2024-12-09 04:16:17.375785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.983  [2024-12-09 04:16:17.375801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.983  [2024-12-09 04:16:17.376056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.983  [2024-12-09 04:16:17.376266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.983  [2024-12-09 04:16:17.376309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.983  [2024-12-09 04:16:17.376322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.983  [2024-12-09 04:16:17.376334] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.983  [2024-12-09 04:16:17.388649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.983  [2024-12-09 04:16:17.389052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.983  [2024-12-09 04:16:17.389119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.983  [2024-12-09 04:16:17.389135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.983  [2024-12-09 04:16:17.389386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.983  [2024-12-09 04:16:17.389624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.983  [2024-12-09 04:16:17.389656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.983  [2024-12-09 04:16:17.389669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.983  [2024-12-09 04:16:17.389681] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.983  [2024-12-09 04:16:17.402184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.983  [2024-12-09 04:16:17.402566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.983  [2024-12-09 04:16:17.402595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.983  [2024-12-09 04:16:17.402611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.983  [2024-12-09 04:16:17.402856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.983  [2024-12-09 04:16:17.403073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.983  [2024-12-09 04:16:17.403092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.983  [2024-12-09 04:16:17.403104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.983  [2024-12-09 04:16:17.403115] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.983  [2024-12-09 04:16:17.415377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.983  [2024-12-09 04:16:17.415769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.983  [2024-12-09 04:16:17.415812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.983  [2024-12-09 04:16:17.415829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.983  [2024-12-09 04:16:17.416067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.983  [2024-12-09 04:16:17.416311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.983  [2024-12-09 04:16:17.416331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.983  [2024-12-09 04:16:17.416344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.983  [2024-12-09 04:16:17.416356] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.983  [2024-12-09 04:16:17.429000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.983  [2024-12-09 04:16:17.429343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.983  [2024-12-09 04:16:17.429373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.983  [2024-12-09 04:16:17.429389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.983  [2024-12-09 04:16:17.429621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.983  [2024-12-09 04:16:17.429840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.983  [2024-12-09 04:16:17.429859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.983  [2024-12-09 04:16:17.429871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.983  [2024-12-09 04:16:17.429883] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.983  [2024-12-09 04:16:17.442355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.983  [2024-12-09 04:16:17.442805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.983  [2024-12-09 04:16:17.442846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.983  [2024-12-09 04:16:17.442863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.983  [2024-12-09 04:16:17.443106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.983  [2024-12-09 04:16:17.443350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.983  [2024-12-09 04:16:17.443371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.983  [2024-12-09 04:16:17.443385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.983  [2024-12-09 04:16:17.443397] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.983  [2024-12-09 04:16:17.455752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.983  [2024-12-09 04:16:17.456130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.983  [2024-12-09 04:16:17.456167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.983  [2024-12-09 04:16:17.456201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.983  [2024-12-09 04:16:17.456430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.983  [2024-12-09 04:16:17.456681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.983  [2024-12-09 04:16:17.456708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.983  [2024-12-09 04:16:17.456722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.983  [2024-12-09 04:16:17.456736] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.983  [2024-12-09 04:16:17.469036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.983  [2024-12-09 04:16:17.469373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.983  [2024-12-09 04:16:17.469400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.983  [2024-12-09 04:16:17.469416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.983  [2024-12-09 04:16:17.469641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.983  [2024-12-09 04:16:17.469860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.983  [2024-12-09 04:16:17.469879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.983  [2024-12-09 04:16:17.469891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.983  [2024-12-09 04:16:17.469903] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.983  [2024-12-09 04:16:17.482606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.983  [2024-12-09 04:16:17.482997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.983  [2024-12-09 04:16:17.483025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.983  [2024-12-09 04:16:17.483041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.983  [2024-12-09 04:16:17.483294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.983  [2024-12-09 04:16:17.483496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.983  [2024-12-09 04:16:17.483515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.983  [2024-12-09 04:16:17.483527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.983  [2024-12-09 04:16:17.483538] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.983  [2024-12-09 04:16:17.495971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.983  [2024-12-09 04:16:17.496363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.983  [2024-12-09 04:16:17.496393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.983  [2024-12-09 04:16:17.496409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.984  [2024-12-09 04:16:17.496640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.984  [2024-12-09 04:16:17.496851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.984  [2024-12-09 04:16:17.496870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.984  [2024-12-09 04:16:17.496882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.984  [2024-12-09 04:16:17.496898] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.984  [2024-12-09 04:16:17.509345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.984  [2024-12-09 04:16:17.509759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.984  [2024-12-09 04:16:17.509802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.984  [2024-12-09 04:16:17.509819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.984  [2024-12-09 04:16:17.510088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.984  [2024-12-09 04:16:17.510311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.984  [2024-12-09 04:16:17.510348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.984  [2024-12-09 04:16:17.510361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.984  [2024-12-09 04:16:17.510373] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.984  [2024-12-09 04:16:17.522670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.984  [2024-12-09 04:16:17.523041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.984  [2024-12-09 04:16:17.523069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.984  [2024-12-09 04:16:17.523085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.984  [2024-12-09 04:16:17.523340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.984  [2024-12-09 04:16:17.523548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.984  [2024-12-09 04:16:17.523567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.984  [2024-12-09 04:16:17.523580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.984  [2024-12-09 04:16:17.523607] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.984  [2024-12-09 04:16:17.535937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.984  [2024-12-09 04:16:17.536276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.984  [2024-12-09 04:16:17.536304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.984  [2024-12-09 04:16:17.536320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.984  [2024-12-09 04:16:17.536545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.984  [2024-12-09 04:16:17.536756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.984  [2024-12-09 04:16:17.536774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.984  [2024-12-09 04:16:17.536786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.984  [2024-12-09 04:16:17.536797] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.984  [2024-12-09 04:16:17.548968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.984  [2024-12-09 04:16:17.549368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.984  [2024-12-09 04:16:17.549396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.984  [2024-12-09 04:16:17.549412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.984  [2024-12-09 04:16:17.549637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.984  [2024-12-09 04:16:17.549866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.984  [2024-12-09 04:16:17.549884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.984  [2024-12-09 04:16:17.549897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.984  [2024-12-09 04:16:17.549908] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.243  [2024-12-09 04:16:17.562297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.244  [2024-12-09 04:16:17.562734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.244  [2024-12-09 04:16:17.562761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.244  [2024-12-09 04:16:17.562791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.244  [2024-12-09 04:16:17.563024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.244  [2024-12-09 04:16:17.563261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.244  [2024-12-09 04:16:17.563305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.244  [2024-12-09 04:16:17.563319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.244  [2024-12-09 04:16:17.563331] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.244  [2024-12-09 04:16:17.575873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.244  [2024-12-09 04:16:17.576186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.244  [2024-12-09 04:16:17.576212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.244  [2024-12-09 04:16:17.576227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.244  [2024-12-09 04:16:17.576482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.244  [2024-12-09 04:16:17.576719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.244  [2024-12-09 04:16:17.576737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.244  [2024-12-09 04:16:17.576750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.244  [2024-12-09 04:16:17.576761] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.244  [2024-12-09 04:16:17.589028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.244  [2024-12-09 04:16:17.589406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.244  [2024-12-09 04:16:17.589448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.244  [2024-12-09 04:16:17.589463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.244  [2024-12-09 04:16:17.589719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.244  [2024-12-09 04:16:17.589930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.244  [2024-12-09 04:16:17.589948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.244  [2024-12-09 04:16:17.589960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.244  [2024-12-09 04:16:17.589972] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.244  [2024-12-09 04:16:17.602243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.244  [2024-12-09 04:16:17.602581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.244  [2024-12-09 04:16:17.602608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.244  [2024-12-09 04:16:17.602624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.244  [2024-12-09 04:16:17.602850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.244  [2024-12-09 04:16:17.603062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.244  [2024-12-09 04:16:17.603080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.244  [2024-12-09 04:16:17.603092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.244  [2024-12-09 04:16:17.603103] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.244  [2024-12-09 04:16:17.615326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.244  [2024-12-09 04:16:17.615660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.244  [2024-12-09 04:16:17.615687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.244  [2024-12-09 04:16:17.615703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.244  [2024-12-09 04:16:17.615928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.244  [2024-12-09 04:16:17.616141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.244  [2024-12-09 04:16:17.616160] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.244  [2024-12-09 04:16:17.616172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.244  [2024-12-09 04:16:17.616183] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.244  [2024-12-09 04:16:17.628470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.244  [2024-12-09 04:16:17.628874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.244  [2024-12-09 04:16:17.628916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.244  [2024-12-09 04:16:17.628933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.244  [2024-12-09 04:16:17.629164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.244  [2024-12-09 04:16:17.629426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.244  [2024-12-09 04:16:17.629453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.244  [2024-12-09 04:16:17.629467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.244  [2024-12-09 04:16:17.629479] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.244  [2024-12-09 04:16:17.641626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.244  [2024-12-09 04:16:17.641986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.244  [2024-12-09 04:16:17.642028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.244  [2024-12-09 04:16:17.642043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.244  [2024-12-09 04:16:17.642297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.244  [2024-12-09 04:16:17.642513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.244  [2024-12-09 04:16:17.642532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.244  [2024-12-09 04:16:17.642544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.244  [2024-12-09 04:16:17.642556] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.244  [2024-12-09 04:16:17.654662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.244  [2024-12-09 04:16:17.655028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.244  [2024-12-09 04:16:17.655070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.244  [2024-12-09 04:16:17.655086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.244  [2024-12-09 04:16:17.655370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.244  [2024-12-09 04:16:17.655607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.244  [2024-12-09 04:16:17.655628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.244  [2024-12-09 04:16:17.655658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.244  [2024-12-09 04:16:17.655670] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.244  [2024-12-09 04:16:17.667777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.244  [2024-12-09 04:16:17.668175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.244  [2024-12-09 04:16:17.668242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.244  [2024-12-09 04:16:17.668257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.244  [2024-12-09 04:16:17.668516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.244  [2024-12-09 04:16:17.668746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.244  [2024-12-09 04:16:17.668765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.244  [2024-12-09 04:16:17.668777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.244  [2024-12-09 04:16:17.668792] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.244  [2024-12-09 04:16:17.680841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.244  [2024-12-09 04:16:17.681138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.244  [2024-12-09 04:16:17.681164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.244  [2024-12-09 04:16:17.681179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.244  [2024-12-09 04:16:17.681420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.244  [2024-12-09 04:16:17.681655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.244  [2024-12-09 04:16:17.681674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.245  [2024-12-09 04:16:17.681686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.245  [2024-12-09 04:16:17.681698] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.245  [2024-12-09 04:16:17.694042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.245  [2024-12-09 04:16:17.694434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.245  [2024-12-09 04:16:17.694476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.245  [2024-12-09 04:16:17.694492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.245  [2024-12-09 04:16:17.694743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.245  [2024-12-09 04:16:17.694938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.245  [2024-12-09 04:16:17.694957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.245  [2024-12-09 04:16:17.694969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.245  [2024-12-09 04:16:17.694980] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.245  [2024-12-09 04:16:17.707206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.245  [2024-12-09 04:16:17.707565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.245  [2024-12-09 04:16:17.707593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.245  [2024-12-09 04:16:17.707609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.245  [2024-12-09 04:16:17.707840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.245  [2024-12-09 04:16:17.708097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.245  [2024-12-09 04:16:17.708118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.245  [2024-12-09 04:16:17.708132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.245  [2024-12-09 04:16:17.708145] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.245  [2024-12-09 04:16:17.720711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.245  [2024-12-09 04:16:17.721146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.245  [2024-12-09 04:16:17.721189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.245  [2024-12-09 04:16:17.721206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.245  [2024-12-09 04:16:17.721461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.245  [2024-12-09 04:16:17.721678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.245  [2024-12-09 04:16:17.721696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.245  [2024-12-09 04:16:17.721708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.245  [2024-12-09 04:16:17.721720] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.245  [2024-12-09 04:16:17.733876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.245  [2024-12-09 04:16:17.734236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.245  [2024-12-09 04:16:17.734263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.245  [2024-12-09 04:16:17.734304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.245  [2024-12-09 04:16:17.734549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.245  [2024-12-09 04:16:17.734777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.245  [2024-12-09 04:16:17.734796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.245  [2024-12-09 04:16:17.734808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.245  [2024-12-09 04:16:17.734819] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.245  [2024-12-09 04:16:17.747043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.245  [2024-12-09 04:16:17.747456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.245  [2024-12-09 04:16:17.747484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.245  [2024-12-09 04:16:17.747500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.245  [2024-12-09 04:16:17.747746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.245  [2024-12-09 04:16:17.747942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.245  [2024-12-09 04:16:17.747960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.245  [2024-12-09 04:16:17.747972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.245  [2024-12-09 04:16:17.747983] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.245  [2024-12-09 04:16:17.760104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.245  [2024-12-09 04:16:17.760537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.245  [2024-12-09 04:16:17.760564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.245  [2024-12-09 04:16:17.760580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.245  [2024-12-09 04:16:17.760822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.245  [2024-12-09 04:16:17.761034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.245  [2024-12-09 04:16:17.761053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.245  [2024-12-09 04:16:17.761065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.245  [2024-12-09 04:16:17.761075] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.245  [2024-12-09 04:16:17.773186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.245  [2024-12-09 04:16:17.773522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.245  [2024-12-09 04:16:17.773548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.245  [2024-12-09 04:16:17.773563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.245  [2024-12-09 04:16:17.773775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.245  [2024-12-09 04:16:17.773985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.245  [2024-12-09 04:16:17.774003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.245  [2024-12-09 04:16:17.774015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.245  [2024-12-09 04:16:17.774027] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.245  [2024-12-09 04:16:17.786359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.245  [2024-12-09 04:16:17.786721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.245  [2024-12-09 04:16:17.786748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.245  [2024-12-09 04:16:17.786764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.245  [2024-12-09 04:16:17.786992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.245  [2024-12-09 04:16:17.787204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.245  [2024-12-09 04:16:17.787223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.245  [2024-12-09 04:16:17.787234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.245  [2024-12-09 04:16:17.787245] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.245  [2024-12-09 04:16:17.799416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.245  [2024-12-09 04:16:17.799809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.245  [2024-12-09 04:16:17.799837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.245  [2024-12-09 04:16:17.799853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.245  [2024-12-09 04:16:17.800098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.245  [2024-12-09 04:16:17.800336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.245  [2024-12-09 04:16:17.800361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.245  [2024-12-09 04:16:17.800374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.245  [2024-12-09 04:16:17.800386] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.245  [2024-12-09 04:16:17.812516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.245  [2024-12-09 04:16:17.812967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.245  [2024-12-09 04:16:17.813009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.245  [2024-12-09 04:16:17.813025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.245  [2024-12-09 04:16:17.813262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.245  [2024-12-09 04:16:17.813503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.246  [2024-12-09 04:16:17.813522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.246  [2024-12-09 04:16:17.813535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.246  [2024-12-09 04:16:17.813546] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.504  [2024-12-09 04:16:17.826182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.504  [2024-12-09 04:16:17.826702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.504  [2024-12-09 04:16:17.826744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.504  [2024-12-09 04:16:17.826761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.504  [2024-12-09 04:16:17.827010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.504  [2024-12-09 04:16:17.827205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.504  [2024-12-09 04:16:17.827224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.504  [2024-12-09 04:16:17.827236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.504  [2024-12-09 04:16:17.827247] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.504  [2024-12-09 04:16:17.839255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.504  [2024-12-09 04:16:17.839600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.504  [2024-12-09 04:16:17.839627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.504  [2024-12-09 04:16:17.839642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.504  [2024-12-09 04:16:17.839845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.504  [2024-12-09 04:16:17.840057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.504  [2024-12-09 04:16:17.840075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.504  [2024-12-09 04:16:17.840087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.504  [2024-12-09 04:16:17.840103] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.504       4240.20 IOPS,    16.56 MiB/s
[2024-12-09T03:16:18.080Z] [2024-12-09 04:16:17.853635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.505  [2024-12-09 04:16:17.853938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.505  [2024-12-09 04:16:17.853979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.505  [2024-12-09 04:16:17.853994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.505  [2024-12-09 04:16:17.854213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.505  [2024-12-09 04:16:17.854452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.505  [2024-12-09 04:16:17.854472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.505  [2024-12-09 04:16:17.854484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.505  [2024-12-09 04:16:17.854495] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.505  [2024-12-09 04:16:17.866856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.505  [2024-12-09 04:16:17.867224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.505  [2024-12-09 04:16:17.867266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.505  [2024-12-09 04:16:17.867290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.505  [2024-12-09 04:16:17.867537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.505  [2024-12-09 04:16:17.867769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.505  [2024-12-09 04:16:17.867787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.505  [2024-12-09 04:16:17.867799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.505  [2024-12-09 04:16:17.867810] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.505  [2024-12-09 04:16:17.880069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.505  [2024-12-09 04:16:17.880490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.505  [2024-12-09 04:16:17.880517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.505  [2024-12-09 04:16:17.880532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.505  [2024-12-09 04:16:17.880774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.505  [2024-12-09 04:16:17.880984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.505  [2024-12-09 04:16:17.881003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.505  [2024-12-09 04:16:17.881015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.505  [2024-12-09 04:16:17.881026] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.505  [2024-12-09 04:16:17.893233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.505  [2024-12-09 04:16:17.893611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.505  [2024-12-09 04:16:17.893654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.505  [2024-12-09 04:16:17.893669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.505  [2024-12-09 04:16:17.893939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.505  [2024-12-09 04:16:17.894135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.505  [2024-12-09 04:16:17.894153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.505  [2024-12-09 04:16:17.894166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.505  [2024-12-09 04:16:17.894177] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.505  [2024-12-09 04:16:17.906395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.505  [2024-12-09 04:16:17.906769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.505  [2024-12-09 04:16:17.906796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.505  [2024-12-09 04:16:17.906812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.505  [2024-12-09 04:16:17.907049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.505  [2024-12-09 04:16:17.907260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.505  [2024-12-09 04:16:17.907302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.505  [2024-12-09 04:16:17.907317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.505  [2024-12-09 04:16:17.907329] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.505  [2024-12-09 04:16:17.920003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.505  [2024-12-09 04:16:17.920388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.505  [2024-12-09 04:16:17.920417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.505  [2024-12-09 04:16:17.920433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.505  [2024-12-09 04:16:17.920678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.505  [2024-12-09 04:16:17.920907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.505  [2024-12-09 04:16:17.920925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.505  [2024-12-09 04:16:17.920937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.505  [2024-12-09 04:16:17.920948] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.505  [2024-12-09 04:16:17.933360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.505  [2024-12-09 04:16:17.933745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.505  [2024-12-09 04:16:17.933787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.505  [2024-12-09 04:16:17.933810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.505  [2024-12-09 04:16:17.934057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.505  [2024-12-09 04:16:17.934268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.505  [2024-12-09 04:16:17.934297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.505  [2024-12-09 04:16:17.934310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.505  [2024-12-09 04:16:17.934338] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.505  [2024-12-09 04:16:17.946989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.505  [2024-12-09 04:16:17.947387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.505  [2024-12-09 04:16:17.947433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.505  [2024-12-09 04:16:17.947450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.505  [2024-12-09 04:16:17.947697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.505  [2024-12-09 04:16:17.947917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.505  [2024-12-09 04:16:17.947938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.505  [2024-12-09 04:16:17.947951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.505  [2024-12-09 04:16:17.947962] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.505  [2024-12-09 04:16:17.960290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.505  [2024-12-09 04:16:17.960708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.505  [2024-12-09 04:16:17.960735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.505  [2024-12-09 04:16:17.960766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.505  [2024-12-09 04:16:17.961012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.505  [2024-12-09 04:16:17.961282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.505  [2024-12-09 04:16:17.961304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.505  [2024-12-09 04:16:17.961318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.505  [2024-12-09 04:16:17.961331] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.505  [2024-12-09 04:16:17.973613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.505  [2024-12-09 04:16:17.973960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.505  [2024-12-09 04:16:17.973997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.505  [2024-12-09 04:16:17.974013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.505  [2024-12-09 04:16:17.974237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.505  [2024-12-09 04:16:17.974469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.505  [2024-12-09 04:16:17.974489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.505  [2024-12-09 04:16:17.974501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.505  [2024-12-09 04:16:17.974513] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.506  [2024-12-09 04:16:17.986891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.506  [2024-12-09 04:16:17.987324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.506  [2024-12-09 04:16:17.987354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.506  [2024-12-09 04:16:17.987369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.506  [2024-12-09 04:16:17.987602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.506  [2024-12-09 04:16:17.987815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.506  [2024-12-09 04:16:17.987834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.506  [2024-12-09 04:16:17.987846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.506  [2024-12-09 04:16:17.987857] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.506  [2024-12-09 04:16:18.000187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.506  [2024-12-09 04:16:18.000592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.506  [2024-12-09 04:16:18.000643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.506  [2024-12-09 04:16:18.000658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.506  [2024-12-09 04:16:18.000893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.506  [2024-12-09 04:16:18.001122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.506  [2024-12-09 04:16:18.001141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.506  [2024-12-09 04:16:18.001154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.506  [2024-12-09 04:16:18.001165] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.506  [2024-12-09 04:16:18.013485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.506  [2024-12-09 04:16:18.013874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.506  [2024-12-09 04:16:18.013916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.506  [2024-12-09 04:16:18.013932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.506  [2024-12-09 04:16:18.014186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.506  [2024-12-09 04:16:18.014428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.506  [2024-12-09 04:16:18.014448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.506  [2024-12-09 04:16:18.014461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.506  [2024-12-09 04:16:18.014477] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.506  [2024-12-09 04:16:18.026789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.506  [2024-12-09 04:16:18.027220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.506  [2024-12-09 04:16:18.027262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.506  [2024-12-09 04:16:18.027290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.506  [2024-12-09 04:16:18.027523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.506  [2024-12-09 04:16:18.027770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.506  [2024-12-09 04:16:18.027789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.506  [2024-12-09 04:16:18.027802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.506  [2024-12-09 04:16:18.027814] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.506  [2024-12-09 04:16:18.040085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.506  [2024-12-09 04:16:18.040526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.506  [2024-12-09 04:16:18.040578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.506  [2024-12-09 04:16:18.040594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.506  [2024-12-09 04:16:18.040858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.506  [2024-12-09 04:16:18.041053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.506  [2024-12-09 04:16:18.041072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.506  [2024-12-09 04:16:18.041084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.506  [2024-12-09 04:16:18.041095] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.506  [2024-12-09 04:16:18.053374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.506  [2024-12-09 04:16:18.053867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.506  [2024-12-09 04:16:18.053920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.506  [2024-12-09 04:16:18.053936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.506  [2024-12-09 04:16:18.054201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.506  [2024-12-09 04:16:18.054427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.506  [2024-12-09 04:16:18.054447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.506  [2024-12-09 04:16:18.054460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.506  [2024-12-09 04:16:18.054471] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.506  [2024-12-09 04:16:18.066678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.506  [2024-12-09 04:16:18.067082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.506  [2024-12-09 04:16:18.067108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.506  [2024-12-09 04:16:18.067123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.506  [2024-12-09 04:16:18.067382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.506  [2024-12-09 04:16:18.067603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.506  [2024-12-09 04:16:18.067622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.506  [2024-12-09 04:16:18.067649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.506  [2024-12-09 04:16:18.067660] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.765  [2024-12-09 04:16:18.080437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.765  [2024-12-09 04:16:18.080834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.765  [2024-12-09 04:16:18.080886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.765  [2024-12-09 04:16:18.080921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.765  [2024-12-09 04:16:18.081176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.765  [2024-12-09 04:16:18.081401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.765  [2024-12-09 04:16:18.081429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.765  [2024-12-09 04:16:18.081441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.765  [2024-12-09 04:16:18.081453] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.765  [2024-12-09 04:16:18.093595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.765  [2024-12-09 04:16:18.093962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.765  [2024-12-09 04:16:18.094005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.765  [2024-12-09 04:16:18.094021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.765  [2024-12-09 04:16:18.094300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.765  [2024-12-09 04:16:18.094525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.765  [2024-12-09 04:16:18.094545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.765  [2024-12-09 04:16:18.094558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.765  [2024-12-09 04:16:18.094570] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.765  [2024-12-09 04:16:18.106645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.765  [2024-12-09 04:16:18.107021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.765  [2024-12-09 04:16:18.107064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.765  [2024-12-09 04:16:18.107086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.765  [2024-12-09 04:16:18.107369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.765  [2024-12-09 04:16:18.107578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.765  [2024-12-09 04:16:18.107597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.765  [2024-12-09 04:16:18.107610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.765  [2024-12-09 04:16:18.107622] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.765  [2024-12-09 04:16:18.119842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.765  [2024-12-09 04:16:18.120258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.765  [2024-12-09 04:16:18.120316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.765  [2024-12-09 04:16:18.120331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.765  [2024-12-09 04:16:18.120578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.765  [2024-12-09 04:16:18.120774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.765  [2024-12-09 04:16:18.120792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.765  [2024-12-09 04:16:18.120804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.765  [2024-12-09 04:16:18.120815] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.765  [2024-12-09 04:16:18.132987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.765  [2024-12-09 04:16:18.133383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.766  [2024-12-09 04:16:18.133411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.766  [2024-12-09 04:16:18.133427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.766  [2024-12-09 04:16:18.133652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.766  [2024-12-09 04:16:18.133881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.766  [2024-12-09 04:16:18.133900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.766  [2024-12-09 04:16:18.133912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.766  [2024-12-09 04:16:18.133923] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.766  [2024-12-09 04:16:18.146134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.766  [2024-12-09 04:16:18.146570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.766  [2024-12-09 04:16:18.146612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.766  [2024-12-09 04:16:18.146628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.766  [2024-12-09 04:16:18.146871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.766  [2024-12-09 04:16:18.147086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.766  [2024-12-09 04:16:18.147105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.766  [2024-12-09 04:16:18.147117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.766  [2024-12-09 04:16:18.147128] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.766  [2024-12-09 04:16:18.159490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.766  [2024-12-09 04:16:18.159851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.766  [2024-12-09 04:16:18.159879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.766  [2024-12-09 04:16:18.159895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.766  [2024-12-09 04:16:18.160127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.766  [2024-12-09 04:16:18.160392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.766  [2024-12-09 04:16:18.160413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.766  [2024-12-09 04:16:18.160426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.766  [2024-12-09 04:16:18.160438] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.766  [2024-12-09 04:16:18.172514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.766  [2024-12-09 04:16:18.172914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.766  [2024-12-09 04:16:18.172941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.766  [2024-12-09 04:16:18.172956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.766  [2024-12-09 04:16:18.173182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.766  [2024-12-09 04:16:18.173439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.766  [2024-12-09 04:16:18.173459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.766  [2024-12-09 04:16:18.173471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.766  [2024-12-09 04:16:18.173483] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.766  [2024-12-09 04:16:18.185587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.766  [2024-12-09 04:16:18.186078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.766  [2024-12-09 04:16:18.186120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.766  [2024-12-09 04:16:18.186137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.766  [2024-12-09 04:16:18.186410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.766  [2024-12-09 04:16:18.186650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.766  [2024-12-09 04:16:18.186669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.766  [2024-12-09 04:16:18.186680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.766  [2024-12-09 04:16:18.186696] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.766  [2024-12-09 04:16:18.198683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.766  [2024-12-09 04:16:18.199177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.766  [2024-12-09 04:16:18.199203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.766  [2024-12-09 04:16:18.199233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.766  [2024-12-09 04:16:18.199460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.766  [2024-12-09 04:16:18.199702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.766  [2024-12-09 04:16:18.199720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.766  [2024-12-09 04:16:18.199732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.766  [2024-12-09 04:16:18.199744] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.766  [2024-12-09 04:16:18.211854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.766  [2024-12-09 04:16:18.212258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.766  [2024-12-09 04:16:18.212308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.766  [2024-12-09 04:16:18.212324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.766  [2024-12-09 04:16:18.212556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.766  [2024-12-09 04:16:18.212819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.766  [2024-12-09 04:16:18.212840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.766  [2024-12-09 04:16:18.212854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.766  [2024-12-09 04:16:18.212867] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.766  [2024-12-09 04:16:18.225132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.766  [2024-12-09 04:16:18.225652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.766  [2024-12-09 04:16:18.225680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.766  [2024-12-09 04:16:18.225711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.766  [2024-12-09 04:16:18.225963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.766  [2024-12-09 04:16:18.226158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.766  [2024-12-09 04:16:18.226176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.766  [2024-12-09 04:16:18.226188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.766  [2024-12-09 04:16:18.226199] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.766  [2024-12-09 04:16:18.238381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.766  [2024-12-09 04:16:18.238834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.766  [2024-12-09 04:16:18.238862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.766  [2024-12-09 04:16:18.238892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.766  [2024-12-09 04:16:18.239135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.766  [2024-12-09 04:16:18.239391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.766  [2024-12-09 04:16:18.239411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.766  [2024-12-09 04:16:18.239424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.766  [2024-12-09 04:16:18.239436] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.766  [2024-12-09 04:16:18.251659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.766  [2024-12-09 04:16:18.252071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.766  [2024-12-09 04:16:18.252097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.766  [2024-12-09 04:16:18.252127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.766  [2024-12-09 04:16:18.252376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.766  [2024-12-09 04:16:18.252578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.766  [2024-12-09 04:16:18.252611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.766  [2024-12-09 04:16:18.252624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.767  [2024-12-09 04:16:18.252635] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.767  [2024-12-09 04:16:18.264756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.767  [2024-12-09 04:16:18.265119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.767  [2024-12-09 04:16:18.265145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.767  [2024-12-09 04:16:18.265160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.767  [2024-12-09 04:16:18.265407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.767  [2024-12-09 04:16:18.265625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.767  [2024-12-09 04:16:18.265644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.767  [2024-12-09 04:16:18.265671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.767  [2024-12-09 04:16:18.265683] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.767  [2024-12-09 04:16:18.277996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.767  [2024-12-09 04:16:18.278392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.767  [2024-12-09 04:16:18.278420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.767  [2024-12-09 04:16:18.278441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.767  [2024-12-09 04:16:18.278690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.767  [2024-12-09 04:16:18.278903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.767  [2024-12-09 04:16:18.278921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.767  [2024-12-09 04:16:18.278933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.767  [2024-12-09 04:16:18.278944] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.767  [2024-12-09 04:16:18.291078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.767  [2024-12-09 04:16:18.291466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.767  [2024-12-09 04:16:18.291508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.767  [2024-12-09 04:16:18.291524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.767  [2024-12-09 04:16:18.291750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.767  [2024-12-09 04:16:18.291962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.767  [2024-12-09 04:16:18.291980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.767  [2024-12-09 04:16:18.291992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.767  [2024-12-09 04:16:18.292004] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.767  [2024-12-09 04:16:18.304137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.767  [2024-12-09 04:16:18.304511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.767  [2024-12-09 04:16:18.304554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.767  [2024-12-09 04:16:18.304569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.767  [2024-12-09 04:16:18.304825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.767  [2024-12-09 04:16:18.305036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.767  [2024-12-09 04:16:18.305054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.767  [2024-12-09 04:16:18.305066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.767  [2024-12-09 04:16:18.305077] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.767  [2024-12-09 04:16:18.317301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.767  [2024-12-09 04:16:18.317800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.767  [2024-12-09 04:16:18.317841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.767  [2024-12-09 04:16:18.317857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.767  [2024-12-09 04:16:18.318103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.767  [2024-12-09 04:16:18.318325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.767  [2024-12-09 04:16:18.318350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.767  [2024-12-09 04:16:18.318363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.767  [2024-12-09 04:16:18.318374] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.767  [2024-12-09 04:16:18.330409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.767  [2024-12-09 04:16:18.330899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.767  [2024-12-09 04:16:18.330925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.767  [2024-12-09 04:16:18.330956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.767  [2024-12-09 04:16:18.331202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.767  [2024-12-09 04:16:18.331432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.767  [2024-12-09 04:16:18.331453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.767  [2024-12-09 04:16:18.331466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.767  [2024-12-09 04:16:18.331478] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:50.026  [2024-12-09 04:16:18.343712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:50.026  [2024-12-09 04:16:18.344195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.026  [2024-12-09 04:16:18.344248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:50.026  [2024-12-09 04:16:18.344265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:50.026  [2024-12-09 04:16:18.344492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:50.026  [2024-12-09 04:16:18.344743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:50.026  [2024-12-09 04:16:18.344761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:50.026  [2024-12-09 04:16:18.344774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:50.026  [2024-12-09 04:16:18.344786] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:50.026  [2024-12-09 04:16:18.356808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:50.026  [2024-12-09 04:16:18.357302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.026  [2024-12-09 04:16:18.357329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:50.026  [2024-12-09 04:16:18.357360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:50.026  [2024-12-09 04:16:18.357612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:50.026  [2024-12-09 04:16:18.357823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:50.026  [2024-12-09 04:16:18.357841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:50.026  [2024-12-09 04:16:18.357853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:50.026  [2024-12-09 04:16:18.357868] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:50.026  [2024-12-09 04:16:18.369884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:50.026  [2024-12-09 04:16:18.370248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.026  [2024-12-09 04:16:18.370296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:50.026  [2024-12-09 04:16:18.370313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:50.026  [2024-12-09 04:16:18.370558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:50.026  [2024-12-09 04:16:18.370790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:50.026  [2024-12-09 04:16:18.370808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:50.026  [2024-12-09 04:16:18.370820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:50.026  [2024-12-09 04:16:18.370831] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:50.026  [2024-12-09 04:16:18.383073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:50.026  [2024-12-09 04:16:18.383536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.026  [2024-12-09 04:16:18.383583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:50.026  [2024-12-09 04:16:18.383599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:50.026  [2024-12-09 04:16:18.383843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:50.026  [2024-12-09 04:16:18.384038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:50.026  [2024-12-09 04:16:18.384056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:50.026  [2024-12-09 04:16:18.384069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:50.026  [2024-12-09 04:16:18.384080] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:50.026  [2024-12-09 04:16:18.396204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:50.026  [2024-12-09 04:16:18.396646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.026  [2024-12-09 04:16:18.396674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:50.026  [2024-12-09 04:16:18.396690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:50.026  [2024-12-09 04:16:18.396933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:50.026  [2024-12-09 04:16:18.397129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:50.026  [2024-12-09 04:16:18.397147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:50.026  [2024-12-09 04:16:18.397159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:50.026  [2024-12-09 04:16:18.397170] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:50.026  [2024-12-09 04:16:18.409234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:50.026  [2024-12-09 04:16:18.409583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.026  [2024-12-09 04:16:18.409610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:50.026  [2024-12-09 04:16:18.409625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:50.026  [2024-12-09 04:16:18.409851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:50.026  [2024-12-09 04:16:18.410047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:50.026  [2024-12-09 04:16:18.410065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:50.026  [2024-12-09 04:16:18.410077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:50.026  [2024-12-09 04:16:18.410089] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:50.026  [2024-12-09 04:16:18.422316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:50.026  [2024-12-09 04:16:18.422743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.026  [2024-12-09 04:16:18.422802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:50.026  [2024-12-09 04:16:18.422817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:50.026  [2024-12-09 04:16:18.423055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:50.026  [2024-12-09 04:16:18.423265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:50.026  [2024-12-09 04:16:18.423369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:50.026  [2024-12-09 04:16:18.423383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:50.026  [2024-12-09 04:16:18.423395] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:50.026  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 341089 Killed                  "${NVMF_APP[@]}" "$@"
00:25:50.026   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:25:50.026   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:25:50.026   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:25:50.026   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:50.026   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:50.026  [2024-12-09 04:16:18.435902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:50.026   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=342045
00:25:50.026   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:25:50.026   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 342045
00:25:50.026  [2024-12-09 04:16:18.436311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.026  [2024-12-09 04:16:18.436341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:50.026  [2024-12-09 04:16:18.436357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:50.026   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 342045 ']'
00:25:50.026  [2024-12-09 04:16:18.436588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:50.026   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:50.027  [2024-12-09 04:16:18.436813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:50.027  [2024-12-09 04:16:18.436833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:50.027  [2024-12-09 04:16:18.436849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:50.027  [2024-12-09 04:16:18.436862] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:50.027   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:50.027   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:50.027  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:50.027   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:50.027   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:50.027  [2024-12-09 04:16:18.449245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:50.027  [2024-12-09 04:16:18.449676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.027  [2024-12-09 04:16:18.449718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:50.027  [2024-12-09 04:16:18.449733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:50.027  [2024-12-09 04:16:18.449982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:50.027  [2024-12-09 04:16:18.450193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:50.027  [2024-12-09 04:16:18.450212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:50.027  [2024-12-09 04:16:18.450224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:50.027  [2024-12-09 04:16:18.450235] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:50.027  [2024-12-09 04:16:18.462526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:50.027  [2024-12-09 04:16:18.462997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.027  [2024-12-09 04:16:18.463050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:50.027  [2024-12-09 04:16:18.463066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:50.027  [2024-12-09 04:16:18.463326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:50.027  [2024-12-09 04:16:18.463549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:50.027  [2024-12-09 04:16:18.463570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:50.027  [2024-12-09 04:16:18.463583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:50.027  [2024-12-09 04:16:18.463596] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:50.027  [2024-12-09 04:16:18.475859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:50.027  [2024-12-09 04:16:18.476299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.027  [2024-12-09 04:16:18.476327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:50.027  [2024-12-09 04:16:18.476348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:50.027  [2024-12-09 04:16:18.476583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:50.027  [2024-12-09 04:16:18.476799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:50.027  [2024-12-09 04:16:18.476818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:50.027  [2024-12-09 04:16:18.476831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:50.027  [2024-12-09 04:16:18.476843] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:50.027  [2024-12-09 04:16:18.488162] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:25:50.027  [2024-12-09 04:16:18.488238] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:50.027  [2024-12-09 04:16:18.489237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:50.027  [2024-12-09 04:16:18.489612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.027  [2024-12-09 04:16:18.489640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:50.027  [2024-12-09 04:16:18.489656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:50.027  [2024-12-09 04:16:18.489881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:50.027  [2024-12-09 04:16:18.490093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:50.027  [2024-12-09 04:16:18.490112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:50.027  [2024-12-09 04:16:18.490124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:50.027  [2024-12-09 04:16:18.490136] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:50.027  [2024-12-09 04:16:18.502918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:50.027  [2024-12-09 04:16:18.503408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.027  [2024-12-09 04:16:18.503438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:50.027  [2024-12-09 04:16:18.503454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:50.027  [2024-12-09 04:16:18.503708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:50.027  [2024-12-09 04:16:18.503911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:50.027  [2024-12-09 04:16:18.503930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:50.027  [2024-12-09 04:16:18.503942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:50.027  [2024-12-09 04:16:18.503954] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:50.027  [2024-12-09 04:16:18.516500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:50.027  [2024-12-09 04:16:18.516960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.027  [2024-12-09 04:16:18.517007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:50.027  [2024-12-09 04:16:18.517023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:50.027  [2024-12-09 04:16:18.517307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:50.027  [2024-12-09 04:16:18.517522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:50.027  [2024-12-09 04:16:18.517543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:50.027  [2024-12-09 04:16:18.517557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:50.027  [2024-12-09 04:16:18.517576] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:50.027  [2024-12-09 04:16:18.529876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:50.027  [2024-12-09 04:16:18.530251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.027  [2024-12-09 04:16:18.530286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:50.027  [2024-12-09 04:16:18.530304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:50.027  [2024-12-09 04:16:18.530520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:50.027  [2024-12-09 04:16:18.530765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:50.027  [2024-12-09 04:16:18.530784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:50.027  [2024-12-09 04:16:18.530796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:50.027  [2024-12-09 04:16:18.530808] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:50.027  [2024-12-09 04:16:18.543342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:50.027  [2024-12-09 04:16:18.543744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.027  [2024-12-09 04:16:18.543787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:50.027  [2024-12-09 04:16:18.543802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:50.027  [2024-12-09 04:16:18.544060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:50.027  [2024-12-09 04:16:18.544314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:50.027  [2024-12-09 04:16:18.544353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:50.027  [2024-12-09 04:16:18.544367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:50.027  [2024-12-09 04:16:18.544380] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:50.027  [2024-12-09 04:16:18.556780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:50.027  [2024-12-09 04:16:18.557129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.027  [2024-12-09 04:16:18.557157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:50.027  [2024-12-09 04:16:18.557174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:50.027  [2024-12-09 04:16:18.557437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:50.028  [2024-12-09 04:16:18.557659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:50.028  [2024-12-09 04:16:18.557678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:50.028  [2024-12-09 04:16:18.557690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:50.028  [2024-12-09 04:16:18.557702] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:50.028  [2024-12-09 04:16:18.563542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:25:50.028  [2024-12-09 04:16:18.570160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:50.028  [2024-12-09 04:16:18.570595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.028  [2024-12-09 04:16:18.570628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:50.028  [2024-12-09 04:16:18.570646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:50.028  [2024-12-09 04:16:18.570894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:50.028  [2024-12-09 04:16:18.571098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:50.028  [2024-12-09 04:16:18.571117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:50.028  [2024-12-09 04:16:18.571131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:50.028  [2024-12-09 04:16:18.571145] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:50.028  [2024-12-09 04:16:18.583584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:50.028  [2024-12-09 04:16:18.584129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.028  [2024-12-09 04:16:18.584179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:50.028  [2024-12-09 04:16:18.584198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:50.028  [2024-12-09 04:16:18.584468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:50.028  [2024-12-09 04:16:18.584713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:50.028  [2024-12-09 04:16:18.584732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:50.028  [2024-12-09 04:16:18.584748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:50.028  [2024-12-09 04:16:18.584762] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:50.028  [2024-12-09 04:16:18.597101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:50.028  [2024-12-09 04:16:18.597467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.028  [2024-12-09 04:16:18.597495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:50.028  [2024-12-09 04:16:18.597512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:50.028  [2024-12-09 04:16:18.597731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:50.028  [2024-12-09 04:16:18.597972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:50.028  [2024-12-09 04:16:18.598002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:50.028  [2024-12-09 04:16:18.598015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:50.028  [2024-12-09 04:16:18.598027] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:50.285  [2024-12-09 04:16:18.610411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:50.285  [2024-12-09 04:16:18.610749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.285  [2024-12-09 04:16:18.610776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:50.285  [2024-12-09 04:16:18.610792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:50.285  [2024-12-09 04:16:18.611019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:50.285  [2024-12-09 04:16:18.611236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:50.285  [2024-12-09 04:16:18.611255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:50.285  [2024-12-09 04:16:18.611268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:50.285  [2024-12-09 04:16:18.611307] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:50.285  [2024-12-09 04:16:18.622498] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:50.285  [2024-12-09 04:16:18.622531] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:50.285  [2024-12-09 04:16:18.622559] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:50.285  [2024-12-09 04:16:18.622571] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:50.285  [2024-12-09 04:16:18.622592] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:50.285  [2024-12-09 04:16:18.623672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:50.285  [2024-12-09 04:16:18.624100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.285  [2024-12-09 04:16:18.624070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:25:50.285  [2024-12-09 04:16:18.624130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:50.285  [2024-12-09 04:16:18.624151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:50.285  [2024-12-09 04:16:18.624099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:25:50.285  [2024-12-09 04:16:18.624103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:25:50.285  [2024-12-09 04:16:18.624380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:50.285  [2024-12-09 04:16:18.624618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:50.285  [2024-12-09 04:16:18.624638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:50.285  [2024-12-09 04:16:18.624652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:50.285  [2024-12-09 04:16:18.624664] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:50.285  [2024-12-09 04:16:18.637228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:50.285  [2024-12-09 04:16:18.637792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.285  [2024-12-09 04:16:18.637841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:50.286  [2024-12-09 04:16:18.637861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:50.286  [2024-12-09 04:16:18.638102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:50.286  [2024-12-09 04:16:18.638337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:50.286  [2024-12-09 04:16:18.638359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:50.286  [2024-12-09 04:16:18.638375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:50.286  [2024-12-09 04:16:18.638390] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:50.286  [2024-12-09 04:16:18.650897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:50.286  [2024-12-09 04:16:18.651444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.286  [2024-12-09 04:16:18.651484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:50.286  [2024-12-09 04:16:18.651504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:50.286  [2024-12-09 04:16:18.651739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:50.286  [2024-12-09 04:16:18.651964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:50.286  [2024-12-09 04:16:18.651986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:50.286  [2024-12-09 04:16:18.652003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:50.286  [2024-12-09 04:16:18.652019] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:50.286  [2024-12-09 04:16:18.664606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:50.286  [2024-12-09 04:16:18.665132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.286  [2024-12-09 04:16:18.665171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:50.286  [2024-12-09 04:16:18.665191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:50.286  [2024-12-09 04:16:18.665428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:50.286  [2024-12-09 04:16:18.665668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:50.286  [2024-12-09 04:16:18.665689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:50.286  [2024-12-09 04:16:18.665704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:50.286  [2024-12-09 04:16:18.665720] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:50.286  [2024-12-09 04:16:18.678257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:50.286  [2024-12-09 04:16:18.678770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.286  [2024-12-09 04:16:18.678807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:50.286  [2024-12-09 04:16:18.678827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:50.286  [2024-12-09 04:16:18.679075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:50.286  [2024-12-09 04:16:18.679313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:50.286  [2024-12-09 04:16:18.679335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:50.286  [2024-12-09 04:16:18.679350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:50.286  [2024-12-09 04:16:18.679365] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:50.286  [2024-12-09 04:16:18.691814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:50.286  [2024-12-09 04:16:18.692404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.286  [2024-12-09 04:16:18.692449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:50.286  [2024-12-09 04:16:18.692469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:50.286  [2024-12-09 04:16:18.692713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:50.286  [2024-12-09 04:16:18.692932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:50.286  [2024-12-09 04:16:18.692953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:50.286  [2024-12-09 04:16:18.692968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:50.286  [2024-12-09 04:16:18.692984] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:50.286  [2024-12-09 04:16:18.705449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:50.286  [2024-12-09 04:16:18.706011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.286  [2024-12-09 04:16:18.706051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:50.286  [2024-12-09 04:16:18.706071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:50.286  [2024-12-09 04:16:18.706323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:50.286  [2024-12-09 04:16:18.706541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:50.286  [2024-12-09 04:16:18.706563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:50.286  [2024-12-09 04:16:18.706578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:50.286  [2024-12-09 04:16:18.706594] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:50.286  [2024-12-09 04:16:18.718948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:50.286  [2024-12-09 04:16:18.719292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.286  [2024-12-09 04:16:18.719320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:50.286  [2024-12-09 04:16:18.719337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:50.286  [2024-12-09 04:16:18.719555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:50.286  [2024-12-09 04:16:18.719777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:50.286  [2024-12-09 04:16:18.719808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:50.286  [2024-12-09 04:16:18.719822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:50.286  [2024-12-09 04:16:18.719835] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:50.286  [2024-12-09 04:16:18.732642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:50.286  [2024-12-09 04:16:18.732967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.286  [2024-12-09 04:16:18.732996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:50.286  [2024-12-09 04:16:18.733013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:50.286  [2024-12-09 04:16:18.733230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:50.286  [2024-12-09 04:16:18.733461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:50.286  [2024-12-09 04:16:18.733483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:50.286  [2024-12-09 04:16:18.733497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:50.286  [2024-12-09 04:16:18.733509] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:50.286   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:50.286   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:25:50.286   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:25:50.286   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:25:50.286   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:50.286  [2024-12-09 04:16:18.746435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:50.286  [2024-12-09 04:16:18.746819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.286  [2024-12-09 04:16:18.746848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:50.286  [2024-12-09 04:16:18.746864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:50.286  [2024-12-09 04:16:18.747096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:50.286  [2024-12-09 04:16:18.747341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:50.286  [2024-12-09 04:16:18.747363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:50.286  [2024-12-09 04:16:18.747377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:50.286  [2024-12-09 04:16:18.747390] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:50.286  [2024-12-09 04:16:18.759951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:50.286  [2024-12-09 04:16:18.760321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.286  [2024-12-09 04:16:18.760350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:50.286  [2024-12-09 04:16:18.760366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:50.286  [2024-12-09 04:16:18.760599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:50.286  [2024-12-09 04:16:18.760820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:50.286  [2024-12-09 04:16:18.760841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:50.286  [2024-12-09 04:16:18.760854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:50.286  [2024-12-09 04:16:18.760866] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:50.286   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:50.286   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:25:50.286   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:50.286   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:50.286  [2024-12-09 04:16:18.773629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:50.286  [2024-12-09 04:16:18.773787] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:50.286  [2024-12-09 04:16:18.774003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.286  [2024-12-09 04:16:18.774031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:50.286  [2024-12-09 04:16:18.774047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:50.286  [2024-12-09 04:16:18.774264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:50.286  [2024-12-09 04:16:18.774497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:50.286  [2024-12-09 04:16:18.774518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:50.286  [2024-12-09 04:16:18.774532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:50.286  [2024-12-09 04:16:18.774544] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:50.286   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:50.286   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:25:50.286   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:50.286   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:50.286  [2024-12-09 04:16:18.787361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:50.286  [2024-12-09 04:16:18.787792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.286  [2024-12-09 04:16:18.787822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:50.286  [2024-12-09 04:16:18.787840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:50.286  [2024-12-09 04:16:18.788075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:50.286  [2024-12-09 04:16:18.788321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:50.286  [2024-12-09 04:16:18.788343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:50.286  [2024-12-09 04:16:18.788358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:50.286  [2024-12-09 04:16:18.788373] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:50.286  [2024-12-09 04:16:18.800981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:50.286  [2024-12-09 04:16:18.801368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.286  [2024-12-09 04:16:18.801397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:50.286  [2024-12-09 04:16:18.801413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:50.286  [2024-12-09 04:16:18.801646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:50.286  [2024-12-09 04:16:18.801861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:50.286  [2024-12-09 04:16:18.801881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:50.286  [2024-12-09 04:16:18.801894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:50.286  [2024-12-09 04:16:18.801906] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:50.286  [2024-12-09 04:16:18.814655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:50.286  [2024-12-09 04:16:18.815066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.286  [2024-12-09 04:16:18.815099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:50.286  [2024-12-09 04:16:18.815116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:50.286  [2024-12-09 04:16:18.815348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:50.286  [2024-12-09 04:16:18.815597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:50.286  [2024-12-09 04:16:18.815618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:50.286  [2024-12-09 04:16:18.815633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:50.286  [2024-12-09 04:16:18.815647] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:50.286  Malloc0
00:25:50.286   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:50.286   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:50.286   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:50.286   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:50.286  [2024-12-09 04:16:18.828194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:50.286  [2024-12-09 04:16:18.828570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.286  [2024-12-09 04:16:18.828599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:50.286  [2024-12-09 04:16:18.828616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:50.286  [2024-12-09 04:16:18.828834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:50.286  [2024-12-09 04:16:18.829081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:50.286  [2024-12-09 04:16:18.829103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:50.286  [2024-12-09 04:16:18.829117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:50.286  [2024-12-09 04:16:18.829130] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:50.286   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:50.286   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:50.286   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:50.286   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:50.286   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:50.286   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:50.286   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:50.286   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:50.286  [2024-12-09 04:16:18.841382] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:50.286  [2024-12-09 04:16:18.841925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:50.286   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:50.286   04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 341384
00:25:50.542       3533.50 IOPS,    13.80 MiB/s
[2024-12-09T03:16:19.118Z] [2024-12-09 04:16:18.992728] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful.
00:25:52.428       4048.57 IOPS,    15.81 MiB/s
[2024-12-09T03:16:21.933Z]      4567.50 IOPS,    17.84 MiB/s
[2024-12-09T03:16:23.304Z]      4986.00 IOPS,    19.48 MiB/s
[2024-12-09T03:16:24.236Z]      5309.40 IOPS,    20.74 MiB/s
[2024-12-09T03:16:25.169Z]      5570.64 IOPS,    21.76 MiB/s
[2024-12-09T03:16:26.100Z]      5792.83 IOPS,    22.63 MiB/s
[2024-12-09T03:16:27.032Z]      5987.85 IOPS,    23.39 MiB/s
[2024-12-09T03:16:27.964Z]      6153.64 IOPS,    24.04 MiB/s
[2024-12-09T03:16:27.964Z]      6301.80 IOPS,    24.62 MiB/s
00:25:59.388                                                                                                  Latency(us)
00:25:59.388  
[2024-12-09T03:16:27.964Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:59.388  Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:59.388  	 Verification LBA range: start 0x0 length 0x4000
00:25:59.388  	 Nvme1n1             :      15.01    6300.85      24.61   10338.14     0.00    7668.20     807.06   20874.43
00:25:59.388  
[2024-12-09T03:16:27.964Z]  ===================================================================================================================
00:25:59.388  
[2024-12-09T03:16:27.964Z]  Total                       :               6300.85      24.61   10338.14     0.00    7668.20     807.06   20874.43
00:25:59.646   04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:25:59.646   04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:59.646   04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:59.646   04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:59.646   04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:59.646   04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:25:59.646   04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:25:59.646   04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:59.646   04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:25:59.646   04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:59.646   04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
00:25:59.646   04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:59.646   04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:59.646  rmmod nvme_tcp
00:25:59.646  rmmod nvme_fabrics
00:25:59.646  rmmod nvme_keyring
00:25:59.646   04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:59.646   04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
00:25:59.646   04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
00:25:59.646   04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 342045 ']'
00:25:59.646   04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 342045
00:25:59.646   04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 342045 ']'
00:25:59.646   04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 342045
00:25:59.646    04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname
00:25:59.646   04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:59.646    04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 342045
00:25:59.905   04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:25:59.905   04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:25:59.905   04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 342045'
00:25:59.905  killing process with pid 342045
00:25:59.905   04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 342045
00:25:59.905   04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 342045
00:25:59.905   04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:25:59.905   04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:25:59.905   04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:25:59.905   04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr
00:25:59.905   04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save
00:25:59.905   04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:25:59.905   04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore
00:25:59.905   04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:59.905   04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:25:59.905   04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:59.905   04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:59.905    04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:02.443   04:16:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:02.443  
00:26:02.443  real	0m22.842s
00:26:02.443  user	0m59.761s
00:26:02.443  sys	0m4.873s
00:26:02.443   04:16:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:02.443   04:16:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:02.443  ************************************
00:26:02.443  END TEST nvmf_bdevperf
00:26:02.443  ************************************
00:26:02.443   04:16:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:26:02.443   04:16:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:26:02.443   04:16:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:02.443   04:16:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:02.443  ************************************
00:26:02.443  START TEST nvmf_target_disconnect
00:26:02.443  ************************************
00:26:02.443   04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:26:02.443  * Looking for test storage...
00:26:02.443  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:26:02.443    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:26:02.443     04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version
00:26:02.443     04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:26:02.443    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:26:02.443    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:26:02.443    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l
00:26:02.443    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l
00:26:02.443    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-:
00:26:02.443    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1
00:26:02.443    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-:
00:26:02.443    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2
00:26:02.443    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<'
00:26:02.443    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2
00:26:02.443    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1
00:26:02.443    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:26:02.443    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in
00:26:02.443    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1
00:26:02.443    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 ))
00:26:02.443    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:26:02.443     04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1
00:26:02.443     04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1
00:26:02.443     04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:26:02.443     04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1
00:26:02.443    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1
00:26:02.443     04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2
00:26:02.443     04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2
00:26:02.443     04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:26:02.443     04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2
00:26:02.443    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2
00:26:02.443    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:26:02.443    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:26:02.443    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0
00:26:02.443    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:02.443    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:26:02.443  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:02.443  		--rc genhtml_branch_coverage=1
00:26:02.443  		--rc genhtml_function_coverage=1
00:26:02.443  		--rc genhtml_legend=1
00:26:02.443  		--rc geninfo_all_blocks=1
00:26:02.443  		--rc geninfo_unexecuted_blocks=1
00:26:02.443  		
00:26:02.443  		'
00:26:02.443    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:26:02.443  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:02.443  		--rc genhtml_branch_coverage=1
00:26:02.443  		--rc genhtml_function_coverage=1
00:26:02.443  		--rc genhtml_legend=1
00:26:02.443  		--rc geninfo_all_blocks=1
00:26:02.444  		--rc geninfo_unexecuted_blocks=1
00:26:02.444  		
00:26:02.444  		'
00:26:02.444    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:26:02.444  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:02.444  		--rc genhtml_branch_coverage=1
00:26:02.444  		--rc genhtml_function_coverage=1
00:26:02.444  		--rc genhtml_legend=1
00:26:02.444  		--rc geninfo_all_blocks=1
00:26:02.444  		--rc geninfo_unexecuted_blocks=1
00:26:02.444  		
00:26:02.444  		'
00:26:02.444    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:26:02.444  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:02.444  		--rc genhtml_branch_coverage=1
00:26:02.444  		--rc genhtml_function_coverage=1
00:26:02.444  		--rc genhtml_legend=1
00:26:02.444  		--rc geninfo_all_blocks=1
00:26:02.444  		--rc geninfo_unexecuted_blocks=1
00:26:02.444  		
00:26:02.444  		'
00:26:02.444   04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:26:02.444     04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s
00:26:02.444    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:02.444    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:02.444    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:02.444    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:02.444    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:02.444    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:02.444    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:02.444    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:02.444    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:02.444     04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:02.444    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:26:02.444    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:26:02.444    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:02.444    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:02.444    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:26:02.444    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:02.444    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:26:02.444     04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob
00:26:02.444     04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:02.444     04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:02.444     04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:02.444      04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:02.444      04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:02.444      04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:02.444      04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH
00:26:02.444      04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:02.444    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0
00:26:02.444    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:26:02.444    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:26:02.444    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:02.444    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:02.444    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:02.444    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:26:02.444  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:26:02.444    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:26:02.444    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:26:02.444    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0
00:26:02.444   04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
00:26:02.444   04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64
00:26:02.444   04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512
00:26:02.444   04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit
00:26:02.444   04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:26:02.444   04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:26:02.444   04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs
00:26:02.444   04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no
00:26:02.444   04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns
00:26:02.444   04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:02.444   04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:02.444    04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:02.444   04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:26:02.444   04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:26:02.444   04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable
00:26:02.444   04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:26:04.349   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:26:04.349   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=()
00:26:04.349   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs
00:26:04.349   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=()
00:26:04.349   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=()
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=()
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=()
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=()
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=()
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 ))
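The trace above shows nvmf/common.sh populating per-NIC-family arrays (e810, x722, mlx) from a `pci_bus_cache` keyed by vendor:device ID, then selecting one family as `pci_devs`. A minimal standalone sketch of that classification pattern — the device IDs match the trace, but the cache contents (PCI addresses) are invented here for illustration:

```shell
# Hypothetical cache mapping "vendor:device" -> PCI addresses, mirroring
# the pci_bus_cache lookups in nvmf/common.sh (addresses invented here).
declare -A pci_bus_cache=(
  ["0x8086:0x159b"]="0000:0a:00.0 0000:0a:00.1"   # Intel E810
  ["0x15b3:0x1017"]="0000:81:00.0"                # Mellanox ConnectX-5
)

e810=() x722=() mlx=()
# Missing keys expand to nothing, so absent hardware adds no entries.
e810+=(${pci_bus_cache["0x8086:0x1592"]})
e810+=(${pci_bus_cache["0x8086:0x159b"]})
x722+=(${pci_bus_cache["0x8086:0x37d2"]})
mlx+=(${pci_bus_cache["0x15b3:0x1017"]})

# As in the trace, the e810 family is selected as the device set.
pci_devs=("${e810[@]}")
echo "selected ${#pci_devs[@]} e810 device(s): ${pci_devs[*]}"
```

This matches the `(( 2 == 0 ))` check in the trace: two E810 ports were found, so the test proceeds.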
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:26:04.350  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:26:04.350  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]]
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:26:04.350  Found net devices under 0000:0a:00.0: cvl_0_0
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]]
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:26:04.350  Found net devices under 0000:0a:00.1: cvl_0_1
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:26:04.350  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:04.350  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms
00:26:04.350  
00:26:04.350  --- 10.0.0.2 ping statistics ---
00:26:04.350  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:04.350  rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:04.350  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:04.350  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms
00:26:04.350  
00:26:04.350  --- 10.0.0.1 ping statistics ---
00:26:04.350  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:04.350  rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms
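The `nvmf_tcp_init` steps traced above implement a common pattern for single-host NIC-to-NIC testing: move one port of the dual-port NIC into a network namespace so the kernel routes traffic over the physical link instead of short-circuiting it locally, then verify connectivity in both directions with ping. A dry-run sketch of that sequence — the `run` wrapper only prints, since the real commands require root; interface names and addresses are taken from the trace:

```shell
# Dry-run wrapper: print each command instead of executing it (real setup needs root).
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
TGT=cvl_0_0  INI=cvl_0_1
TGT_IP=10.0.0.2/24  INI_IP=10.0.0.1/24

run ip -4 addr flush "$TGT"
run ip -4 addr flush "$INI"
run ip netns add "$NS"
run ip link set "$TGT" netns "$NS"             # target port leaves the root namespace
run ip addr add "$INI_IP" dev "$INI"
run ip netns exec "$NS" ip addr add "$TGT_IP" dev "$TGT"
run ip link set "$INI" up
run ip netns exec "$NS" ip link set "$TGT" up
run ip netns exec "$NS" ip link set lo up
run ping -c 1 10.0.0.2                          # initiator -> target over the wire
run ip netns exec "$NS" ping -c 1 10.0.0.1      # target -> initiator
```

Because the two ports now live in different namespaces, the pings above must traverse the cable (TTL 64, sub-millisecond RTT in the trace), which is exactly what the disconnect tests need.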
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:26:04.350   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:26:04.351   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:04.351   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:26:04.351   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:26:04.351   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1
00:26:04.351   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:26:04.351   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:04.351   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:26:04.351  ************************************
00:26:04.351  START TEST nvmf_target_disconnect_tc1
00:26:04.351  ************************************
00:26:04.351   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1
00:26:04.351   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:26:04.351   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0
00:26:04.351   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:26:04.351   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:26:04.351   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:04.351    04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:26:04.351   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:04.351    04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:26:04.351   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:04.351   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:26:04.351   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]]
00:26:04.351   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:26:04.623  [2024-12-09 04:16:32.957860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.623  [2024-12-09 04:16:32.957923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x799f40 with addr=10.0.0.2, port=4420
00:26:04.623  [2024-12-09 04:16:32.957957] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:26:04.623  [2024-12-09 04:16:32.957978] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:26:04.623  [2024-12-09 04:16:32.957993] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed
00:26:04.623  spdk_nvme_probe() failed for transport address '10.0.0.2'
00:26:04.623  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred
00:26:04.623  Initializing NVMe Controllers
00:26:04.623   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1
00:26:04.623   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:26:04.623   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:26:04.623   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:26:04.623  
00:26:04.623  real	0m0.095s
00:26:04.623  user	0m0.039s
00:26:04.623  sys	0m0.056s
00:26:04.623   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:04.623   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x
00:26:04.623  ************************************
00:26:04.623  END TEST nvmf_target_disconnect_tc1
00:26:04.623  ************************************
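tc1 passes precisely because the run fails: the harness wraps the `reconnect` binary in `NOT`, a helper that succeeds only when the wrapped command exits nonzero (the `connect()` is expected to be refused with errno 111, since no target is listening yet). A minimal standalone version of that inversion helper — simplified from the real autotest_common.sh implementation, which additionally validates the argument with `valid_exec_arg` and treats signal exits (es > 128) as failures:

```shell
# Succeed iff the wrapped command fails (simplified NOT helper).
NOT() {
  if "$@"; then
    return 1   # command unexpectedly succeeded -> test failure
  fi
  return 0     # command failed, which is what we wanted
}

NOT false && echo "NOT false -> pass"
NOT true  || echo "NOT true  -> fail as expected"
```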
00:26:04.623   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2
00:26:04.623   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:26:04.623   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:04.623   04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:26:04.623  ************************************
00:26:04.623  START TEST nvmf_target_disconnect_tc2
00:26:04.623  ************************************
00:26:04.623   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2
00:26:04.623   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2
00:26:04.623   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:26:04.623   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:04.623   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:04.623   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:04.623   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=345207
00:26:04.623   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:26:04.623   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 345207
00:26:04.623   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 345207 ']'
00:26:04.623   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:04.623   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:04.623   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:04.623  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:04.623   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:04.623   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:04.623  [2024-12-09 04:16:33.076611] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:26:04.623  [2024-12-09 04:16:33.076701] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:04.623  [2024-12-09 04:16:33.148022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:26:04.881  [2024-12-09 04:16:33.204052] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:04.881  [2024-12-09 04:16:33.204109] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:04.881  [2024-12-09 04:16:33.204132] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:04.881  [2024-12-09 04:16:33.204142] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:04.881  [2024-12-09 04:16:33.204151] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:04.881  [2024-12-09 04:16:33.205719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:26:04.881  [2024-12-09 04:16:33.205783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:26:04.881  [2024-12-09 04:16:33.205888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:26:04.881  [2024-12-09 04:16:33.205897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:26:04.881   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:04.881   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:26:04.881   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:26:04.881   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:26:04.881   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:04.881   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:04.881   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:26:04.881   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:04.881   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:04.881  Malloc0
00:26:04.881   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:04.881   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:26:04.881   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:04.881   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:04.881  [2024-12-09 04:16:33.396581] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:04.881   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:04.881   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:26:04.881   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:04.881   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:04.881   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:04.881   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:26:04.881   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:04.881   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:04.881   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:04.881   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:04.881   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:04.881   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:04.881  [2024-12-09 04:16:33.424886] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:04.881   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:04.881   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:26:04.881   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:04.881   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:04.881   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:04.881   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=345231
00:26:04.881   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:26:04.881   04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
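The tc2 setup above configures the target entirely over the SPDK JSON-RPC socket: create a 64 MiB malloc bdev, a TCP transport, a subsystem, attach the namespace, and add data and discovery listeners. The same sequence can be sketched with `scripts/rpc.py`; the `rpc` function here is a stub that only prints, standing in for a live `nvmf_tgt` (NQN, serial, address, and port are copied from the trace):

```shell
# Stub standing in for scripts/rpc.py against a running nvmf_tgt.
rpc() { echo "rpc.py $*"; }

rpc bdev_malloc_create 64 512 -b Malloc0
rpc nvmf_create_transport -t tcp -o
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

Once the listeners are up ("NVMe/TCP Target Listening" in the trace), the reconnect workload is started against 10.0.0.2:4420, and the test then SIGKILLs the target (pid 345207) to provoke the CQ transport errors that follow.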
00:26:07.424   04:16:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 345207
00:26:07.424   04:16:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:26:07.424  Read completed with error (sct=0, sc=8)
00:26:07.424  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Write completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Write completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Write completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Write completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Write completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Write completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Write completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Write completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Write completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Write completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Write completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Write completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  [2024-12-09 04:16:35.451082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Write completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Write completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Write completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Write completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Write completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Write completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  [2024-12-09 04:16:35.451470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Write completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Write completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Write completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Write completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Write completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Write completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Write completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Write completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Write completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Write completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Write completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Write completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Write completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Write completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Write completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Write completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Write completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  Read completed with error (sct=0, sc=8)
00:26:07.425  starting I/O failed
00:26:07.425  [2024-12-09 04:16:35.451809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:07.425  [2024-12-09 04:16:35.452005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.425  [2024-12-09 04:16:35.452057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.425  qpair failed and we were unable to recover it.
00:26:07.425  [2024-12-09 04:16:35.452198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.425  [2024-12-09 04:16:35.452233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.425  qpair failed and we were unable to recover it.
00:26:07.425  [2024-12-09 04:16:35.452371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.425  [2024-12-09 04:16:35.452399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.425  qpair failed and we were unable to recover it.
00:26:07.425  [2024-12-09 04:16:35.452482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.426  [2024-12-09 04:16:35.452509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.426  qpair failed and we were unable to recover it.
00:26:07.426  [2024-12-09 04:16:35.452638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.426  [2024-12-09 04:16:35.452667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.426  qpair failed and we were unable to recover it.
00:26:07.426  [2024-12-09 04:16:35.452792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.426  [2024-12-09 04:16:35.452820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.426  qpair failed and we were unable to recover it.
00:26:07.426  [2024-12-09 04:16:35.452920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.426  [2024-12-09 04:16:35.452947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.426  qpair failed and we were unable to recover it.
00:26:07.426  [2024-12-09 04:16:35.453060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.426  [2024-12-09 04:16:35.453086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.426  qpair failed and we were unable to recover it.
00:26:07.426  [2024-12-09 04:16:35.453238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.426  [2024-12-09 04:16:35.453265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.426  qpair failed and we were unable to recover it.
00:26:07.426  [2024-12-09 04:16:35.453369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.426  [2024-12-09 04:16:35.453396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.426  qpair failed and we were unable to recover it.
00:26:07.426  [2024-12-09 04:16:35.453494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.426  [2024-12-09 04:16:35.453520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.426  qpair failed and we were unable to recover it.
00:26:07.426  [2024-12-09 04:16:35.453619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.426  [2024-12-09 04:16:35.453646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.426  qpair failed and we were unable to recover it.
00:26:07.426  [2024-12-09 04:16:35.453805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.426  [2024-12-09 04:16:35.453831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.426  qpair failed and we were unable to recover it.
00:26:07.426  [2024-12-09 04:16:35.453924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.426  [2024-12-09 04:16:35.453950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.426  qpair failed and we were unable to recover it.
00:26:07.426  [2024-12-09 04:16:35.454089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.426  [2024-12-09 04:16:35.454130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.426  qpair failed and we were unable to recover it.
00:26:07.426  [2024-12-09 04:16:35.454231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.426  [2024-12-09 04:16:35.454259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.426  qpair failed and we were unable to recover it.
00:26:07.426  [2024-12-09 04:16:35.454404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.426  [2024-12-09 04:16:35.454431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.426  qpair failed and we were unable to recover it.
00:26:07.426  [2024-12-09 04:16:35.454532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.426  [2024-12-09 04:16:35.454559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.426  qpair failed and we were unable to recover it.
00:26:07.426  [2024-12-09 04:16:35.454714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.426  [2024-12-09 04:16:35.454740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.426  qpair failed and we were unable to recover it.
00:26:07.426  [2024-12-09 04:16:35.454847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.426  [2024-12-09 04:16:35.454873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.426  qpair failed and we were unable to recover it.
00:26:07.426  [2024-12-09 04:16:35.455051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.426  [2024-12-09 04:16:35.455101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.426  qpair failed and we were unable to recover it.
00:26:07.426  [2024-12-09 04:16:35.455181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.426  [2024-12-09 04:16:35.455207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.426  qpair failed and we were unable to recover it.
00:26:07.426  [2024-12-09 04:16:35.455318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.426  [2024-12-09 04:16:35.455345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.426  qpair failed and we were unable to recover it.
00:26:07.426  [2024-12-09 04:16:35.455456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.426  [2024-12-09 04:16:35.455482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.426  qpair failed and we were unable to recover it.
00:26:07.426  [2024-12-09 04:16:35.455606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.426  [2024-12-09 04:16:35.455632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.426  qpair failed and we were unable to recover it.
00:26:07.426  [2024-12-09 04:16:35.455721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.426  [2024-12-09 04:16:35.455747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.426  qpair failed and we were unable to recover it.
00:26:07.426  [2024-12-09 04:16:35.455863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.426  [2024-12-09 04:16:35.455901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.426  qpair failed and we were unable to recover it.
00:26:07.426  [2024-12-09 04:16:35.455987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.426  [2024-12-09 04:16:35.456012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.426  qpair failed and we were unable to recover it.
00:26:07.426  [2024-12-09 04:16:35.456125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.426  [2024-12-09 04:16:35.456174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.426  qpair failed and we were unable to recover it.
00:26:07.426  [2024-12-09 04:16:35.456296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.426  [2024-12-09 04:16:35.456325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.426  qpair failed and we were unable to recover it.
00:26:07.426  [2024-12-09 04:16:35.456412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.426  [2024-12-09 04:16:35.456438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.426  qpair failed and we were unable to recover it.
00:26:07.426  [2024-12-09 04:16:35.456580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.426  [2024-12-09 04:16:35.456607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.426  qpair failed and we were unable to recover it.
00:26:07.426  [2024-12-09 04:16:35.456755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.426  [2024-12-09 04:16:35.456781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.426  qpair failed and we were unable to recover it.
00:26:07.426  [2024-12-09 04:16:35.456870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.426  [2024-12-09 04:16:35.456897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.426  qpair failed and we were unable to recover it.
00:26:07.426  [2024-12-09 04:16:35.457017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.426  [2024-12-09 04:16:35.457045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.426  qpair failed and we were unable to recover it.
00:26:07.426  [2024-12-09 04:16:35.457156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.426  [2024-12-09 04:16:35.457182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.426  qpair failed and we were unable to recover it.
00:26:07.426  [2024-12-09 04:16:35.457262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.426  [2024-12-09 04:16:35.457294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.426  qpair failed and we were unable to recover it.
00:26:07.426  [2024-12-09 04:16:35.457411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.426  [2024-12-09 04:16:35.457437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.426  qpair failed and we were unable to recover it.
00:26:07.426  [2024-12-09 04:16:35.457521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.426  [2024-12-09 04:16:35.457547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.426  qpair failed and we were unable to recover it.
00:26:07.426  [2024-12-09 04:16:35.457662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.426  [2024-12-09 04:16:35.457688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.426  qpair failed and we were unable to recover it.
00:26:07.426  [2024-12-09 04:16:35.457776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.427  [2024-12-09 04:16:35.457802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.427  qpair failed and we were unable to recover it.
00:26:07.427  [2024-12-09 04:16:35.457944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.427  [2024-12-09 04:16:35.457971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.427  qpair failed and we were unable to recover it.
00:26:07.427  [2024-12-09 04:16:35.458080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.427  [2024-12-09 04:16:35.458106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.427  qpair failed and we were unable to recover it.
00:26:07.427  [2024-12-09 04:16:35.458225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.427  [2024-12-09 04:16:35.458253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.427  qpair failed and we were unable to recover it.
00:26:07.427  [2024-12-09 04:16:35.458396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.427  [2024-12-09 04:16:35.458436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.427  qpair failed and we were unable to recover it.
00:26:07.427  [2024-12-09 04:16:35.458525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.427  [2024-12-09 04:16:35.458554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.427  qpair failed and we were unable to recover it.
00:26:07.427  [2024-12-09 04:16:35.458673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.427  [2024-12-09 04:16:35.458701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.427  qpair failed and we were unable to recover it.
00:26:07.427  [2024-12-09 04:16:35.458813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.427  [2024-12-09 04:16:35.458840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.427  qpair failed and we were unable to recover it.
00:26:07.427  [2024-12-09 04:16:35.458931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.427  [2024-12-09 04:16:35.458958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.427  qpair failed and we were unable to recover it.
00:26:07.427  [2024-12-09 04:16:35.459063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.427  [2024-12-09 04:16:35.459091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.427  qpair failed and we were unable to recover it.
00:26:07.427  [2024-12-09 04:16:35.459174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.427  [2024-12-09 04:16:35.459202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.427  qpair failed and we were unable to recover it.
00:26:07.427  [2024-12-09 04:16:35.459321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.427  [2024-12-09 04:16:35.459348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.427  qpair failed and we were unable to recover it.
00:26:07.427  [2024-12-09 04:16:35.459462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.427  [2024-12-09 04:16:35.459489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.427  qpair failed and we were unable to recover it.
00:26:07.427  [2024-12-09 04:16:35.459647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.427  [2024-12-09 04:16:35.459688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.427  qpair failed and we were unable to recover it.
00:26:07.427  [2024-12-09 04:16:35.459784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.427  [2024-12-09 04:16:35.459811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.427  qpair failed and we were unable to recover it.
00:26:07.427  [2024-12-09 04:16:35.459954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.427  [2024-12-09 04:16:35.459981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.427  qpair failed and we were unable to recover it.
00:26:07.427  [2024-12-09 04:16:35.460089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.427  [2024-12-09 04:16:35.460117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.427  qpair failed and we were unable to recover it.
00:26:07.427  [2024-12-09 04:16:35.460203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.427  [2024-12-09 04:16:35.460230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.427  qpair failed and we were unable to recover it.
00:26:07.427  [2024-12-09 04:16:35.460324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.427  [2024-12-09 04:16:35.460351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.427  qpair failed and we were unable to recover it.
00:26:07.427  [2024-12-09 04:16:35.460431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.427  [2024-12-09 04:16:35.460457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.427  qpair failed and we were unable to recover it.
00:26:07.427  [2024-12-09 04:16:35.460586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.427  [2024-12-09 04:16:35.460612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.427  qpair failed and we were unable to recover it.
00:26:07.427  [2024-12-09 04:16:35.460694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.427  [2024-12-09 04:16:35.460721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.427  qpair failed and we were unable to recover it.
00:26:07.427  [2024-12-09 04:16:35.460831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.427  [2024-12-09 04:16:35.460858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.427  qpair failed and we were unable to recover it.
00:26:07.427  [2024-12-09 04:16:35.460997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.427  [2024-12-09 04:16:35.461023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.427  qpair failed and we were unable to recover it.
00:26:07.427  [2024-12-09 04:16:35.461178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.427  [2024-12-09 04:16:35.461219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.427  qpair failed and we were unable to recover it.
00:26:07.427  [2024-12-09 04:16:35.461315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.427  [2024-12-09 04:16:35.461344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.427  qpair failed and we were unable to recover it.
00:26:07.427  [2024-12-09 04:16:35.461436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.427  [2024-12-09 04:16:35.461467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.427  qpair failed and we were unable to recover it.
00:26:07.427  [2024-12-09 04:16:35.461581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.427  [2024-12-09 04:16:35.461608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.427  qpair failed and we were unable to recover it.
00:26:07.427  [2024-12-09 04:16:35.461792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.427  [2024-12-09 04:16:35.461851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.427  qpair failed and we were unable to recover it.
00:26:07.427  [2024-12-09 04:16:35.461968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.427  [2024-12-09 04:16:35.461995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.427  qpair failed and we were unable to recover it.
00:26:07.427  [2024-12-09 04:16:35.462104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.427  [2024-12-09 04:16:35.462131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.427  qpair failed and we were unable to recover it.
00:26:07.427  [2024-12-09 04:16:35.462204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.427  [2024-12-09 04:16:35.462230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.427  qpair failed and we were unable to recover it.
00:26:07.427  [2024-12-09 04:16:35.462355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.427  [2024-12-09 04:16:35.462382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.427  qpair failed and we were unable to recover it.
00:26:07.427  [2024-12-09 04:16:35.462507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.427  [2024-12-09 04:16:35.462533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.427  qpair failed and we were unable to recover it.
00:26:07.427  [2024-12-09 04:16:35.462621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.427  [2024-12-09 04:16:35.462647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.427  qpair failed and we were unable to recover it.
00:26:07.427  [2024-12-09 04:16:35.462740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.427  [2024-12-09 04:16:35.462765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.427  qpair failed and we were unable to recover it.
00:26:07.428  [2024-12-09 04:16:35.462881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428  [2024-12-09 04:16:35.462910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.428  qpair failed and we were unable to recover it.
00:26:07.428  [2024-12-09 04:16:35.463019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428  [2024-12-09 04:16:35.463046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.428  qpair failed and we were unable to recover it.
00:26:07.428  [2024-12-09 04:16:35.463161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428  [2024-12-09 04:16:35.463187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.428  qpair failed and we were unable to recover it.
00:26:07.428  [2024-12-09 04:16:35.463319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428  [2024-12-09 04:16:35.463347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.428  qpair failed and we were unable to recover it.
00:26:07.428  [2024-12-09 04:16:35.463484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428  [2024-12-09 04:16:35.463525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.428  qpair failed and we were unable to recover it.
00:26:07.428  [2024-12-09 04:16:35.463627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428  [2024-12-09 04:16:35.463656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.428  qpair failed and we were unable to recover it.
00:26:07.428  [2024-12-09 04:16:35.463750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428  [2024-12-09 04:16:35.463779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.428  qpair failed and we were unable to recover it.
00:26:07.428  [2024-12-09 04:16:35.463895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428  [2024-12-09 04:16:35.463922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.428  qpair failed and we were unable to recover it.
00:26:07.428  [2024-12-09 04:16:35.464037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428  [2024-12-09 04:16:35.464064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.428  qpair failed and we were unable to recover it.
00:26:07.428  [2024-12-09 04:16:35.464187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428  [2024-12-09 04:16:35.464215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.428  qpair failed and we were unable to recover it.
00:26:07.428  [2024-12-09 04:16:35.464326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428  [2024-12-09 04:16:35.464353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.428  qpair failed and we were unable to recover it.
00:26:07.428  [2024-12-09 04:16:35.464439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428  [2024-12-09 04:16:35.464466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.428  qpair failed and we were unable to recover it.
00:26:07.428  [2024-12-09 04:16:35.464573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428  [2024-12-09 04:16:35.464600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.428  qpair failed and we were unable to recover it.
00:26:07.428  [2024-12-09 04:16:35.464746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428  [2024-12-09 04:16:35.464772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.428  qpair failed and we were unable to recover it.
00:26:07.428  [2024-12-09 04:16:35.464857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428  [2024-12-09 04:16:35.464884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.428  qpair failed and we were unable to recover it.
00:26:07.428  [2024-12-09 04:16:35.464994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428  [2024-12-09 04:16:35.465020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.428  qpair failed and we were unable to recover it.
00:26:07.428  [2024-12-09 04:16:35.465142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428  [2024-12-09 04:16:35.465171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.428  qpair failed and we were unable to recover it.
00:26:07.428  [2024-12-09 04:16:35.465294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428  [2024-12-09 04:16:35.465326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.428  qpair failed and we were unable to recover it.
00:26:07.428  [2024-12-09 04:16:35.465443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428  [2024-12-09 04:16:35.465469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.428  qpair failed and we were unable to recover it.
00:26:07.428  [2024-12-09 04:16:35.465554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428  [2024-12-09 04:16:35.465581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.428  qpair failed and we were unable to recover it.
00:26:07.428  [2024-12-09 04:16:35.465718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428  [2024-12-09 04:16:35.465744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.428  qpair failed and we were unable to recover it.
00:26:07.428  [2024-12-09 04:16:35.465936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428  [2024-12-09 04:16:35.465963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.428  qpair failed and we were unable to recover it.
00:26:07.428  [2024-12-09 04:16:35.466104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428  [2024-12-09 04:16:35.466131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.428  qpair failed and we were unable to recover it.
00:26:07.428  [2024-12-09 04:16:35.466249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428  [2024-12-09 04:16:35.466289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.428  qpair failed and we were unable to recover it.
00:26:07.428  [2024-12-09 04:16:35.466374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428  [2024-12-09 04:16:35.466400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.428  qpair failed and we were unable to recover it.
00:26:07.428  [2024-12-09 04:16:35.466487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428  [2024-12-09 04:16:35.466513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.428  qpair failed and we were unable to recover it.
00:26:07.428  [2024-12-09 04:16:35.466625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428  [2024-12-09 04:16:35.466653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.428  qpair failed and we were unable to recover it.
00:26:07.428  [2024-12-09 04:16:35.466779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428  [2024-12-09 04:16:35.466808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.428  qpair failed and we were unable to recover it.
00:26:07.428  [2024-12-09 04:16:35.466949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428  [2024-12-09 04:16:35.466975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.428  qpair failed and we were unable to recover it.
00:26:07.428  [2024-12-09 04:16:35.467056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428  [2024-12-09 04:16:35.467082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.428  qpair failed and we were unable to recover it.
00:26:07.428  [2024-12-09 04:16:35.467191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428  [2024-12-09 04:16:35.467217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.428  qpair failed and we were unable to recover it.
00:26:07.428  [2024-12-09 04:16:35.467340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428  [2024-12-09 04:16:35.467367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.428  qpair failed and we were unable to recover it.
00:26:07.428  [2024-12-09 04:16:35.467483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428  [2024-12-09 04:16:35.467511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.428  qpair failed and we were unable to recover it.
00:26:07.428  [2024-12-09 04:16:35.467626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428  [2024-12-09 04:16:35.467654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.428  qpair failed and we were unable to recover it.
00:26:07.428  [2024-12-09 04:16:35.467744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428  [2024-12-09 04:16:35.467771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.428  qpair failed and we were unable to recover it.
00:26:07.428  [2024-12-09 04:16:35.467882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428  [2024-12-09 04:16:35.467909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.428  qpair failed and we were unable to recover it.
00:26:07.428  [2024-12-09 04:16:35.467987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428  [2024-12-09 04:16:35.468014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.428  qpair failed and we were unable to recover it.
00:26:07.428  [2024-12-09 04:16:35.468102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429  [2024-12-09 04:16:35.468130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.429  qpair failed and we were unable to recover it.
00:26:07.429  [2024-12-09 04:16:35.468268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429  [2024-12-09 04:16:35.468304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.429  qpair failed and we were unable to recover it.
00:26:07.429  [2024-12-09 04:16:35.468397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429  [2024-12-09 04:16:35.468425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.429  qpair failed and we were unable to recover it.
00:26:07.429  [2024-12-09 04:16:35.468551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429  [2024-12-09 04:16:35.468591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.429  qpair failed and we were unable to recover it.
00:26:07.429  [2024-12-09 04:16:35.468713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429  [2024-12-09 04:16:35.468741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.429  qpair failed and we were unable to recover it.
00:26:07.429  [2024-12-09 04:16:35.468854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429  [2024-12-09 04:16:35.468881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.429  qpair failed and we were unable to recover it.
00:26:07.429  [2024-12-09 04:16:35.468965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429  [2024-12-09 04:16:35.468991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.429  qpair failed and we were unable to recover it.
00:26:07.429  [2024-12-09 04:16:35.469066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429  [2024-12-09 04:16:35.469098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.429  qpair failed and we were unable to recover it.
00:26:07.429  [2024-12-09 04:16:35.469228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429  [2024-12-09 04:16:35.469268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.429  qpair failed and we were unable to recover it.
00:26:07.429  [2024-12-09 04:16:35.469373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429  [2024-12-09 04:16:35.469402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.429  qpair failed and we were unable to recover it.
00:26:07.429  [2024-12-09 04:16:35.469489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429  [2024-12-09 04:16:35.469516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.429  qpair failed and we were unable to recover it.
00:26:07.429  [2024-12-09 04:16:35.469605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429  [2024-12-09 04:16:35.469634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.429  qpair failed and we were unable to recover it.
00:26:07.429  [2024-12-09 04:16:35.469725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429  [2024-12-09 04:16:35.469752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.429  qpair failed and we were unable to recover it.
00:26:07.429  [2024-12-09 04:16:35.469889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429  [2024-12-09 04:16:35.469916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.429  qpair failed and we were unable to recover it.
00:26:07.429  [2024-12-09 04:16:35.470025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429  [2024-12-09 04:16:35.470052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.429  qpair failed and we were unable to recover it.
00:26:07.429  [2024-12-09 04:16:35.470158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429  [2024-12-09 04:16:35.470185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.429  qpair failed and we were unable to recover it.
00:26:07.429  [2024-12-09 04:16:35.470303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429  [2024-12-09 04:16:35.470331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.429  qpair failed and we were unable to recover it.
00:26:07.429  [2024-12-09 04:16:35.470455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429  [2024-12-09 04:16:35.470483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.429  qpair failed and we were unable to recover it.
00:26:07.429  [2024-12-09 04:16:35.470640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429  [2024-12-09 04:16:35.470670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.429  qpair failed and we were unable to recover it.
00:26:07.429  [2024-12-09 04:16:35.470788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429  [2024-12-09 04:16:35.470816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.429  qpair failed and we were unable to recover it.
00:26:07.429  [2024-12-09 04:16:35.470956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429  [2024-12-09 04:16:35.470983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.429  qpair failed and we were unable to recover it.
00:26:07.429  [2024-12-09 04:16:35.471101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429  [2024-12-09 04:16:35.471129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.429  qpair failed and we were unable to recover it.
00:26:07.429  [2024-12-09 04:16:35.471211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429  [2024-12-09 04:16:35.471237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.429  qpair failed and we were unable to recover it.
00:26:07.429  [2024-12-09 04:16:35.471359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429  [2024-12-09 04:16:35.471387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.429  qpair failed and we were unable to recover it.
00:26:07.429  [2024-12-09 04:16:35.471477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429  [2024-12-09 04:16:35.471505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.429  qpair failed and we were unable to recover it.
00:26:07.429  [2024-12-09 04:16:35.471597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429  [2024-12-09 04:16:35.471624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.429  qpair failed and we were unable to recover it.
00:26:07.429  [2024-12-09 04:16:35.471767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429  [2024-12-09 04:16:35.471794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.429  qpair failed and we were unable to recover it.
00:26:07.429  [2024-12-09 04:16:35.471880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429  [2024-12-09 04:16:35.471907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.429  qpair failed and we were unable to recover it.
00:26:07.429  [2024-12-09 04:16:35.471992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429  [2024-12-09 04:16:35.472018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.429  qpair failed and we were unable to recover it.
00:26:07.429  [2024-12-09 04:16:35.472103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429  [2024-12-09 04:16:35.472128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.429  qpair failed and we were unable to recover it.
00:26:07.429  [2024-12-09 04:16:35.472204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429  [2024-12-09 04:16:35.472229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.429  qpair failed and we were unable to recover it.
00:26:07.429  [2024-12-09 04:16:35.472386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429  [2024-12-09 04:16:35.472412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.429  qpair failed and we were unable to recover it.
00:26:07.429  [2024-12-09 04:16:35.472496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429  [2024-12-09 04:16:35.472522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.430  qpair failed and we were unable to recover it.
00:26:07.430  [2024-12-09 04:16:35.472643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.430  [2024-12-09 04:16:35.472671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.430  qpair failed and we were unable to recover it.
00:26:07.430  Read completed with error (sct=0, sc=8)
00:26:07.430  starting I/O failed
00:26:07.430  Read completed with error (sct=0, sc=8)
00:26:07.430  starting I/O failed
00:26:07.430  Read completed with error (sct=0, sc=8)
00:26:07.430  starting I/O failed
00:26:07.430  Read completed with error (sct=0, sc=8)
00:26:07.430  starting I/O failed
00:26:07.430  Read completed with error (sct=0, sc=8)
00:26:07.430  starting I/O failed
00:26:07.430  Read completed with error (sct=0, sc=8)
00:26:07.430  starting I/O failed
00:26:07.430  Read completed with error (sct=0, sc=8)
00:26:07.430  starting I/O failed
00:26:07.430  Read completed with error (sct=0, sc=8)
00:26:07.430  starting I/O failed
00:26:07.430  Read completed with error (sct=0, sc=8)
00:26:07.430  starting I/O failed
00:26:07.430  Read completed with error (sct=0, sc=8)
00:26:07.430  starting I/O failed
00:26:07.430  Read completed with error (sct=0, sc=8)
00:26:07.430  starting I/O failed
00:26:07.430  Read completed with error (sct=0, sc=8)
00:26:07.430  starting I/O failed
00:26:07.430  Read completed with error (sct=0, sc=8)
00:26:07.430  starting I/O failed
00:26:07.430  Write completed with error (sct=0, sc=8)
00:26:07.430  starting I/O failed
00:26:07.430  Read completed with error (sct=0, sc=8)
00:26:07.430  starting I/O failed
00:26:07.430  Read completed with error (sct=0, sc=8)
00:26:07.430  starting I/O failed
00:26:07.430  Write completed with error (sct=0, sc=8)
00:26:07.430  starting I/O failed
00:26:07.430  Read completed with error (sct=0, sc=8)
00:26:07.430  starting I/O failed
00:26:07.430  Write completed with error (sct=0, sc=8)
00:26:07.430  starting I/O failed
00:26:07.430  Write completed with error (sct=0, sc=8)
00:26:07.430  starting I/O failed
00:26:07.430  Read completed with error (sct=0, sc=8)
00:26:07.430  starting I/O failed
00:26:07.430  Read completed with error (sct=0, sc=8)
00:26:07.430  starting I/O failed
00:26:07.430  Write completed with error (sct=0, sc=8)
00:26:07.430  starting I/O failed
00:26:07.430  Read completed with error (sct=0, sc=8)
00:26:07.430  starting I/O failed
00:26:07.430  Write completed with error (sct=0, sc=8)
00:26:07.430  starting I/O failed
00:26:07.430  Read completed with error (sct=0, sc=8)
00:26:07.430  starting I/O failed
00:26:07.430  Write completed with error (sct=0, sc=8)
00:26:07.430  starting I/O failed
00:26:07.430  Write completed with error (sct=0, sc=8)
00:26:07.430  starting I/O failed
00:26:07.430  Write completed with error (sct=0, sc=8)
00:26:07.430  starting I/O failed
00:26:07.430  Read completed with error (sct=0, sc=8)
00:26:07.430  starting I/O failed
00:26:07.430  Write completed with error (sct=0, sc=8)
00:26:07.430  starting I/O failed
00:26:07.430  Write completed with error (sct=0, sc=8)
00:26:07.430  starting I/O failed
00:26:07.430  [2024-12-09 04:16:35.472978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:07.430  [2024-12-09 04:16:35.473041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.430  [2024-12-09 04:16:35.473069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.430  qpair failed and we were unable to recover it.
00:26:07.430  [2024-12-09 04:16:35.473155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.430  [2024-12-09 04:16:35.473181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.430  qpair failed and we were unable to recover it.
00:26:07.430  [2024-12-09 04:16:35.473298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.430  [2024-12-09 04:16:35.473325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.430  qpair failed and we were unable to recover it.
00:26:07.430  [2024-12-09 04:16:35.473412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.430  [2024-12-09 04:16:35.473438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.430  qpair failed and we were unable to recover it.
00:26:07.430  [2024-12-09 04:16:35.473589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.430  [2024-12-09 04:16:35.473615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.430  qpair failed and we were unable to recover it.
00:26:07.430  [2024-12-09 04:16:35.473838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.430  [2024-12-09 04:16:35.473890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.430  qpair failed and we were unable to recover it.
00:26:07.430  [2024-12-09 04:16:35.473995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.430  [2024-12-09 04:16:35.474026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.430  qpair failed and we were unable to recover it.
00:26:07.430  [2024-12-09 04:16:35.474144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.430  [2024-12-09 04:16:35.474171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.430  qpair failed and we were unable to recover it.
00:26:07.430  [2024-12-09 04:16:35.474300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.430  [2024-12-09 04:16:35.474339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.430  qpair failed and we were unable to recover it.
00:26:07.430  [2024-12-09 04:16:35.474460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.430  [2024-12-09 04:16:35.474488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.430  qpair failed and we were unable to recover it.
00:26:07.430  [... previous three-line sequence (posix.c:1054:posix_sock_create connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error / "qpair failed and we were unable to recover it.") repeats ~103 more times between 04:16:35.474584 and 04:16:35.489138, cycling through tqpair values 0x1818fa0, 0x7fe5b4000b90, 0x7fe5b8000b90, and 0x7fe5c0000b90, all with addr=10.0.0.2, port=4420 ...]
00:26:07.433  qpair failed and we were unable to recover it.
00:26:07.433  [2024-12-09 04:16:35.489261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.433  [2024-12-09 04:16:35.489298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.433  qpair failed and we were unable to recover it.
00:26:07.433  [2024-12-09 04:16:35.489421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.433  [2024-12-09 04:16:35.489448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.433  qpair failed and we were unable to recover it.
00:26:07.433  [2024-12-09 04:16:35.489541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.433  [2024-12-09 04:16:35.489566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.433  qpair failed and we were unable to recover it.
00:26:07.433  [2024-12-09 04:16:35.489784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.433  [2024-12-09 04:16:35.489812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.433  qpair failed and we were unable to recover it.
00:26:07.433  [2024-12-09 04:16:35.489927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.433  [2024-12-09 04:16:35.489953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.433  qpair failed and we were unable to recover it.
00:26:07.433  [2024-12-09 04:16:35.490082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.433  [2024-12-09 04:16:35.490109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.433  qpair failed and we were unable to recover it.
00:26:07.433  [2024-12-09 04:16:35.490224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.433  [2024-12-09 04:16:35.490250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.433  qpair failed and we were unable to recover it.
00:26:07.433  [2024-12-09 04:16:35.490396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.433  [2024-12-09 04:16:35.490422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.433  qpair failed and we were unable to recover it.
00:26:07.433  [2024-12-09 04:16:35.490511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.433  [2024-12-09 04:16:35.490537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.433  qpair failed and we were unable to recover it.
00:26:07.433  [2024-12-09 04:16:35.490628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.433  [2024-12-09 04:16:35.490654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.433  qpair failed and we were unable to recover it.
00:26:07.433  [2024-12-09 04:16:35.490771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.433  [2024-12-09 04:16:35.490797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.433  qpair failed and we were unable to recover it.
00:26:07.433  [2024-12-09 04:16:35.490886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.433  [2024-12-09 04:16:35.490914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.433  qpair failed and we were unable to recover it.
00:26:07.433  [2024-12-09 04:16:35.491030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.433  [2024-12-09 04:16:35.491057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.433  qpair failed and we were unable to recover it.
00:26:07.433  [2024-12-09 04:16:35.491174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.433  [2024-12-09 04:16:35.491205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.433  qpair failed and we were unable to recover it.
00:26:07.433  [2024-12-09 04:16:35.491349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.433  [2024-12-09 04:16:35.491376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.433  qpair failed and we were unable to recover it.
00:26:07.433  [2024-12-09 04:16:35.491494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.433  [2024-12-09 04:16:35.491519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.433  qpair failed and we were unable to recover it.
00:26:07.433  [2024-12-09 04:16:35.491596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.433  [2024-12-09 04:16:35.491621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.433  qpair failed and we were unable to recover it.
00:26:07.433  [2024-12-09 04:16:35.491762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.433  [2024-12-09 04:16:35.491787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.433  qpair failed and we were unable to recover it.
00:26:07.433  [2024-12-09 04:16:35.491901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.433  [2024-12-09 04:16:35.491927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.433  qpair failed and we were unable to recover it.
00:26:07.433  [2024-12-09 04:16:35.492040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.433  [2024-12-09 04:16:35.492065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.433  qpair failed and we were unable to recover it.
00:26:07.433  [2024-12-09 04:16:35.492169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.433  [2024-12-09 04:16:35.492196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.433  qpair failed and we were unable to recover it.
00:26:07.433  [2024-12-09 04:16:35.492283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.433  [2024-12-09 04:16:35.492310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.433  qpair failed and we were unable to recover it.
00:26:07.433  [2024-12-09 04:16:35.492397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.433  [2024-12-09 04:16:35.492424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.433  qpair failed and we were unable to recover it.
00:26:07.434  [2024-12-09 04:16:35.492515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.434  [2024-12-09 04:16:35.492540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.434  qpair failed and we were unable to recover it.
00:26:07.434  [2024-12-09 04:16:35.492659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.434  [2024-12-09 04:16:35.492685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.434  qpair failed and we were unable to recover it.
00:26:07.434  [2024-12-09 04:16:35.492793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.434  [2024-12-09 04:16:35.492818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.434  qpair failed and we were unable to recover it.
00:26:07.434  [2024-12-09 04:16:35.492935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.434  [2024-12-09 04:16:35.492960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.434  qpair failed and we were unable to recover it.
00:26:07.434  [2024-12-09 04:16:35.493038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.434  [2024-12-09 04:16:35.493063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.434  qpair failed and we were unable to recover it.
00:26:07.434  [2024-12-09 04:16:35.493190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.434  [2024-12-09 04:16:35.493229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.434  qpair failed and we were unable to recover it.
00:26:07.434  [2024-12-09 04:16:35.493335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.434  [2024-12-09 04:16:35.493363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.434  qpair failed and we were unable to recover it.
00:26:07.434  [2024-12-09 04:16:35.493458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.434  [2024-12-09 04:16:35.493483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.434  qpair failed and we were unable to recover it.
00:26:07.434  [2024-12-09 04:16:35.493593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.434  [2024-12-09 04:16:35.493618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.434  qpair failed and we were unable to recover it.
00:26:07.434  [2024-12-09 04:16:35.493823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.434  [2024-12-09 04:16:35.493849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.434  qpair failed and we were unable to recover it.
00:26:07.434  [2024-12-09 04:16:35.493936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.434  [2024-12-09 04:16:35.493961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.434  qpair failed and we were unable to recover it.
00:26:07.434  [2024-12-09 04:16:35.494062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.434  [2024-12-09 04:16:35.494089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.434  qpair failed and we were unable to recover it.
00:26:07.434  [2024-12-09 04:16:35.494197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.434  [2024-12-09 04:16:35.494222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.434  qpair failed and we were unable to recover it.
00:26:07.434  [2024-12-09 04:16:35.494334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.434  [2024-12-09 04:16:35.494361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.434  qpair failed and we were unable to recover it.
00:26:07.434  [2024-12-09 04:16:35.494446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.434  [2024-12-09 04:16:35.494471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.434  qpair failed and we were unable to recover it.
00:26:07.434  [2024-12-09 04:16:35.494587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.434  [2024-12-09 04:16:35.494613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.434  qpair failed and we were unable to recover it.
00:26:07.434  [2024-12-09 04:16:35.494752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.434  [2024-12-09 04:16:35.494777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.434  qpair failed and we were unable to recover it.
00:26:07.434  [2024-12-09 04:16:35.494910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.434  [2024-12-09 04:16:35.494935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.434  qpair failed and we were unable to recover it.
00:26:07.434  [2024-12-09 04:16:35.495044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.434  [2024-12-09 04:16:35.495070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.434  qpair failed and we were unable to recover it.
00:26:07.434  [2024-12-09 04:16:35.495195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.434  [2024-12-09 04:16:35.495234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.434  qpair failed and we were unable to recover it.
00:26:07.434  [2024-12-09 04:16:35.495359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.434  [2024-12-09 04:16:35.495387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.434  qpair failed and we were unable to recover it.
00:26:07.434  [2024-12-09 04:16:35.495480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.434  [2024-12-09 04:16:35.495507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.434  qpair failed and we were unable to recover it.
00:26:07.434  [2024-12-09 04:16:35.495620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.434  [2024-12-09 04:16:35.495646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.434  qpair failed and we were unable to recover it.
00:26:07.434  [2024-12-09 04:16:35.495731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.434  [2024-12-09 04:16:35.495757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.434  qpair failed and we were unable to recover it.
00:26:07.434  [2024-12-09 04:16:35.495898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.434  [2024-12-09 04:16:35.495923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.434  qpair failed and we were unable to recover it.
00:26:07.434  [2024-12-09 04:16:35.496060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.434  [2024-12-09 04:16:35.496120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.434  qpair failed and we were unable to recover it.
00:26:07.434  [2024-12-09 04:16:35.496214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.434  [2024-12-09 04:16:35.496240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.434  qpair failed and we were unable to recover it.
00:26:07.434  [2024-12-09 04:16:35.496377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.434  [2024-12-09 04:16:35.496416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.434  qpair failed and we were unable to recover it.
00:26:07.434  [2024-12-09 04:16:35.496516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.434  [2024-12-09 04:16:35.496543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.434  qpair failed and we were unable to recover it.
00:26:07.434  [2024-12-09 04:16:35.496664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.434  [2024-12-09 04:16:35.496690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.434  qpair failed and we were unable to recover it.
00:26:07.435  [2024-12-09 04:16:35.496803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.435  [2024-12-09 04:16:35.496836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.435  qpair failed and we were unable to recover it.
00:26:07.435  [2024-12-09 04:16:35.496944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.435  [2024-12-09 04:16:35.496969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.435  qpair failed and we were unable to recover it.
00:26:07.435  [2024-12-09 04:16:35.497096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.435  [2024-12-09 04:16:35.497121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.435  qpair failed and we were unable to recover it.
00:26:07.435  [2024-12-09 04:16:35.497234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.435  [2024-12-09 04:16:35.497262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.435  qpair failed and we were unable to recover it.
00:26:07.435  [2024-12-09 04:16:35.497409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.435  [2024-12-09 04:16:35.497435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.435  qpair failed and we were unable to recover it.
00:26:07.435  [2024-12-09 04:16:35.497549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.435  [2024-12-09 04:16:35.497574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.435  qpair failed and we were unable to recover it.
00:26:07.435  [2024-12-09 04:16:35.497685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.435  [2024-12-09 04:16:35.497711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.435  qpair failed and we were unable to recover it.
00:26:07.435  [2024-12-09 04:16:35.497819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.435  [2024-12-09 04:16:35.497844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.435  qpair failed and we were unable to recover it.
00:26:07.435  [2024-12-09 04:16:35.497934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.435  [2024-12-09 04:16:35.497960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.435  qpair failed and we were unable to recover it.
00:26:07.435  [2024-12-09 04:16:35.498058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.435  [2024-12-09 04:16:35.498084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.435  qpair failed and we were unable to recover it.
00:26:07.435  [2024-12-09 04:16:35.498204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.435  [2024-12-09 04:16:35.498241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.435  qpair failed and we were unable to recover it.
00:26:07.435  [2024-12-09 04:16:35.498368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.435  [2024-12-09 04:16:35.498396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.435  qpair failed and we were unable to recover it.
00:26:07.435  [2024-12-09 04:16:35.498511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.435  [2024-12-09 04:16:35.498538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.435  qpair failed and we were unable to recover it.
00:26:07.435  [2024-12-09 04:16:35.498676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.435  [2024-12-09 04:16:35.498702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.435  qpair failed and we were unable to recover it.
00:26:07.435  [2024-12-09 04:16:35.498825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.435  [2024-12-09 04:16:35.498851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.435  qpair failed and we were unable to recover it.
00:26:07.435  [2024-12-09 04:16:35.498937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.435  [2024-12-09 04:16:35.498963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.435  qpair failed and we were unable to recover it.
00:26:07.435  [2024-12-09 04:16:35.499068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.435  [2024-12-09 04:16:35.499094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.435  qpair failed and we were unable to recover it.
00:26:07.435  [2024-12-09 04:16:35.499181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.435  [2024-12-09 04:16:35.499207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.435  qpair failed and we were unable to recover it.
00:26:07.435  [2024-12-09 04:16:35.499329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.435  [2024-12-09 04:16:35.499358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.435  qpair failed and we were unable to recover it.
00:26:07.435  [2024-12-09 04:16:35.499505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.435  [2024-12-09 04:16:35.499531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.435  qpair failed and we were unable to recover it.
00:26:07.435  [2024-12-09 04:16:35.499643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.435  [2024-12-09 04:16:35.499669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.435  qpair failed and we were unable to recover it.
00:26:07.435  [2024-12-09 04:16:35.499749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.435  [2024-12-09 04:16:35.499775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.435  qpair failed and we were unable to recover it.
00:26:07.435  [2024-12-09 04:16:35.499885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.435  [2024-12-09 04:16:35.499910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.435  qpair failed and we were unable to recover it.
00:26:07.435  [2024-12-09 04:16:35.500050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.435  [2024-12-09 04:16:35.500076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.435  qpair failed and we were unable to recover it.
00:26:07.435  [2024-12-09 04:16:35.500190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.435  [2024-12-09 04:16:35.500217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.435  qpair failed and we were unable to recover it.
00:26:07.435  [2024-12-09 04:16:35.500335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.435  [2024-12-09 04:16:35.500362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.435  qpair failed and we were unable to recover it.
00:26:07.435  [2024-12-09 04:16:35.500475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.435  [2024-12-09 04:16:35.500501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.435  qpair failed and we were unable to recover it.
00:26:07.435  [2024-12-09 04:16:35.500601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.435  [2024-12-09 04:16:35.500629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.435  qpair failed and we were unable to recover it.
00:26:07.435  [2024-12-09 04:16:35.500747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.435  [2024-12-09 04:16:35.500775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.435  qpair failed and we were unable to recover it.
00:26:07.435  [2024-12-09 04:16:35.500860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.435  [2024-12-09 04:16:35.500886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.435  qpair failed and we were unable to recover it.
00:26:07.435  [2024-12-09 04:16:35.500975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.435  [2024-12-09 04:16:35.501000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.435  qpair failed and we were unable to recover it.
00:26:07.435  [2024-12-09 04:16:35.501112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.435  [2024-12-09 04:16:35.501137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.435  qpair failed and we were unable to recover it.
00:26:07.435  [2024-12-09 04:16:35.501238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.435  [2024-12-09 04:16:35.501281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.435  qpair failed and we were unable to recover it.
00:26:07.435  [2024-12-09 04:16:35.501378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.435  [2024-12-09 04:16:35.501405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.435  qpair failed and we were unable to recover it.
00:26:07.435  [... identical connect() failure (errno 111) and qpair error repeated 20 more times for tqpair=0x1818fa0 ...]
00:26:07.436  [2024-12-09 04:16:35.504096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.436  [2024-12-09 04:16:35.504134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.436  qpair failed and we were unable to recover it.
00:26:07.436  [2024-12-09 04:16:35.504266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.436  [2024-12-09 04:16:35.504315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.436  qpair failed and we were unable to recover it.
00:26:07.436  [... identical connect() failure (errno 111) and qpair error repeated 11 more times for tqpair=0x7fe5b8000b90 ...]
00:26:07.436  [2024-12-09 04:16:35.506159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.436  [2024-12-09 04:16:35.506206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.436  qpair failed and we were unable to recover it.
00:26:07.436  [... identical connect() failure (errno 111) and qpair error repeated 1 more time for tqpair=0x7fe5c0000b90 ...]
00:26:07.436  [2024-12-09 04:16:35.506458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.436  [2024-12-09 04:16:35.506486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.436  qpair failed and we were unable to recover it.
00:26:07.436  [... identical connect() failure (errno 111) and qpair error repeated 19 more times for tqpair=0x1818fa0 ...]
00:26:07.437  [2024-12-09 04:16:35.509336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.437  [2024-12-09 04:16:35.509375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.437  qpair failed and we were unable to recover it.
00:26:07.437  [... identical connect() failure (errno 111) and qpair error repeated 4 more times for tqpair=0x7fe5b4000b90 ...]
00:26:07.437  [2024-12-09 04:16:35.510051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.437  [2024-12-09 04:16:35.510095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.437  qpair failed and we were unable to recover it.
00:26:07.437  [... identical connect() failure (errno 111) and qpair error repeated 5 more times for tqpair=0x7fe5b8000b90 ...]
00:26:07.437  [2024-12-09 04:16:35.510885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.437  [2024-12-09 04:16:35.510913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.437  qpair failed and we were unable to recover it.
00:26:07.437  [2024-12-09 04:16:35.511030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.437  [2024-12-09 04:16:35.511056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.437  qpair failed and we were unable to recover it.
00:26:07.437  [2024-12-09 04:16:35.511180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.437  [2024-12-09 04:16:35.511218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.437  qpair failed and we were unable to recover it.
00:26:07.437  [... identical connect() failure (errno 111) and qpair error repeated 7 more times for tqpair=0x7fe5c0000b90 ...]
00:26:07.438  [2024-12-09 04:16:35.512354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.438  [2024-12-09 04:16:35.512383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.438  qpair failed and we were unable to recover it.
00:26:07.438  [... identical connect() failure (errno 111) and qpair error repeated 4 more times for tqpair=0x7fe5b4000b90 ...]
00:26:07.438  [2024-12-09 04:16:35.513081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.438  [2024-12-09 04:16:35.513120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.438  qpair failed and we were unable to recover it.
00:26:07.438  [... identical connect() failure (errno 111) and qpair error repeated 5 more times for tqpair=0x7fe5b8000b90 ...]
00:26:07.438  [2024-12-09 04:16:35.513846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.438  [2024-12-09 04:16:35.513873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.438  qpair failed and we were unable to recover it.
00:26:07.438  [... identical connect() failure (errno 111) and qpair error repeated 1 more time for tqpair=0x7fe5b4000b90 ...]
00:26:07.438  [2024-12-09 04:16:35.514163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.438  [2024-12-09 04:16:35.514202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.438  qpair failed and we were unable to recover it.
00:26:07.438  [... identical connect() failure (errno 111) and qpair error repeated 6 more times for tqpair=0x1818fa0 ...]
00:26:07.438  [2024-12-09 04:16:35.515245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.438  [2024-12-09 04:16:35.515282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.438  qpair failed and we were unable to recover it.
00:26:07.438  [2024-12-09 04:16:35.515376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.438  [2024-12-09 04:16:35.515403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.438  qpair failed and we were unable to recover it.
00:26:07.438  [2024-12-09 04:16:35.515495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.438  [2024-12-09 04:16:35.515521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.438  qpair failed and we were unable to recover it.
00:26:07.438  [2024-12-09 04:16:35.515633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.438  [2024-12-09 04:16:35.515659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.438  qpair failed and we were unable to recover it.
00:26:07.438  [2024-12-09 04:16:35.515774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.438  [2024-12-09 04:16:35.515800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.438  qpair failed and we were unable to recover it.
00:26:07.438  [2024-12-09 04:16:35.515922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.438  [2024-12-09 04:16:35.515950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.438  qpair failed and we were unable to recover it.
00:26:07.438  [2024-12-09 04:16:35.516061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.438  [2024-12-09 04:16:35.516088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.438  qpair failed and we were unable to recover it.
00:26:07.438  [2024-12-09 04:16:35.516181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.438  [2024-12-09 04:16:35.516206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.438  qpair failed and we were unable to recover it.
00:26:07.438  [2024-12-09 04:16:35.516327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.438  [2024-12-09 04:16:35.516353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.438  qpair failed and we were unable to recover it.
00:26:07.438  [2024-12-09 04:16:35.516431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.438  [2024-12-09 04:16:35.516457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.438  qpair failed and we were unable to recover it.
00:26:07.438  [2024-12-09 04:16:35.516559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.438  [2024-12-09 04:16:35.516584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.438  qpair failed and we were unable to recover it.
00:26:07.438  [2024-12-09 04:16:35.516692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.438  [2024-12-09 04:16:35.516718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.438  qpair failed and we were unable to recover it.
00:26:07.438  [2024-12-09 04:16:35.516835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.438  [2024-12-09 04:16:35.516862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.438  qpair failed and we were unable to recover it.
00:26:07.438  [2024-12-09 04:16:35.516978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.438  [2024-12-09 04:16:35.517003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.438  qpair failed and we were unable to recover it.
00:26:07.438  [2024-12-09 04:16:35.517084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.438  [2024-12-09 04:16:35.517109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.438  qpair failed and we were unable to recover it.
00:26:07.438  [2024-12-09 04:16:35.517215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439  [2024-12-09 04:16:35.517240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.439  qpair failed and we were unable to recover it.
00:26:07.439  [2024-12-09 04:16:35.517371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439  [2024-12-09 04:16:35.517408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.439  qpair failed and we were unable to recover it.
00:26:07.439  [2024-12-09 04:16:35.517527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439  [2024-12-09 04:16:35.517554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.439  qpair failed and we were unable to recover it.
00:26:07.439  [2024-12-09 04:16:35.517702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439  [2024-12-09 04:16:35.517730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.439  qpair failed and we were unable to recover it.
00:26:07.439  [2024-12-09 04:16:35.517809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439  [2024-12-09 04:16:35.517833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.439  qpair failed and we were unable to recover it.
00:26:07.439  [2024-12-09 04:16:35.517912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439  [2024-12-09 04:16:35.517936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.439  qpair failed and we were unable to recover it.
00:26:07.439  [2024-12-09 04:16:35.518054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439  [2024-12-09 04:16:35.518081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.439  qpair failed and we were unable to recover it.
00:26:07.439  [2024-12-09 04:16:35.518169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439  [2024-12-09 04:16:35.518194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.439  qpair failed and we were unable to recover it.
00:26:07.439  [2024-12-09 04:16:35.518323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439  [2024-12-09 04:16:35.518362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.439  qpair failed and we were unable to recover it.
00:26:07.439  [2024-12-09 04:16:35.518487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439  [2024-12-09 04:16:35.518514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.439  qpair failed and we were unable to recover it.
00:26:07.439  [2024-12-09 04:16:35.518633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439  [2024-12-09 04:16:35.518658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.439  qpair failed and we were unable to recover it.
00:26:07.439  [2024-12-09 04:16:35.518770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439  [2024-12-09 04:16:35.518798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.439  qpair failed and we were unable to recover it.
00:26:07.439  [2024-12-09 04:16:35.518881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439  [2024-12-09 04:16:35.518907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.439  qpair failed and we were unable to recover it.
00:26:07.439  [2024-12-09 04:16:35.519033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439  [2024-12-09 04:16:35.519071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.439  qpair failed and we were unable to recover it.
00:26:07.439  [2024-12-09 04:16:35.519158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439  [2024-12-09 04:16:35.519185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.439  qpair failed and we were unable to recover it.
00:26:07.439  [2024-12-09 04:16:35.519269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439  [2024-12-09 04:16:35.519300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.439  qpair failed and we were unable to recover it.
00:26:07.439  [2024-12-09 04:16:35.519414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439  [2024-12-09 04:16:35.519444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.439  qpair failed and we were unable to recover it.
00:26:07.439  [2024-12-09 04:16:35.519530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439  [2024-12-09 04:16:35.519555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.439  qpair failed and we were unable to recover it.
00:26:07.439  [2024-12-09 04:16:35.519648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439  [2024-12-09 04:16:35.519675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.439  qpair failed and we were unable to recover it.
00:26:07.439  [2024-12-09 04:16:35.519784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439  [2024-12-09 04:16:35.519810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.439  qpair failed and we were unable to recover it.
00:26:07.439  [2024-12-09 04:16:35.519950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439  [2024-12-09 04:16:35.519975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.439  qpair failed and we were unable to recover it.
00:26:07.439  [2024-12-09 04:16:35.520065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439  [2024-12-09 04:16:35.520090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.439  qpair failed and we were unable to recover it.
00:26:07.439  [2024-12-09 04:16:35.520195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439  [2024-12-09 04:16:35.520220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.439  qpair failed and we were unable to recover it.
00:26:07.439  [2024-12-09 04:16:35.520322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439  [2024-12-09 04:16:35.520351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.439  qpair failed and we were unable to recover it.
00:26:07.439  [2024-12-09 04:16:35.520435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439  [2024-12-09 04:16:35.520461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.439  qpair failed and we were unable to recover it.
00:26:07.439  [2024-12-09 04:16:35.520577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439  [2024-12-09 04:16:35.520604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.439  qpair failed and we were unable to recover it.
00:26:07.439  [2024-12-09 04:16:35.520714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439  [2024-12-09 04:16:35.520739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.439  qpair failed and we were unable to recover it.
00:26:07.439  [2024-12-09 04:16:35.520822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439  [2024-12-09 04:16:35.520847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.439  qpair failed and we were unable to recover it.
00:26:07.439  [2024-12-09 04:16:35.520967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439  [2024-12-09 04:16:35.520995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.439  qpair failed and we were unable to recover it.
00:26:07.439  [2024-12-09 04:16:35.521107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439  [2024-12-09 04:16:35.521134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.439  qpair failed and we were unable to recover it.
00:26:07.439  [2024-12-09 04:16:35.521235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439  [2024-12-09 04:16:35.521280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.439  qpair failed and we were unable to recover it.
00:26:07.439  [2024-12-09 04:16:35.521369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440  [2024-12-09 04:16:35.521396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.440  qpair failed and we were unable to recover it.
00:26:07.440  [2024-12-09 04:16:35.521503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440  [2024-12-09 04:16:35.521529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.440  qpair failed and we were unable to recover it.
00:26:07.440  [2024-12-09 04:16:35.521640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440  [2024-12-09 04:16:35.521664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.440  qpair failed and we were unable to recover it.
00:26:07.440  [2024-12-09 04:16:35.521773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440  [2024-12-09 04:16:35.521798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.440  qpair failed and we were unable to recover it.
00:26:07.440  [2024-12-09 04:16:35.521889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440  [2024-12-09 04:16:35.521913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.440  qpair failed and we were unable to recover it.
00:26:07.440  [2024-12-09 04:16:35.522026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440  [2024-12-09 04:16:35.522054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.440  qpair failed and we were unable to recover it.
00:26:07.440  [2024-12-09 04:16:35.522173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440  [2024-12-09 04:16:35.522199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.440  qpair failed and we were unable to recover it.
00:26:07.440  [2024-12-09 04:16:35.522285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440  [2024-12-09 04:16:35.522311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.440  qpair failed and we were unable to recover it.
00:26:07.440  [2024-12-09 04:16:35.522391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440  [2024-12-09 04:16:35.522416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.440  qpair failed and we were unable to recover it.
00:26:07.440  [2024-12-09 04:16:35.522534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440  [2024-12-09 04:16:35.522561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.440  qpair failed and we were unable to recover it.
00:26:07.440  [2024-12-09 04:16:35.522705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440  [2024-12-09 04:16:35.522733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.440  qpair failed and we were unable to recover it.
00:26:07.440  [2024-12-09 04:16:35.522848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440  [2024-12-09 04:16:35.522872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.440  qpair failed and we were unable to recover it.
00:26:07.440  [2024-12-09 04:16:35.522959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440  [2024-12-09 04:16:35.522990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.440  qpair failed and we were unable to recover it.
00:26:07.440  [2024-12-09 04:16:35.523068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440  [2024-12-09 04:16:35.523093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.440  qpair failed and we were unable to recover it.
00:26:07.440  [2024-12-09 04:16:35.523195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440  [2024-12-09 04:16:35.523233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.440  qpair failed and we were unable to recover it.
00:26:07.440  [2024-12-09 04:16:35.523357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440  [2024-12-09 04:16:35.523387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.440  qpair failed and we were unable to recover it.
00:26:07.440  [2024-12-09 04:16:35.523501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440  [2024-12-09 04:16:35.523528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.440  qpair failed and we were unable to recover it.
00:26:07.440  [2024-12-09 04:16:35.523607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440  [2024-12-09 04:16:35.523633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.440  qpair failed and we were unable to recover it.
00:26:07.440  [2024-12-09 04:16:35.523849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440  [2024-12-09 04:16:35.523905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.440  qpair failed and we were unable to recover it.
00:26:07.440  [2024-12-09 04:16:35.524139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440  [2024-12-09 04:16:35.524180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.440  qpair failed and we were unable to recover it.
00:26:07.440  [2024-12-09 04:16:35.524284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440  [2024-12-09 04:16:35.524312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.440  qpair failed and we were unable to recover it.
00:26:07.440  [2024-12-09 04:16:35.524433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440  [2024-12-09 04:16:35.524460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.440  qpair failed and we were unable to recover it.
00:26:07.440  [2024-12-09 04:16:35.524604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440  [2024-12-09 04:16:35.524659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.440  qpair failed and we were unable to recover it.
00:26:07.440  [2024-12-09 04:16:35.524760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440  [2024-12-09 04:16:35.524831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.440  qpair failed and we were unable to recover it.
00:26:07.440  [2024-12-09 04:16:35.524948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440  [2024-12-09 04:16:35.524973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.440  qpair failed and we were unable to recover it.
00:26:07.440  [2024-12-09 04:16:35.525064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440  [2024-12-09 04:16:35.525090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.440  qpair failed and we were unable to recover it.
00:26:07.440  [2024-12-09 04:16:35.525181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440  [2024-12-09 04:16:35.525208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.440  qpair failed and we were unable to recover it.
00:26:07.440  [2024-12-09 04:16:35.525326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440  [2024-12-09 04:16:35.525354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.440  qpair failed and we were unable to recover it.
00:26:07.440  [2024-12-09 04:16:35.525465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440  [2024-12-09 04:16:35.525491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.440  qpair failed and we were unable to recover it.
00:26:07.440  [2024-12-09 04:16:35.525578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440  [2024-12-09 04:16:35.525603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.440  qpair failed and we were unable to recover it.
00:26:07.440  [2024-12-09 04:16:35.525717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440  [2024-12-09 04:16:35.525745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.440  qpair failed and we were unable to recover it.
00:26:07.440  [2024-12-09 04:16:35.525855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440  [2024-12-09 04:16:35.525880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.440  qpair failed and we were unable to recover it.
00:26:07.440  [2024-12-09 04:16:35.525988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440  [2024-12-09 04:16:35.526015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.440  qpair failed and we were unable to recover it.
00:26:07.440  [2024-12-09 04:16:35.526146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440  [2024-12-09 04:16:35.526176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.440  qpair failed and we were unable to recover it.
00:26:07.441  [2024-12-09 04:16:35.526284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441  [2024-12-09 04:16:35.526310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.441  qpair failed and we were unable to recover it.
00:26:07.441  [2024-12-09 04:16:35.526428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441  [2024-12-09 04:16:35.526453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.441  qpair failed and we were unable to recover it.
00:26:07.441  [2024-12-09 04:16:35.526541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441  [2024-12-09 04:16:35.526566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.441  qpair failed and we were unable to recover it.
00:26:07.441  [2024-12-09 04:16:35.526648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441  [2024-12-09 04:16:35.526674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.441  qpair failed and we were unable to recover it.
00:26:07.441  [2024-12-09 04:16:35.526761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441  [2024-12-09 04:16:35.526787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.441  qpair failed and we were unable to recover it.
00:26:07.441  [2024-12-09 04:16:35.526881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441  [2024-12-09 04:16:35.526909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.441  qpair failed and we were unable to recover it.
00:26:07.442  [2024-12-09 04:16:35.531252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.442  [2024-12-09 04:16:35.531301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.442  qpair failed and we were unable to recover it.
00:26:07.442  [2024-12-09 04:16:35.531803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.442  [2024-12-09 04:16:35.531841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.442  qpair failed and we were unable to recover it.
00:26:07.443  [2024-12-09 04:16:35.541783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.443  [2024-12-09 04:16:35.541809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.443  qpair failed and we were unable to recover it.
00:26:07.443  [2024-12-09 04:16:35.541926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.443  [2024-12-09 04:16:35.541956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.443  qpair failed and we were unable to recover it.
00:26:07.443  [2024-12-09 04:16:35.542049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.443  [2024-12-09 04:16:35.542074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.443  qpair failed and we were unable to recover it.
00:26:07.443  [2024-12-09 04:16:35.542186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.443  [2024-12-09 04:16:35.542213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.443  qpair failed and we were unable to recover it.
00:26:07.443  [2024-12-09 04:16:35.542300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.444  [2024-12-09 04:16:35.542327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.444  qpair failed and we were unable to recover it.
00:26:07.444  [2024-12-09 04:16:35.542455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.444  [2024-12-09 04:16:35.542495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.444  qpair failed and we were unable to recover it.
00:26:07.444  [2024-12-09 04:16:35.542621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.444  [2024-12-09 04:16:35.542649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.444  qpair failed and we were unable to recover it.
00:26:07.444  [2024-12-09 04:16:35.542798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.444  [2024-12-09 04:16:35.542863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.444  qpair failed and we were unable to recover it.
00:26:07.444  [2024-12-09 04:16:35.543023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.444  [2024-12-09 04:16:35.543078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.444  qpair failed and we were unable to recover it.
00:26:07.444  [2024-12-09 04:16:35.543164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.444  [2024-12-09 04:16:35.543192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.444  qpair failed and we were unable to recover it.
00:26:07.444  [2024-12-09 04:16:35.543285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.444  [2024-12-09 04:16:35.543313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.444  qpair failed and we were unable to recover it.
00:26:07.444  [2024-12-09 04:16:35.543398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.444  [2024-12-09 04:16:35.543425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.444  qpair failed and we were unable to recover it.
00:26:07.444  [2024-12-09 04:16:35.543519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.444  [2024-12-09 04:16:35.543545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.444  qpair failed and we were unable to recover it.
00:26:07.444  [2024-12-09 04:16:35.543680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.444  [2024-12-09 04:16:35.543708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.444  qpair failed and we were unable to recover it.
00:26:07.444  [2024-12-09 04:16:35.543862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.444  [2024-12-09 04:16:35.543903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.444  qpair failed and we were unable to recover it.
00:26:07.444  [2024-12-09 04:16:35.543986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.444  [2024-12-09 04:16:35.544011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.444  qpair failed and we were unable to recover it.
00:26:07.444  [2024-12-09 04:16:35.544131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.444  [2024-12-09 04:16:35.544160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.444  qpair failed and we were unable to recover it.
00:26:07.444  [2024-12-09 04:16:35.544249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.444  [2024-12-09 04:16:35.544282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.444  qpair failed and we were unable to recover it.
00:26:07.444  [2024-12-09 04:16:35.544369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.444  [2024-12-09 04:16:35.544395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.444  qpair failed and we were unable to recover it.
00:26:07.444  [2024-12-09 04:16:35.544587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.444  [2024-12-09 04:16:35.544614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.444  qpair failed and we were unable to recover it.
00:26:07.444  [2024-12-09 04:16:35.544721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.444  [2024-12-09 04:16:35.544752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.444  qpair failed and we were unable to recover it.
00:26:07.444  [2024-12-09 04:16:35.544835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.444  [2024-12-09 04:16:35.544860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.444  qpair failed and we were unable to recover it.
00:26:07.444  [2024-12-09 04:16:35.544948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.444  [2024-12-09 04:16:35.544974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.444  qpair failed and we were unable to recover it.
00:26:07.444  [2024-12-09 04:16:35.545132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.444  [2024-12-09 04:16:35.545176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.444  qpair failed and we were unable to recover it.
00:26:07.444  [2024-12-09 04:16:35.545283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.444  [2024-12-09 04:16:35.545321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.444  qpair failed and we were unable to recover it.
00:26:07.444  [2024-12-09 04:16:35.545421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.444  [2024-12-09 04:16:35.545448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.444  qpair failed and we were unable to recover it.
00:26:07.444  [2024-12-09 04:16:35.545554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.444  [2024-12-09 04:16:35.545582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.444  qpair failed and we were unable to recover it.
00:26:07.444  [2024-12-09 04:16:35.545672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.444  [2024-12-09 04:16:35.545697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.444  qpair failed and we were unable to recover it.
00:26:07.444  [2024-12-09 04:16:35.545880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.444  [2024-12-09 04:16:35.545923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.444  qpair failed and we were unable to recover it.
00:26:07.444  [2024-12-09 04:16:35.546073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.444  [2024-12-09 04:16:35.546126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.444  qpair failed and we were unable to recover it.
00:26:07.444  [2024-12-09 04:16:35.546320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.444  [2024-12-09 04:16:35.546348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.444  qpair failed and we were unable to recover it.
00:26:07.444  [2024-12-09 04:16:35.546469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.444  [2024-12-09 04:16:35.546497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.444  qpair failed and we were unable to recover it.
00:26:07.444  [2024-12-09 04:16:35.546612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.444  [2024-12-09 04:16:35.546638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.444  qpair failed and we were unable to recover it.
00:26:07.444  [2024-12-09 04:16:35.546809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.444  [2024-12-09 04:16:35.546837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.444  qpair failed and we were unable to recover it.
00:26:07.444  [2024-12-09 04:16:35.546928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.444  [2024-12-09 04:16:35.546953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.444  qpair failed and we were unable to recover it.
00:26:07.444  [2024-12-09 04:16:35.547092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.444  [2024-12-09 04:16:35.547118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.444  qpair failed and we were unable to recover it.
00:26:07.444  [2024-12-09 04:16:35.547240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.444  [2024-12-09 04:16:35.547292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.444  qpair failed and we were unable to recover it.
00:26:07.444  [2024-12-09 04:16:35.547417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.444  [2024-12-09 04:16:35.547447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.444  qpair failed and we were unable to recover it.
00:26:07.445  [2024-12-09 04:16:35.547573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.445  [2024-12-09 04:16:35.547602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.445  qpair failed and we were unable to recover it.
00:26:07.445  [2024-12-09 04:16:35.547731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.445  [2024-12-09 04:16:35.547759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.445  qpair failed and we were unable to recover it.
00:26:07.445  [2024-12-09 04:16:35.547866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.445  [2024-12-09 04:16:35.547893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.445  qpair failed and we were unable to recover it.
00:26:07.445  [2024-12-09 04:16:35.547974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.445  [2024-12-09 04:16:35.548000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.445  qpair failed and we were unable to recover it.
00:26:07.445  [2024-12-09 04:16:35.548090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.445  [2024-12-09 04:16:35.548115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.445  qpair failed and we were unable to recover it.
00:26:07.445  [2024-12-09 04:16:35.548253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.445  [2024-12-09 04:16:35.548287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.445  qpair failed and we were unable to recover it.
00:26:07.445  [2024-12-09 04:16:35.548376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.445  [2024-12-09 04:16:35.548401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.445  qpair failed and we were unable to recover it.
00:26:07.445  [2024-12-09 04:16:35.548498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.445  [2024-12-09 04:16:35.548524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.445  qpair failed and we were unable to recover it.
00:26:07.445  [2024-12-09 04:16:35.548643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.445  [2024-12-09 04:16:35.548670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.445  qpair failed and we were unable to recover it.
00:26:07.445  [2024-12-09 04:16:35.548761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.445  [2024-12-09 04:16:35.548786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.445  qpair failed and we were unable to recover it.
00:26:07.445  [2024-12-09 04:16:35.548868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.445  [2024-12-09 04:16:35.548893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.445  qpair failed and we were unable to recover it.
00:26:07.445  [2024-12-09 04:16:35.549038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.445  [2024-12-09 04:16:35.549094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.445  qpair failed and we were unable to recover it.
00:26:07.445  [2024-12-09 04:16:35.549204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.445  [2024-12-09 04:16:35.549229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.445  qpair failed and we were unable to recover it.
00:26:07.445  [2024-12-09 04:16:35.549336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.445  [2024-12-09 04:16:35.549376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.445  qpair failed and we were unable to recover it.
00:26:07.445  [2024-12-09 04:16:35.549498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.445  [2024-12-09 04:16:35.549526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.445  qpair failed and we were unable to recover it.
00:26:07.445  [2024-12-09 04:16:35.549637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.445  [2024-12-09 04:16:35.549663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.445  qpair failed and we were unable to recover it.
00:26:07.445  [2024-12-09 04:16:35.549777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.445  [2024-12-09 04:16:35.549803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.445  qpair failed and we were unable to recover it.
00:26:07.445  [2024-12-09 04:16:35.549945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.445  [2024-12-09 04:16:35.549972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.445  qpair failed and we were unable to recover it.
00:26:07.445  [2024-12-09 04:16:35.550112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.445  [2024-12-09 04:16:35.550139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.445  qpair failed and we were unable to recover it.
00:26:07.445  [2024-12-09 04:16:35.550227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.445  [2024-12-09 04:16:35.550253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.445  qpair failed and we were unable to recover it.
00:26:07.445  [2024-12-09 04:16:35.550335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.445  [2024-12-09 04:16:35.550361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.445  qpair failed and we were unable to recover it.
00:26:07.445  [2024-12-09 04:16:35.550442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.445  [2024-12-09 04:16:35.550467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.445  qpair failed and we were unable to recover it.
00:26:07.445  [2024-12-09 04:16:35.550581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.445  [2024-12-09 04:16:35.550607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.445  qpair failed and we were unable to recover it.
00:26:07.445  [2024-12-09 04:16:35.550697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.445  [2024-12-09 04:16:35.550722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.445  qpair failed and we were unable to recover it.
00:26:07.445  [2024-12-09 04:16:35.550834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.445  [2024-12-09 04:16:35.550861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.445  qpair failed and we were unable to recover it.
00:26:07.445  [2024-12-09 04:16:35.550979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.445  [2024-12-09 04:16:35.551006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.445  qpair failed and we were unable to recover it.
00:26:07.445  [2024-12-09 04:16:35.551231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.445  [2024-12-09 04:16:35.551258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.445  qpair failed and we were unable to recover it.
00:26:07.445  [2024-12-09 04:16:35.551342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.445  [2024-12-09 04:16:35.551368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.445  qpair failed and we were unable to recover it.
00:26:07.445  [2024-12-09 04:16:35.551506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.445  [2024-12-09 04:16:35.551533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.445  qpair failed and we were unable to recover it.
00:26:07.445  [2024-12-09 04:16:35.551618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.445  [2024-12-09 04:16:35.551643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.445  qpair failed and we were unable to recover it.
00:26:07.445  [2024-12-09 04:16:35.551733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.445  [2024-12-09 04:16:35.551759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.445  qpair failed and we were unable to recover it.
00:26:07.445  [2024-12-09 04:16:35.551873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.445  [2024-12-09 04:16:35.551901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.445  qpair failed and we were unable to recover it.
00:26:07.445  [2024-12-09 04:16:35.551996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.445  [2024-12-09 04:16:35.552037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.445  qpair failed and we were unable to recover it.
00:26:07.445  [2024-12-09 04:16:35.552127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.445  [2024-12-09 04:16:35.552155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.445  qpair failed and we were unable to recover it.
00:26:07.445  [2024-12-09 04:16:35.552280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.445  [2024-12-09 04:16:35.552309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.445  qpair failed and we were unable to recover it.
00:26:07.445  [2024-12-09 04:16:35.552507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.445  [2024-12-09 04:16:35.552534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.445  qpair failed and we were unable to recover it.
00:26:07.445  [2024-12-09 04:16:35.552660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.445  [2024-12-09 04:16:35.552689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.445  qpair failed and we were unable to recover it.
00:26:07.445  [2024-12-09 04:16:35.552769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.445  [2024-12-09 04:16:35.552795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.446  qpair failed and we were unable to recover it.
00:26:07.446  [2024-12-09 04:16:35.552920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.446  [2024-12-09 04:16:35.552949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.446  qpair failed and we were unable to recover it.
00:26:07.446  [2024-12-09 04:16:35.553034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.446  [2024-12-09 04:16:35.553059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.446  qpair failed and we were unable to recover it.
00:26:07.446  [2024-12-09 04:16:35.553164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.446  [2024-12-09 04:16:35.553204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.446  qpair failed and we were unable to recover it.
00:26:07.446  [2024-12-09 04:16:35.553329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.446  [2024-12-09 04:16:35.553358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.446  qpair failed and we were unable to recover it.
00:26:07.446  [2024-12-09 04:16:35.553447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.446  [2024-12-09 04:16:35.553473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.446  qpair failed and we were unable to recover it.
00:26:07.446  [2024-12-09 04:16:35.553559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.446  [2024-12-09 04:16:35.553586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.446  qpair failed and we were unable to recover it.
00:26:07.446  [2024-12-09 04:16:35.553729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.446  [2024-12-09 04:16:35.553757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.446  qpair failed and we were unable to recover it.
00:26:07.446  [2024-12-09 04:16:35.553896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.446  [2024-12-09 04:16:35.553923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.446  qpair failed and we were unable to recover it.
00:26:07.446  [2024-12-09 04:16:35.554092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.446  [2024-12-09 04:16:35.554150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.446  qpair failed and we were unable to recover it.
00:26:07.446  [2024-12-09 04:16:35.554269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.446  [2024-12-09 04:16:35.554309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.446  qpair failed and we were unable to recover it.
00:26:07.446  [2024-12-09 04:16:35.554400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.446  [2024-12-09 04:16:35.554426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.446  qpair failed and we were unable to recover it.
00:26:07.446  [2024-12-09 04:16:35.554534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.446  [2024-12-09 04:16:35.554566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.446  qpair failed and we were unable to recover it.
00:26:07.446  [2024-12-09 04:16:35.554709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.446  [2024-12-09 04:16:35.554736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.446  qpair failed and we were unable to recover it.
00:26:07.446  [2024-12-09 04:16:35.554827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.446  [2024-12-09 04:16:35.554853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.446  qpair failed and we were unable to recover it.
00:26:07.446  [2024-12-09 04:16:35.554995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.446  [2024-12-09 04:16:35.555024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.446  qpair failed and we were unable to recover it.
00:26:07.446  [2024-12-09 04:16:35.555118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.446  [2024-12-09 04:16:35.555144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.446  qpair failed and we were unable to recover it.
00:26:07.446  [2024-12-09 04:16:35.555237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.446  [2024-12-09 04:16:35.555265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.446  qpair failed and we were unable to recover it.
00:26:07.446  [2024-12-09 04:16:35.555352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.446  [2024-12-09 04:16:35.555377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.446  qpair failed and we were unable to recover it.
00:26:07.446  [2024-12-09 04:16:35.555455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.446  [2024-12-09 04:16:35.555480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.446  qpair failed and we were unable to recover it.
00:26:07.446  [2024-12-09 04:16:35.555590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.446  [2024-12-09 04:16:35.555617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.446  qpair failed and we were unable to recover it.
00:26:07.446  [2024-12-09 04:16:35.555701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.446  [2024-12-09 04:16:35.555730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.446  qpair failed and we were unable to recover it.
00:26:07.446  [2024-12-09 04:16:35.555810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.446  [2024-12-09 04:16:35.555836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.446  qpair failed and we were unable to recover it.
00:26:07.446  [2024-12-09 04:16:35.555964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.446  [2024-12-09 04:16:35.556005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.446  qpair failed and we were unable to recover it.
00:26:07.446  [2024-12-09 04:16:35.556128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.446  [2024-12-09 04:16:35.556158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.446  qpair failed and we were unable to recover it.
00:26:07.446  [2024-12-09 04:16:35.556282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.446  [2024-12-09 04:16:35.556311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.446  qpair failed and we were unable to recover it.
00:26:07.446  [2024-12-09 04:16:35.556428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.446  [2024-12-09 04:16:35.556455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.446  qpair failed and we were unable to recover it.
00:26:07.446  [2024-12-09 04:16:35.556569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.446  [2024-12-09 04:16:35.556597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.446  qpair failed and we were unable to recover it.
00:26:07.446  [2024-12-09 04:16:35.556737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.446  [2024-12-09 04:16:35.556766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.446  qpair failed and we were unable to recover it.
00:26:07.446  [2024-12-09 04:16:35.556904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.446  [2024-12-09 04:16:35.556931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.446  qpair failed and we were unable to recover it.
00:26:07.446  [2024-12-09 04:16:35.557018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.446  [2024-12-09 04:16:35.557044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.446  qpair failed and we were unable to recover it.
00:26:07.446  [2024-12-09 04:16:35.557160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.446  [2024-12-09 04:16:35.557189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.446  qpair failed and we were unable to recover it.
00:26:07.446  [2024-12-09 04:16:35.557330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.446  [2024-12-09 04:16:35.557358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.446  qpair failed and we were unable to recover it.
00:26:07.446  [2024-12-09 04:16:35.557444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.446  [2024-12-09 04:16:35.557471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.446  qpair failed and we were unable to recover it.
00:26:07.446  [2024-12-09 04:16:35.557598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.446  [2024-12-09 04:16:35.557639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.446  qpair failed and we were unable to recover it.
00:26:07.446  [2024-12-09 04:16:35.557800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.446  [2024-12-09 04:16:35.557828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.446  qpair failed and we were unable to recover it.
00:26:07.446  [2024-12-09 04:16:35.557918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.446  [2024-12-09 04:16:35.557945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.446  qpair failed and we were unable to recover it.
00:26:07.446  [2024-12-09 04:16:35.558061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.446  [2024-12-09 04:16:35.558089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.446  qpair failed and we were unable to recover it.
00:26:07.446  [2024-12-09 04:16:35.558210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.447  [2024-12-09 04:16:35.558237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.447  qpair failed and we were unable to recover it.
00:26:07.447  [2024-12-09 04:16:35.558383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.447  [2024-12-09 04:16:35.558411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.447  qpair failed and we were unable to recover it.
00:26:07.447  [2024-12-09 04:16:35.558493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.447  [2024-12-09 04:16:35.558519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.447  qpair failed and we were unable to recover it.
00:26:07.447  [2024-12-09 04:16:35.558666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.447  [2024-12-09 04:16:35.558715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.447  qpair failed and we were unable to recover it.
00:26:07.447  [2024-12-09 04:16:35.558799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.447  [2024-12-09 04:16:35.558823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.447  qpair failed and we were unable to recover it.
00:26:07.447  [2024-12-09 04:16:35.558958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.447  [2024-12-09 04:16:35.558984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.447  qpair failed and we were unable to recover it.
00:26:07.447  [2024-12-09 04:16:35.559103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.447  [2024-12-09 04:16:35.559130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.447  qpair failed and we were unable to recover it.
00:26:07.447  [2024-12-09 04:16:35.559246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.447  [2024-12-09 04:16:35.559278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.447  qpair failed and we were unable to recover it.
00:26:07.447  [2024-12-09 04:16:35.559363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.447  [2024-12-09 04:16:35.559389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.447  qpair failed and we were unable to recover it.
00:26:07.447  [2024-12-09 04:16:35.559495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.447  [2024-12-09 04:16:35.559519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.447  qpair failed and we were unable to recover it.
00:26:07.447  [2024-12-09 04:16:35.559646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.447  [2024-12-09 04:16:35.559672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.447  qpair failed and we were unable to recover it.
00:26:07.447  [2024-12-09 04:16:35.559791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.447  [2024-12-09 04:16:35.559817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.447  qpair failed and we were unable to recover it.
00:26:07.447  [2024-12-09 04:16:35.559929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.447  [2024-12-09 04:16:35.559956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.447  qpair failed and we were unable to recover it.
00:26:07.447  [2024-12-09 04:16:35.560088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.447  [2024-12-09 04:16:35.560129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.447  qpair failed and we were unable to recover it.
00:26:07.447  [2024-12-09 04:16:35.560217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.447  [2024-12-09 04:16:35.560243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.447  qpair failed and we were unable to recover it.
00:26:07.447  [2024-12-09 04:16:35.560382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.447  [2024-12-09 04:16:35.560423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.447  qpair failed and we were unable to recover it.
00:26:07.447  [2024-12-09 04:16:35.560548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.447  [2024-12-09 04:16:35.560577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.447  qpair failed and we were unable to recover it.
00:26:07.447  [2024-12-09 04:16:35.560691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.447  [2024-12-09 04:16:35.560718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.447  qpair failed and we were unable to recover it.
00:26:07.447  [2024-12-09 04:16:35.560805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.447  [2024-12-09 04:16:35.560831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.447  qpair failed and we were unable to recover it.
00:26:07.447  [2024-12-09 04:16:35.560973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.447  [2024-12-09 04:16:35.561000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.447  qpair failed and we were unable to recover it.
00:26:07.447  [2024-12-09 04:16:35.561161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.447  [2024-12-09 04:16:35.561201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.447  qpair failed and we were unable to recover it.
00:26:07.447  [2024-12-09 04:16:35.561303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.447  [2024-12-09 04:16:35.561329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.447  qpair failed and we were unable to recover it.
00:26:07.447  [2024-12-09 04:16:35.561445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.447  [2024-12-09 04:16:35.561472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.447  qpair failed and we were unable to recover it.
00:26:07.447  [2024-12-09 04:16:35.561616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.447  [2024-12-09 04:16:35.561643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.447  qpair failed and we were unable to recover it.
00:26:07.447  [2024-12-09 04:16:35.561802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.447  [2024-12-09 04:16:35.561866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.447  qpair failed and we were unable to recover it.
00:26:07.447  [2024-12-09 04:16:35.561948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.447  [2024-12-09 04:16:35.561973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.447  qpair failed and we were unable to recover it.
00:26:07.447  [2024-12-09 04:16:35.562086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.447  [2024-12-09 04:16:35.562113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.447  qpair failed and we were unable to recover it.
00:26:07.447  [2024-12-09 04:16:35.562226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.447  [2024-12-09 04:16:35.562253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.447  qpair failed and we were unable to recover it.
00:26:07.447  [2024-12-09 04:16:35.562459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.447  [2024-12-09 04:16:35.562489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.447  qpair failed and we were unable to recover it.
00:26:07.447  [2024-12-09 04:16:35.562633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.447  [2024-12-09 04:16:35.562668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.447  qpair failed and we were unable to recover it.
00:26:07.447  [2024-12-09 04:16:35.562758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.447  [2024-12-09 04:16:35.562784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.447  qpair failed and we were unable to recover it.
00:26:07.447  [2024-12-09 04:16:35.562970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.447  [2024-12-09 04:16:35.563016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.447  qpair failed and we were unable to recover it.
00:26:07.447  [2024-12-09 04:16:35.563135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.447  [2024-12-09 04:16:35.563160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.447  qpair failed and we were unable to recover it.
00:26:07.447  [2024-12-09 04:16:35.563265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.447  [2024-12-09 04:16:35.563314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.447  qpair failed and we were unable to recover it.
00:26:07.447  [2024-12-09 04:16:35.563414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.447  [2024-12-09 04:16:35.563441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.447  qpair failed and we were unable to recover it.
00:26:07.447  [2024-12-09 04:16:35.563556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.447  [2024-12-09 04:16:35.563584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.447  qpair failed and we were unable to recover it.
00:26:07.447  [2024-12-09 04:16:35.563670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.447  [2024-12-09 04:16:35.563696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.447  qpair failed and we were unable to recover it.
00:26:07.448  [2024-12-09 04:16:35.563803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.448  [2024-12-09 04:16:35.563830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.448  qpair failed and we were unable to recover it.
00:26:07.448  [2024-12-09 04:16:35.563952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.448  [2024-12-09 04:16:35.563978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.448  qpair failed and we were unable to recover it.
00:26:07.448  [2024-12-09 04:16:35.564073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.448  [2024-12-09 04:16:35.564099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.448  qpair failed and we were unable to recover it.
00:26:07.448  [2024-12-09 04:16:35.564216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.448  [2024-12-09 04:16:35.564242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.448  qpair failed and we were unable to recover it.
00:26:07.448  [2024-12-09 04:16:35.564344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.448  [2024-12-09 04:16:35.564375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.448  qpair failed and we were unable to recover it.
00:26:07.448  [2024-12-09 04:16:35.564492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.448  [2024-12-09 04:16:35.564517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.448  qpair failed and we were unable to recover it.
00:26:07.448  [2024-12-09 04:16:35.564661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.448  [2024-12-09 04:16:35.564687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.448  qpair failed and we were unable to recover it.
00:26:07.448  [2024-12-09 04:16:35.564827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.448  [2024-12-09 04:16:35.564854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.448  qpair failed and we were unable to recover it.
00:26:07.448  [2024-12-09 04:16:35.564959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.448  [2024-12-09 04:16:35.564984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.448  qpair failed and we were unable to recover it.
00:26:07.448  [2024-12-09 04:16:35.565104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.448  [2024-12-09 04:16:35.565131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.448  qpair failed and we were unable to recover it.
00:26:07.448  [2024-12-09 04:16:35.565282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.448  [2024-12-09 04:16:35.565310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.448  qpair failed and we were unable to recover it.
00:26:07.448  [2024-12-09 04:16:35.565402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.448  [2024-12-09 04:16:35.565428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.448  qpair failed and we were unable to recover it.
00:26:07.448  [2024-12-09 04:16:35.565507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.448  [2024-12-09 04:16:35.565532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.448  qpair failed and we were unable to recover it.
00:26:07.448  [2024-12-09 04:16:35.565653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.448  [2024-12-09 04:16:35.565681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.448  qpair failed and we were unable to recover it.
00:26:07.448  [2024-12-09 04:16:35.565764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.448  [2024-12-09 04:16:35.565789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.448  qpair failed and we were unable to recover it.
00:26:07.448  [2024-12-09 04:16:35.565875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.448  [2024-12-09 04:16:35.565902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.448  qpair failed and we were unable to recover it.
00:26:07.448  [2024-12-09 04:16:35.566048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.448  [2024-12-09 04:16:35.566075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.448  qpair failed and we were unable to recover it.
00:26:07.448  [2024-12-09 04:16:35.566159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.448  [2024-12-09 04:16:35.566184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.448  qpair failed and we were unable to recover it.
00:26:07.448  [2024-12-09 04:16:35.566299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.448  [2024-12-09 04:16:35.566328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.448  qpair failed and we were unable to recover it.
00:26:07.448  [2024-12-09 04:16:35.566440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.448  [2024-12-09 04:16:35.566468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.448  qpair failed and we were unable to recover it.
00:26:07.448  [2024-12-09 04:16:35.566558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.448  [2024-12-09 04:16:35.566583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.448  qpair failed and we were unable to recover it.
00:26:07.448  [2024-12-09 04:16:35.566703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.448  [2024-12-09 04:16:35.566730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.448  qpair failed and we were unable to recover it.
00:26:07.448  [2024-12-09 04:16:35.566849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.448  [2024-12-09 04:16:35.566875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.448  qpair failed and we were unable to recover it.
00:26:07.448  [2024-12-09 04:16:35.566989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.448  [2024-12-09 04:16:35.567015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.448  qpair failed and we were unable to recover it.
00:26:07.448  [2024-12-09 04:16:35.567106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.448  [2024-12-09 04:16:35.567131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.448  qpair failed and we were unable to recover it.
00:26:07.448  [2024-12-09 04:16:35.567238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.448  [2024-12-09 04:16:35.567264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.448  qpair failed and we were unable to recover it.
00:26:07.448  [2024-12-09 04:16:35.567371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.448  [2024-12-09 04:16:35.567396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.448  qpair failed and we were unable to recover it.
00:26:07.448  [2024-12-09 04:16:35.567484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.448  [2024-12-09 04:16:35.567512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.448  qpair failed and we were unable to recover it.
00:26:07.448  [2024-12-09 04:16:35.567589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.448  [2024-12-09 04:16:35.567616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.448  qpair failed and we were unable to recover it.
00:26:07.448  [2024-12-09 04:16:35.567700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.448  [2024-12-09 04:16:35.567726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.448  qpair failed and we were unable to recover it.
00:26:07.448  [2024-12-09 04:16:35.567845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.448  [2024-12-09 04:16:35.567873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.448  qpair failed and we were unable to recover it.
00:26:07.448  [2024-12-09 04:16:35.567996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.448  [2024-12-09 04:16:35.568023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.448  qpair failed and we were unable to recover it.
00:26:07.448  [2024-12-09 04:16:35.568144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.449  [2024-12-09 04:16:35.568171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.449  qpair failed and we were unable to recover it.
00:26:07.449  [2024-12-09 04:16:35.568312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.449  [2024-12-09 04:16:35.568339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.449  qpair failed and we were unable to recover it.
00:26:07.449  [2024-12-09 04:16:35.568430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.449  [2024-12-09 04:16:35.568454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.449  qpair failed and we were unable to recover it.
00:26:07.449  [2024-12-09 04:16:35.568543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.449  [2024-12-09 04:16:35.568568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.449  qpair failed and we were unable to recover it.
00:26:07.449  [2024-12-09 04:16:35.568647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.449  [2024-12-09 04:16:35.568672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.449  qpair failed and we were unable to recover it.
00:26:07.449  [2024-12-09 04:16:35.568756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.449  [2024-12-09 04:16:35.568781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.449  qpair failed and we were unable to recover it.
00:26:07.449  [2024-12-09 04:16:35.568866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.449  [2024-12-09 04:16:35.568892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.449  qpair failed and we were unable to recover it.
00:26:07.449  [2024-12-09 04:16:35.569029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.449  [2024-12-09 04:16:35.569055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.449  qpair failed and we were unable to recover it.
00:26:07.449  [2024-12-09 04:16:35.569165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.449  [2024-12-09 04:16:35.569192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.449  qpair failed and we were unable to recover it.
00:26:07.449  [2024-12-09 04:16:35.569285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.449  [2024-12-09 04:16:35.569311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.449  qpair failed and we were unable to recover it.
00:26:07.449  [2024-12-09 04:16:35.569426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.449  [2024-12-09 04:16:35.569454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.449  qpair failed and we were unable to recover it.
00:26:07.449  [2024-12-09 04:16:35.569543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.449  [2024-12-09 04:16:35.569568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.449  qpair failed and we were unable to recover it.
00:26:07.449  [2024-12-09 04:16:35.569680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.449  [2024-12-09 04:16:35.569712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.449  qpair failed and we were unable to recover it.
00:26:07.449  [2024-12-09 04:16:35.569790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.449  [2024-12-09 04:16:35.569815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.449  qpair failed and we were unable to recover it.
00:26:07.449  [2024-12-09 04:16:35.569933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.449  [2024-12-09 04:16:35.569960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.449  qpair failed and we were unable to recover it.
00:26:07.449  [2024-12-09 04:16:35.570072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.449  [2024-12-09 04:16:35.570099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.449  qpair failed and we were unable to recover it.
00:26:07.449  [2024-12-09 04:16:35.570210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.449  [2024-12-09 04:16:35.570238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.449  qpair failed and we were unable to recover it.
00:26:07.449  [2024-12-09 04:16:35.570322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.449  [2024-12-09 04:16:35.570348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.449  qpair failed and we were unable to recover it.
00:26:07.449  [2024-12-09 04:16:35.570466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.449  [2024-12-09 04:16:35.570493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.449  qpair failed and we were unable to recover it.
00:26:07.449  [2024-12-09 04:16:35.570579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.449  [2024-12-09 04:16:35.570604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.449  qpair failed and we were unable to recover it.
00:26:07.449  [2024-12-09 04:16:35.570720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.449  [2024-12-09 04:16:35.570746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.449  qpair failed and we were unable to recover it.
00:26:07.449  [2024-12-09 04:16:35.570885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.449  [2024-12-09 04:16:35.570912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.449  qpair failed and we were unable to recover it.
00:26:07.449  [2024-12-09 04:16:35.571054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.449  [2024-12-09 04:16:35.571081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.449  qpair failed and we were unable to recover it.
00:26:07.449  [2024-12-09 04:16:35.571173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.449  [2024-12-09 04:16:35.571199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.449  qpair failed and we were unable to recover it.
00:26:07.449  [2024-12-09 04:16:35.571305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.449  [2024-12-09 04:16:35.571346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.449  qpair failed and we were unable to recover it.
00:26:07.449  [2024-12-09 04:16:35.571502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.449  [2024-12-09 04:16:35.571531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.449  qpair failed and we were unable to recover it.
00:26:07.449  [2024-12-09 04:16:35.571634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.449  [2024-12-09 04:16:35.571663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.449  qpair failed and we were unable to recover it.
00:26:07.449  [2024-12-09 04:16:35.571771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.449  [2024-12-09 04:16:35.571798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.449  qpair failed and we were unable to recover it.
00:26:07.449  [2024-12-09 04:16:35.571887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.449  [2024-12-09 04:16:35.571916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.449  qpair failed and we were unable to recover it.
00:26:07.449  [2024-12-09 04:16:35.572036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.449  [2024-12-09 04:16:35.572063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.449  qpair failed and we were unable to recover it.
00:26:07.449  [2024-12-09 04:16:35.572178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.449  [2024-12-09 04:16:35.572206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.449  qpair failed and we were unable to recover it.
00:26:07.449  [2024-12-09 04:16:35.572294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.449  [2024-12-09 04:16:35.572322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.449  qpair failed and we were unable to recover it.
00:26:07.449  [2024-12-09 04:16:35.572510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.449  [2024-12-09 04:16:35.572576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.449  qpair failed and we were unable to recover it.
00:26:07.449  [2024-12-09 04:16:35.572864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.449  [2024-12-09 04:16:35.572931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.449  qpair failed and we were unable to recover it.
00:26:07.449  [2024-12-09 04:16:35.573229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.449  [2024-12-09 04:16:35.573308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.450  qpair failed and we were unable to recover it.
00:26:07.450  [2024-12-09 04:16:35.573525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.450  [2024-12-09 04:16:35.573581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.450  qpair failed and we were unable to recover it.
00:26:07.450  [2024-12-09 04:16:35.573872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.450  [2024-12-09 04:16:35.573938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.450  qpair failed and we were unable to recover it.
00:26:07.450  [2024-12-09 04:16:35.574232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.450  [2024-12-09 04:16:35.574326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.450  qpair failed and we were unable to recover it.
00:26:07.450  [2024-12-09 04:16:35.574438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.450  [2024-12-09 04:16:35.574464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.450  qpair failed and we were unable to recover it.
00:26:07.450  [2024-12-09 04:16:35.574630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.450  [2024-12-09 04:16:35.574710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.450  qpair failed and we were unable to recover it.
00:26:07.450  [2024-12-09 04:16:35.574972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.450  [2024-12-09 04:16:35.574999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.450  qpair failed and we were unable to recover it.
00:26:07.450  [2024-12-09 04:16:35.575207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.450  [2024-12-09 04:16:35.575292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.450  qpair failed and we were unable to recover it.
00:26:07.450  [2024-12-09 04:16:35.575475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.450  [2024-12-09 04:16:35.575503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.450  qpair failed and we were unable to recover it.
00:26:07.450  [2024-12-09 04:16:35.575619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.450  [2024-12-09 04:16:35.575644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.450  qpair failed and we were unable to recover it.
00:26:07.450  [2024-12-09 04:16:35.575760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.450  [2024-12-09 04:16:35.575787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.450  qpair failed and we were unable to recover it.
00:26:07.450  [2024-12-09 04:16:35.575905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.450  [2024-12-09 04:16:35.575932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.450  qpair failed and we were unable to recover it.
00:26:07.450  [2024-12-09 04:16:35.576050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.450  [2024-12-09 04:16:35.576076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.450  qpair failed and we were unable to recover it.
00:26:07.450  [2024-12-09 04:16:35.576293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.450  [2024-12-09 04:16:35.576343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.450  qpair failed and we were unable to recover it.
00:26:07.450  [2024-12-09 04:16:35.576483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.450  [2024-12-09 04:16:35.576511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.450  qpair failed and we were unable to recover it.
00:26:07.450  [2024-12-09 04:16:35.576678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.450  [2024-12-09 04:16:35.576742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.450  qpair failed and we were unable to recover it.
00:26:07.450  [2024-12-09 04:16:35.576973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.450  [2024-12-09 04:16:35.577038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.450  qpair failed and we were unable to recover it.
00:26:07.450  [2024-12-09 04:16:35.577285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.450  [2024-12-09 04:16:35.577339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.450  qpair failed and we were unable to recover it.
00:26:07.450  [2024-12-09 04:16:35.577536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.450  [2024-12-09 04:16:35.577564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.450  qpair failed and we were unable to recover it.
00:26:07.450  [2024-12-09 04:16:35.577686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.450  [2024-12-09 04:16:35.577714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.450  qpair failed and we were unable to recover it.
00:26:07.450  [2024-12-09 04:16:35.577825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.450  [2024-12-09 04:16:35.577852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.450  qpair failed and we were unable to recover it.
00:26:07.450  [2024-12-09 04:16:35.577964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.450  [2024-12-09 04:16:35.577993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.450  qpair failed and we were unable to recover it.
00:26:07.450  [2024-12-09 04:16:35.578078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.450  [2024-12-09 04:16:35.578104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.450  qpair failed and we were unable to recover it.
00:26:07.450  [2024-12-09 04:16:35.578185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.450  [2024-12-09 04:16:35.578211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.450  qpair failed and we were unable to recover it.
00:26:07.450  [2024-12-09 04:16:35.578354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.450  [2024-12-09 04:16:35.578382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.450  qpair failed and we were unable to recover it.
00:26:07.450  [2024-12-09 04:16:35.578496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.450  [2024-12-09 04:16:35.578522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.450  qpair failed and we were unable to recover it.
00:26:07.450  [2024-12-09 04:16:35.578628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.450  [2024-12-09 04:16:35.578655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.450  qpair failed and we were unable to recover it.
00:26:07.450  [2024-12-09 04:16:35.578764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.450  [2024-12-09 04:16:35.578791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.450  qpair failed and we were unable to recover it.
00:26:07.450  [2024-12-09 04:16:35.578880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.450  [2024-12-09 04:16:35.578951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.450  qpair failed and we were unable to recover it.
00:26:07.450  [2024-12-09 04:16:35.579135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.450  [2024-12-09 04:16:35.579163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.450  qpair failed and we were unable to recover it.
00:26:07.450  [2024-12-09 04:16:35.579278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.450  [2024-12-09 04:16:35.579306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.450  qpair failed and we were unable to recover it.
00:26:07.450  [2024-12-09 04:16:35.579425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.450  [2024-12-09 04:16:35.579498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.450  qpair failed and we were unable to recover it.
00:26:07.450  [2024-12-09 04:16:35.579736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.450  [2024-12-09 04:16:35.579801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.450  qpair failed and we were unable to recover it.
00:26:07.450  [2024-12-09 04:16:35.580097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.450  [2024-12-09 04:16:35.580162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.450  qpair failed and we were unable to recover it.
00:26:07.450  [2024-12-09 04:16:35.580465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.450  [2024-12-09 04:16:35.580492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.450  qpair failed and we were unable to recover it.
00:26:07.450  [2024-12-09 04:16:35.580605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.450  [2024-12-09 04:16:35.580632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.450  qpair failed and we were unable to recover it.
00:26:07.450  [2024-12-09 04:16:35.580795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.450  [2024-12-09 04:16:35.580860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.450  qpair failed and we were unable to recover it.
00:26:07.450  [2024-12-09 04:16:35.581146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.450  [2024-12-09 04:16:35.581211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.450  qpair failed and we were unable to recover it.
00:26:07.451  [2024-12-09 04:16:35.581480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.451  [2024-12-09 04:16:35.581549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.451  qpair failed and we were unable to recover it.
00:26:07.451  [2024-12-09 04:16:35.581842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.451  [2024-12-09 04:16:35.581909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.451  qpair failed and we were unable to recover it.
00:26:07.451  [2024-12-09 04:16:35.582151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.451  [2024-12-09 04:16:35.582218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.451  qpair failed and we were unable to recover it.
00:26:07.451  [2024-12-09 04:16:35.582545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.451  [2024-12-09 04:16:35.582613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.451  qpair failed and we were unable to recover it.
00:26:07.451  [2024-12-09 04:16:35.582918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.451  [2024-12-09 04:16:35.582982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.451  qpair failed and we were unable to recover it.
00:26:07.451  [2024-12-09 04:16:35.583232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.451  [2024-12-09 04:16:35.583321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.451  qpair failed and we were unable to recover it.
00:26:07.451  [2024-12-09 04:16:35.583614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.451  [2024-12-09 04:16:35.583641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.451  qpair failed and we were unable to recover it.
00:26:07.451  [2024-12-09 04:16:35.583772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.451  [2024-12-09 04:16:35.583803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.451  qpair failed and we were unable to recover it.
00:26:07.451  [2024-12-09 04:16:35.583987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.451  [2024-12-09 04:16:35.584052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.451  qpair failed and we were unable to recover it.
00:26:07.451  [2024-12-09 04:16:35.584336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.451  [2024-12-09 04:16:35.584365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.451  qpair failed and we were unable to recover it.
00:26:07.451  [2024-12-09 04:16:35.584487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.451  [2024-12-09 04:16:35.584513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.451  qpair failed and we were unable to recover it.
00:26:07.451  [2024-12-09 04:16:35.584649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.451  [2024-12-09 04:16:35.584677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.451  qpair failed and we were unable to recover it.
00:26:07.451  [2024-12-09 04:16:35.584846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.451  [2024-12-09 04:16:35.584873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.451  qpair failed and we were unable to recover it.
00:26:07.451  [2024-12-09 04:16:35.584956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.451  [2024-12-09 04:16:35.584981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.451  qpair failed and we were unable to recover it.
00:26:07.451  [2024-12-09 04:16:35.585075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.451  [2024-12-09 04:16:35.585102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.451  qpair failed and we were unable to recover it.
00:26:07.451  [2024-12-09 04:16:35.585209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.451  [2024-12-09 04:16:35.585237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.451  qpair failed and we were unable to recover it.
00:26:07.451  [2024-12-09 04:16:35.585329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.451  [2024-12-09 04:16:35.585355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.451  qpair failed and we were unable to recover it.
00:26:07.451  [2024-12-09 04:16:35.585508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.451  [2024-12-09 04:16:35.585574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.451  qpair failed and we were unable to recover it.
00:26:07.451  [2024-12-09 04:16:35.585815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.451  [2024-12-09 04:16:35.585842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.451  qpair failed and we were unable to recover it.
00:26:07.451  [2024-12-09 04:16:35.585949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.451  [2024-12-09 04:16:35.585976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.451  qpair failed and we were unable to recover it.
00:26:07.451  [2024-12-09 04:16:35.586055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.451  [2024-12-09 04:16:35.586107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.451  qpair failed and we were unable to recover it.
00:26:07.451  [2024-12-09 04:16:35.586371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.451  [2024-12-09 04:16:35.586437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.451  qpair failed and we were unable to recover it.
00:26:07.451  [2024-12-09 04:16:35.586654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.451  [2024-12-09 04:16:35.586721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.451  qpair failed and we were unable to recover it.
00:26:07.451  [2024-12-09 04:16:35.587018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.451  [2024-12-09 04:16:35.587083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.451  qpair failed and we were unable to recover it.
00:26:07.451  [2024-12-09 04:16:35.587322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.451  [2024-12-09 04:16:35.587389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.451  qpair failed and we were unable to recover it.
00:26:07.451  [2024-12-09 04:16:35.587685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.451  [2024-12-09 04:16:35.587750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.451  qpair failed and we were unable to recover it.
00:26:07.451  [2024-12-09 04:16:35.587956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.451  [2024-12-09 04:16:35.588020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.451  qpair failed and we were unable to recover it.
00:26:07.451  [2024-12-09 04:16:35.588259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.451  [2024-12-09 04:16:35.588340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.451  qpair failed and we were unable to recover it.
00:26:07.451  [2024-12-09 04:16:35.588626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.451  [2024-12-09 04:16:35.588691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.451  qpair failed and we were unable to recover it.
00:26:07.451  [2024-12-09 04:16:35.588949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.451  [2024-12-09 04:16:35.589016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.451  qpair failed and we were unable to recover it.
00:26:07.451  [2024-12-09 04:16:35.589285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.451  [2024-12-09 04:16:35.589351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.451  qpair failed and we were unable to recover it.
00:26:07.451  [2024-12-09 04:16:35.589647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.451  [2024-12-09 04:16:35.589724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.451  qpair failed and we were unable to recover it.
00:26:07.451  [2024-12-09 04:16:35.589977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.451  [2024-12-09 04:16:35.590042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.451  qpair failed and we were unable to recover it.
00:26:07.451  [2024-12-09 04:16:35.590327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.451  [2024-12-09 04:16:35.590395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.451  qpair failed and we were unable to recover it.
00:26:07.451  [2024-12-09 04:16:35.590703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.451  [2024-12-09 04:16:35.590768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.451  qpair failed and we were unable to recover it.
00:26:07.451  [2024-12-09 04:16:35.591060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.451  [2024-12-09 04:16:35.591126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.451  qpair failed and we were unable to recover it.
00:26:07.451  [2024-12-09 04:16:35.591418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.451  [2024-12-09 04:16:35.591485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.451  qpair failed and we were unable to recover it.
00:26:07.452  [2024-12-09 04:16:35.591772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.452  [2024-12-09 04:16:35.591838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.452  qpair failed and we were unable to recover it.
00:26:07.452  [2024-12-09 04:16:35.592130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.452  [2024-12-09 04:16:35.592156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.452  qpair failed and we were unable to recover it.
00:26:07.452  [2024-12-09 04:16:35.592283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.452  [2024-12-09 04:16:35.592309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.452  qpair failed and we were unable to recover it.
00:26:07.452  [2024-12-09 04:16:35.592472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.452  [2024-12-09 04:16:35.592500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.452  qpair failed and we were unable to recover it.
00:26:07.452  [2024-12-09 04:16:35.592595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.452  [2024-12-09 04:16:35.592622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.452  qpair failed and we were unable to recover it.
00:26:07.452  [2024-12-09 04:16:35.592739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.452  [2024-12-09 04:16:35.592766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.452  qpair failed and we were unable to recover it.
00:26:07.452  [2024-12-09 04:16:35.592857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.452  [2024-12-09 04:16:35.592882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.452  qpair failed and we were unable to recover it.
00:26:07.452  [2024-12-09 04:16:35.592994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.452  [2024-12-09 04:16:35.593021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.452  qpair failed and we were unable to recover it.
00:26:07.452  [2024-12-09 04:16:35.593136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.452  [2024-12-09 04:16:35.593162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.452  qpair failed and we were unable to recover it.
00:26:07.452  [2024-12-09 04:16:35.593293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.452  [2024-12-09 04:16:35.593320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.452  qpair failed and we were unable to recover it.
00:26:07.452  [2024-12-09 04:16:35.593458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.452  [2024-12-09 04:16:35.593489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.452  qpair failed and we were unable to recover it.
00:26:07.452  [2024-12-09 04:16:35.593609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.452  [2024-12-09 04:16:35.593660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.452  qpair failed and we were unable to recover it.
00:26:07.452  [2024-12-09 04:16:35.593853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.452  [2024-12-09 04:16:35.593921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.452  qpair failed and we were unable to recover it.
00:26:07.452  [2024-12-09 04:16:35.594212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.452  [2024-12-09 04:16:35.594292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.452  qpair failed and we were unable to recover it.
00:26:07.452  [2024-12-09 04:16:35.594583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.452  [2024-12-09 04:16:35.594647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.452  qpair failed and we were unable to recover it.
00:26:07.452  [2024-12-09 04:16:35.594834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.452  [2024-12-09 04:16:35.594900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.452  qpair failed and we were unable to recover it.
00:26:07.452  [2024-12-09 04:16:35.595196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.452  [2024-12-09 04:16:35.595261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.452  qpair failed and we were unable to recover it.
00:26:07.452  [2024-12-09 04:16:35.595529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.452  [2024-12-09 04:16:35.595594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.452  qpair failed and we were unable to recover it.
00:26:07.452  [2024-12-09 04:16:35.595779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.452  [2024-12-09 04:16:35.595848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.452  qpair failed and we were unable to recover it.
00:26:07.452  [2024-12-09 04:16:35.596147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.452  [2024-12-09 04:16:35.596214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.452  qpair failed and we were unable to recover it.
00:26:07.452  [2024-12-09 04:16:35.596485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.452  [2024-12-09 04:16:35.596552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.452  qpair failed and we were unable to recover it.
00:26:07.452  [2024-12-09 04:16:35.596799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.452  [2024-12-09 04:16:35.596867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.452  qpair failed and we were unable to recover it.
00:26:07.452  [2024-12-09 04:16:35.597160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.452  [2024-12-09 04:16:35.597226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.452  qpair failed and we were unable to recover it.
00:26:07.452  [2024-12-09 04:16:35.597492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.452  [2024-12-09 04:16:35.597558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.452  qpair failed and we were unable to recover it.
00:26:07.452  [2024-12-09 04:16:35.597820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.452  [2024-12-09 04:16:35.597887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.452  qpair failed and we were unable to recover it.
00:26:07.452  [2024-12-09 04:16:35.598128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.452  [2024-12-09 04:16:35.598195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.452  qpair failed and we were unable to recover it.
00:26:07.452  [2024-12-09 04:16:35.598465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.452  [2024-12-09 04:16:35.598537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.452  qpair failed and we were unable to recover it.
00:26:07.452  [2024-12-09 04:16:35.598754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.452  [2024-12-09 04:16:35.598823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.452  qpair failed and we were unable to recover it.
00:26:07.452  [2024-12-09 04:16:35.599128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.452  [2024-12-09 04:16:35.599193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.452  qpair failed and we were unable to recover it.
00:26:07.452  [2024-12-09 04:16:35.599457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.452  [2024-12-09 04:16:35.599525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.452  qpair failed and we were unable to recover it.
00:26:07.452  [2024-12-09 04:16:35.599775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.452  [2024-12-09 04:16:35.599841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.452  qpair failed and we were unable to recover it.
00:26:07.452  [2024-12-09 04:16:35.600127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.452  [2024-12-09 04:16:35.600194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.452  qpair failed and we were unable to recover it.
00:26:07.452  [2024-12-09 04:16:35.600454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.452  [2024-12-09 04:16:35.600506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.452  qpair failed and we were unable to recover it.
00:26:07.452  [2024-12-09 04:16:35.600659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.452  [2024-12-09 04:16:35.600697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.452  qpair failed and we were unable to recover it.
00:26:07.452  [2024-12-09 04:16:35.600976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.452  [2024-12-09 04:16:35.601042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.452  qpair failed and we were unable to recover it.
00:26:07.452  [2024-12-09 04:16:35.601311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.452  [2024-12-09 04:16:35.601378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.452  qpair failed and we were unable to recover it.
00:26:07.452  [2024-12-09 04:16:35.601639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.452  [2024-12-09 04:16:35.601666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.452  qpair failed and we were unable to recover it.
00:26:07.452  [2024-12-09 04:16:35.601801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.452  [2024-12-09 04:16:35.601828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.453  qpair failed and we were unable to recover it.
00:26:07.453  [2024-12-09 04:16:35.601963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.453  [2024-12-09 04:16:35.601990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.453  qpair failed and we were unable to recover it.
00:26:07.453  [2024-12-09 04:16:35.602128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.453  [2024-12-09 04:16:35.602155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.453  qpair failed and we were unable to recover it.
00:26:07.453  [2024-12-09 04:16:35.602370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.453  [2024-12-09 04:16:35.602440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.453  qpair failed and we were unable to recover it.
00:26:07.453  [2024-12-09 04:16:35.602739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.453  [2024-12-09 04:16:35.602804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.453  qpair failed and we were unable to recover it.
00:26:07.453  [2024-12-09 04:16:35.603100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.453  [2024-12-09 04:16:35.603165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.453  qpair failed and we were unable to recover it.
00:26:07.453  [2024-12-09 04:16:35.603424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.453  [2024-12-09 04:16:35.603492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.453  qpair failed and we were unable to recover it.
00:26:07.453  [2024-12-09 04:16:35.603738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.453  [2024-12-09 04:16:35.603802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.453  qpair failed and we were unable to recover it.
00:26:07.453  [2024-12-09 04:16:35.604095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.453  [2024-12-09 04:16:35.604161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.453  qpair failed and we were unable to recover it.
00:26:07.453  [2024-12-09 04:16:35.604446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.453  [2024-12-09 04:16:35.604515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.453  qpair failed and we were unable to recover it.
00:26:07.453  [2024-12-09 04:16:35.604819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.453  [2024-12-09 04:16:35.604885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.453  qpair failed and we were unable to recover it.
00:26:07.453  [2024-12-09 04:16:35.605177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.453  [2024-12-09 04:16:35.605204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.453  qpair failed and we were unable to recover it.
00:26:07.453  [2024-12-09 04:16:35.605370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.453  [2024-12-09 04:16:35.605434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.453  qpair failed and we were unable to recover it.
00:26:07.453  [2024-12-09 04:16:35.605730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.453  [2024-12-09 04:16:35.605761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.453  qpair failed and we were unable to recover it.
00:26:07.453  [2024-12-09 04:16:35.605855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.453  [2024-12-09 04:16:35.605881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.453  qpair failed and we were unable to recover it.
00:26:07.453  [2024-12-09 04:16:35.606000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.453  [2024-12-09 04:16:35.606027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.453  qpair failed and we were unable to recover it.
00:26:07.453  [2024-12-09 04:16:35.606140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.453  [2024-12-09 04:16:35.606168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.453  qpair failed and we were unable to recover it.
00:26:07.453  [2024-12-09 04:16:35.606365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.453  [2024-12-09 04:16:35.606432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.453  qpair failed and we were unable to recover it.
00:26:07.453  [2024-12-09 04:16:35.606715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.453  [2024-12-09 04:16:35.606780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.453  qpair failed and we were unable to recover it.
00:26:07.453  [2024-12-09 04:16:35.607035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.453  [2024-12-09 04:16:35.607104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.453  qpair failed and we were unable to recover it.
00:26:07.453  [2024-12-09 04:16:35.607367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.453  [2024-12-09 04:16:35.607394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.453  qpair failed and we were unable to recover it.
00:26:07.453  [2024-12-09 04:16:35.607530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.453  [2024-12-09 04:16:35.607557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.453  qpair failed and we were unable to recover it.
00:26:07.453  [2024-12-09 04:16:35.607817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.453  [2024-12-09 04:16:35.607881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.453  qpair failed and we were unable to recover it.
00:26:07.453  [2024-12-09 04:16:35.608187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.453  [2024-12-09 04:16:35.608252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.453  qpair failed and we were unable to recover it.
00:26:07.453  [2024-12-09 04:16:35.608491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.453  [2024-12-09 04:16:35.608557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.453  qpair failed and we were unable to recover it.
00:26:07.453  [2024-12-09 04:16:35.608805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.453  [2024-12-09 04:16:35.608870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.453  qpair failed and we were unable to recover it.
00:26:07.453  [2024-12-09 04:16:35.609066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.453  [2024-12-09 04:16:35.609108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.453  qpair failed and we were unable to recover it.
00:26:07.453  [2024-12-09 04:16:35.609227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.453  [2024-12-09 04:16:35.609255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.453  qpair failed and we were unable to recover it.
00:26:07.453  [2024-12-09 04:16:35.609382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.453  [2024-12-09 04:16:35.609408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.453  qpair failed and we were unable to recover it.
00:26:07.453  [2024-12-09 04:16:35.609496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.453  [2024-12-09 04:16:35.609523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.453  qpair failed and we were unable to recover it.
00:26:07.453  [... the preceding three messages (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeated 102 more times between 04:16:35.609684 and 04:16:35.643934 ...]
00:26:07.456  [2024-12-09 04:16:35.644601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.456  [2024-12-09 04:16:35.644668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.456  qpair failed and we were unable to recover it.
00:26:07.456  [2024-12-09 04:16:35.644951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.456  [2024-12-09 04:16:35.645027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.456  qpair failed and we were unable to recover it.
00:26:07.456  [2024-12-09 04:16:35.645293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.456  [2024-12-09 04:16:35.645361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.456  qpair failed and we were unable to recover it.
00:26:07.456  [2024-12-09 04:16:35.645620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.456  [2024-12-09 04:16:35.645686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.456  qpair failed and we were unable to recover it.
00:26:07.456  [2024-12-09 04:16:35.645970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.456  [2024-12-09 04:16:35.646034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.456  qpair failed and we were unable to recover it.
00:26:07.456  [2024-12-09 04:16:35.646327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.456  [2024-12-09 04:16:35.646395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.456  qpair failed and we were unable to recover it.
00:26:07.456  [2024-12-09 04:16:35.646659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.456  [2024-12-09 04:16:35.646725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.456  qpair failed and we were unable to recover it.
00:26:07.456  [2024-12-09 04:16:35.647026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.456  [2024-12-09 04:16:35.647091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.456  qpair failed and we were unable to recover it.
00:26:07.456  [2024-12-09 04:16:35.647345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.456  [2024-12-09 04:16:35.647412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.456  qpair failed and we were unable to recover it.
00:26:07.456  [2024-12-09 04:16:35.647666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.456  [2024-12-09 04:16:35.647731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.456  qpair failed and we were unable to recover it.
00:26:07.456  [2024-12-09 04:16:35.648016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.456  [2024-12-09 04:16:35.648081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.457  qpair failed and we were unable to recover it.
00:26:07.457  [2024-12-09 04:16:35.648383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.457  [2024-12-09 04:16:35.648449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.457  qpair failed and we were unable to recover it.
00:26:07.457  [2024-12-09 04:16:35.648698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.457  [2024-12-09 04:16:35.648765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.457  qpair failed and we were unable to recover it.
00:26:07.457  [2024-12-09 04:16:35.649061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.457  [2024-12-09 04:16:35.649126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.457  qpair failed and we were unable to recover it.
00:26:07.457  [2024-12-09 04:16:35.649427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.457  [2024-12-09 04:16:35.649493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.457  qpair failed and we were unable to recover it.
00:26:07.457  [2024-12-09 04:16:35.649728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.457  [2024-12-09 04:16:35.649793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.457  qpair failed and we were unable to recover it.
00:26:07.457  [2024-12-09 04:16:35.649977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.457  [2024-12-09 04:16:35.650043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.457  qpair failed and we were unable to recover it.
00:26:07.457  [2024-12-09 04:16:35.650324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.457  [2024-12-09 04:16:35.650392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.457  qpair failed and we were unable to recover it.
00:26:07.457  [2024-12-09 04:16:35.650606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.457  [2024-12-09 04:16:35.650672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.457  qpair failed and we were unable to recover it.
00:26:07.457  [2024-12-09 04:16:35.650921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.457  [2024-12-09 04:16:35.650989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.457  qpair failed and we were unable to recover it.
00:26:07.457  [2024-12-09 04:16:35.651232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.457  [2024-12-09 04:16:35.651326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.457  qpair failed and we were unable to recover it.
00:26:07.457  [2024-12-09 04:16:35.651630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.457  [2024-12-09 04:16:35.651696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.457  qpair failed and we were unable to recover it.
00:26:07.457  [2024-12-09 04:16:35.651948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.457  [2024-12-09 04:16:35.652012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.457  qpair failed and we were unable to recover it.
00:26:07.457  [2024-12-09 04:16:35.652266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.457  [2024-12-09 04:16:35.652348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.457  qpair failed and we were unable to recover it.
00:26:07.457  [2024-12-09 04:16:35.652592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.457  [2024-12-09 04:16:35.652659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.457  qpair failed and we were unable to recover it.
00:26:07.457  [2024-12-09 04:16:35.652943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.457  [2024-12-09 04:16:35.653008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.457  qpair failed and we were unable to recover it.
00:26:07.457  [2024-12-09 04:16:35.653213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.457  [2024-12-09 04:16:35.653293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.457  qpair failed and we were unable to recover it.
00:26:07.457  [2024-12-09 04:16:35.653584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.457  [2024-12-09 04:16:35.653658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.457  qpair failed and we were unable to recover it.
00:26:07.457  [2024-12-09 04:16:35.653928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.457  [2024-12-09 04:16:35.653995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.457  qpair failed and we were unable to recover it.
00:26:07.457  [2024-12-09 04:16:35.654266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.457  [2024-12-09 04:16:35.654348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.457  qpair failed and we were unable to recover it.
00:26:07.457  [2024-12-09 04:16:35.654542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.457  [2024-12-09 04:16:35.654612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.457  qpair failed and we were unable to recover it.
00:26:07.457  [2024-12-09 04:16:35.654910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.457  [2024-12-09 04:16:35.654975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.457  qpair failed and we were unable to recover it.
00:26:07.457  [2024-12-09 04:16:35.655251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.457  [2024-12-09 04:16:35.655334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.457  qpair failed and we were unable to recover it.
00:26:07.457  [2024-12-09 04:16:35.655629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.457  [2024-12-09 04:16:35.655693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.457  qpair failed and we were unable to recover it.
00:26:07.457  [2024-12-09 04:16:35.655998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.457  [2024-12-09 04:16:35.656063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.457  qpair failed and we were unable to recover it.
00:26:07.457  [2024-12-09 04:16:35.656365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.457  [2024-12-09 04:16:35.656432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.457  qpair failed and we were unable to recover it.
00:26:07.457  [2024-12-09 04:16:35.656679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.457  [2024-12-09 04:16:35.656746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.457  qpair failed and we were unable to recover it.
00:26:07.457  [2024-12-09 04:16:35.657047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.457  [2024-12-09 04:16:35.657122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.457  qpair failed and we were unable to recover it.
00:26:07.457  [2024-12-09 04:16:35.657424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.457  [2024-12-09 04:16:35.657491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.457  qpair failed and we were unable to recover it.
00:26:07.457  [2024-12-09 04:16:35.657780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.457  [2024-12-09 04:16:35.657845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.457  qpair failed and we were unable to recover it.
00:26:07.457  [2024-12-09 04:16:35.658141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.457  [2024-12-09 04:16:35.658207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.457  qpair failed and we were unable to recover it.
00:26:07.457  [2024-12-09 04:16:35.658471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.457  [2024-12-09 04:16:35.658559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.457  qpair failed and we were unable to recover it.
00:26:07.457  [2024-12-09 04:16:35.658797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.457  [2024-12-09 04:16:35.658863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.457  qpair failed and we were unable to recover it.
00:26:07.457  [2024-12-09 04:16:35.659154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.457  [2024-12-09 04:16:35.659219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.457  qpair failed and we were unable to recover it.
00:26:07.457  [2024-12-09 04:16:35.659495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.457  [2024-12-09 04:16:35.659562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.457  qpair failed and we were unable to recover it.
00:26:07.457  [2024-12-09 04:16:35.659855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.457  [2024-12-09 04:16:35.659920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.457  qpair failed and we were unable to recover it.
00:26:07.457  [2024-12-09 04:16:35.660108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.457  [2024-12-09 04:16:35.660171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.457  qpair failed and we were unable to recover it.
00:26:07.457  [2024-12-09 04:16:35.660462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.458  [2024-12-09 04:16:35.660527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.458  qpair failed and we were unable to recover it.
00:26:07.458  [2024-12-09 04:16:35.660779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.458  [2024-12-09 04:16:35.660846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.458  qpair failed and we were unable to recover it.
00:26:07.458  [2024-12-09 04:16:35.661135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.458  [2024-12-09 04:16:35.661199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.458  qpair failed and we were unable to recover it.
00:26:07.458  [2024-12-09 04:16:35.661508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.458  [2024-12-09 04:16:35.661574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.458  qpair failed and we were unable to recover it.
00:26:07.458  [2024-12-09 04:16:35.661822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.458  [2024-12-09 04:16:35.661890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.458  qpair failed and we were unable to recover it.
00:26:07.458  [2024-12-09 04:16:35.662083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.458  [2024-12-09 04:16:35.662150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.458  qpair failed and we were unable to recover it.
00:26:07.458  [2024-12-09 04:16:35.662426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.458  [2024-12-09 04:16:35.662492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.458  qpair failed and we were unable to recover it.
00:26:07.458  [2024-12-09 04:16:35.662755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.458  [2024-12-09 04:16:35.662820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.458  qpair failed and we were unable to recover it.
00:26:07.458  [2024-12-09 04:16:35.663075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.458  [2024-12-09 04:16:35.663141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.458  qpair failed and we were unable to recover it.
00:26:07.458  [2024-12-09 04:16:35.663435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.458  [2024-12-09 04:16:35.663502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.458  qpair failed and we were unable to recover it.
00:26:07.458  [2024-12-09 04:16:35.663788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.458  [2024-12-09 04:16:35.663855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.458  qpair failed and we were unable to recover it.
00:26:07.458  [2024-12-09 04:16:35.664152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.458  [2024-12-09 04:16:35.664217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.458  qpair failed and we were unable to recover it.
00:26:07.458  [2024-12-09 04:16:35.664500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.458  [2024-12-09 04:16:35.664566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.458  qpair failed and we were unable to recover it.
00:26:07.458  [2024-12-09 04:16:35.664757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.458  [2024-12-09 04:16:35.664825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.458  qpair failed and we were unable to recover it.
00:26:07.458  [2024-12-09 04:16:35.665073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.458  [2024-12-09 04:16:35.665141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.458  qpair failed and we were unable to recover it.
00:26:07.458  [2024-12-09 04:16:35.665392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.458  [2024-12-09 04:16:35.665461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.458  qpair failed and we were unable to recover it.
00:26:07.458  [2024-12-09 04:16:35.665759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.458  [2024-12-09 04:16:35.665824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.458  qpair failed and we were unable to recover it.
00:26:07.458  [2024-12-09 04:16:35.666067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.458  [2024-12-09 04:16:35.666133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.458  qpair failed and we were unable to recover it.
00:26:07.458  [2024-12-09 04:16:35.666320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.458  [2024-12-09 04:16:35.666386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.458  qpair failed and we were unable to recover it.
00:26:07.458  [2024-12-09 04:16:35.666613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.458  [2024-12-09 04:16:35.666677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.458  qpair failed and we were unable to recover it.
00:26:07.458  [2024-12-09 04:16:35.666934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.458  [2024-12-09 04:16:35.667002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.458  qpair failed and we were unable to recover it.
00:26:07.458  [2024-12-09 04:16:35.667226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.458  [2024-12-09 04:16:35.667319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.458  qpair failed and we were unable to recover it.
00:26:07.458  [2024-12-09 04:16:35.667535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.458  [2024-12-09 04:16:35.667601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.458  qpair failed and we were unable to recover it.
00:26:07.458  [2024-12-09 04:16:35.667895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.458  [2024-12-09 04:16:35.667960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.458  qpair failed and we were unable to recover it.
00:26:07.458  [2024-12-09 04:16:35.668257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.458  [2024-12-09 04:16:35.668355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.458  qpair failed and we were unable to recover it.
00:26:07.458  [2024-12-09 04:16:35.668645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.458  [2024-12-09 04:16:35.668710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.458  qpair failed and we were unable to recover it.
00:26:07.458  [2024-12-09 04:16:35.668952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.458  [2024-12-09 04:16:35.669017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.458  qpair failed and we were unable to recover it.
00:26:07.458  [2024-12-09 04:16:35.669317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.458  [2024-12-09 04:16:35.669384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.458  qpair failed and we were unable to recover it.
00:26:07.458  [2024-12-09 04:16:35.669682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.458  [2024-12-09 04:16:35.669749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.458  qpair failed and we were unable to recover it.
00:26:07.458  [2024-12-09 04:16:35.670035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.458  [2024-12-09 04:16:35.670099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.458  qpair failed and we were unable to recover it.
00:26:07.458  [2024-12-09 04:16:35.670326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.458  [2024-12-09 04:16:35.670393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.458  qpair failed and we were unable to recover it.
00:26:07.458  [2024-12-09 04:16:35.670647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.458  [2024-12-09 04:16:35.670712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.458  qpair failed and we were unable to recover it.
00:26:07.458  [2024-12-09 04:16:35.671008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.458  [2024-12-09 04:16:35.671073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.458  qpair failed and we were unable to recover it.
00:26:07.459  [2024-12-09 04:16:35.671330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.459  [2024-12-09 04:16:35.671397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.459  qpair failed and we were unable to recover it.
00:26:07.459  [2024-12-09 04:16:35.671683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.459  [2024-12-09 04:16:35.671758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.459  qpair failed and we were unable to recover it.
00:26:07.459  [2024-12-09 04:16:35.672067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.459  [2024-12-09 04:16:35.672131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.459  qpair failed and we were unable to recover it.
00:26:07.459  [2024-12-09 04:16:35.672377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.459  [2024-12-09 04:16:35.672443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.459  qpair failed and we were unable to recover it.
00:26:07.459  [2024-12-09 04:16:35.672657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.459  [2024-12-09 04:16:35.672722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.459  qpair failed and we were unable to recover it.
00:26:07.459  [2024-12-09 04:16:35.673014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.459  [2024-12-09 04:16:35.673079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.459  qpair failed and we were unable to recover it.
00:26:07.459  [2024-12-09 04:16:35.673372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.459  [2024-12-09 04:16:35.673440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.459  qpair failed and we were unable to recover it.
00:26:07.459  [2024-12-09 04:16:35.673734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.459  [2024-12-09 04:16:35.673799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.459  qpair failed and we were unable to recover it.
00:26:07.459  [2024-12-09 04:16:35.674065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.459  [2024-12-09 04:16:35.674129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.459  qpair failed and we were unable to recover it.
00:26:07.459  [2024-12-09 04:16:35.674420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.459  [2024-12-09 04:16:35.674486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.459  qpair failed and we were unable to recover it.
00:26:07.459  [2024-12-09 04:16:35.674785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.459  [2024-12-09 04:16:35.674850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.459  qpair failed and we were unable to recover it.
00:26:07.459  [2024-12-09 04:16:35.675153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.459  [2024-12-09 04:16:35.675218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.459  qpair failed and we were unable to recover it.
00:26:07.459  [2024-12-09 04:16:35.675490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.459  [2024-12-09 04:16:35.675558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.459  qpair failed and we were unable to recover it.
00:26:07.459  [2024-12-09 04:16:35.675814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.459  [2024-12-09 04:16:35.675880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.459  qpair failed and we were unable to recover it.
00:26:07.459  [2024-12-09 04:16:35.676167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.459  [2024-12-09 04:16:35.676231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.459  qpair failed and we were unable to recover it.
00:26:07.459  [2024-12-09 04:16:35.676482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.459  [2024-12-09 04:16:35.676547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.459  qpair failed and we were unable to recover it.
00:26:07.459  [2024-12-09 04:16:35.676812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.459  [2024-12-09 04:16:35.676878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.459  qpair failed and we were unable to recover it.
00:26:07.459  [2024-12-09 04:16:35.677178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.459  [2024-12-09 04:16:35.677243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.459  qpair failed and we were unable to recover it.
00:26:07.459  [2024-12-09 04:16:35.677450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.459  [2024-12-09 04:16:35.677515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.459  qpair failed and we were unable to recover it.
00:26:07.459  [2024-12-09 04:16:35.677763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.459  [2024-12-09 04:16:35.677829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.459  qpair failed and we were unable to recover it.
00:26:07.459  [2024-12-09 04:16:35.678082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.459  [2024-12-09 04:16:35.678146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.459  qpair failed and we were unable to recover it.
00:26:07.459  [2024-12-09 04:16:35.678418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.459  [2024-12-09 04:16:35.678453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.459  qpair failed and we were unable to recover it.
00:26:07.459  [2024-12-09 04:16:35.678608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.459  [2024-12-09 04:16:35.678644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.459  qpair failed and we were unable to recover it.
00:26:07.459  [2024-12-09 04:16:35.678781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.459  [2024-12-09 04:16:35.678817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.459  qpair failed and we were unable to recover it.
00:26:07.459  [2024-12-09 04:16:35.678949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.459  [2024-12-09 04:16:35.678984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.459  qpair failed and we were unable to recover it.
00:26:07.459  [2024-12-09 04:16:35.679154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.459  [2024-12-09 04:16:35.679220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.459  qpair failed and we were unable to recover it.
00:26:07.459  [2024-12-09 04:16:35.679483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.459  [2024-12-09 04:16:35.679582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.459  qpair failed and we were unable to recover it.
00:26:07.459  [2024-12-09 04:16:35.679884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.459  [2024-12-09 04:16:35.679953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.459  qpair failed and we were unable to recover it.
00:26:07.459  [2024-12-09 04:16:35.680268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.459  [2024-12-09 04:16:35.680358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.459  qpair failed and we were unable to recover it.
00:26:07.459  [2024-12-09 04:16:35.680627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.459  [2024-12-09 04:16:35.680696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.459  qpair failed and we were unable to recover it.
00:26:07.459  [2024-12-09 04:16:35.680996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.459  [2024-12-09 04:16:35.681061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.459  qpair failed and we were unable to recover it.
00:26:07.459  [2024-12-09 04:16:35.681362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.459  [2024-12-09 04:16:35.681428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.459  qpair failed and we were unable to recover it.
00:26:07.459  [2024-12-09 04:16:35.681716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.459  [2024-12-09 04:16:35.681781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.459  qpair failed and we were unable to recover it.
00:26:07.459  [2024-12-09 04:16:35.682065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.459  [2024-12-09 04:16:35.682130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.459  qpair failed and we were unable to recover it.
00:26:07.459  [2024-12-09 04:16:35.682391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.459  [2024-12-09 04:16:35.682456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.459  qpair failed and we were unable to recover it.
00:26:07.459  [2024-12-09 04:16:35.682753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.459  [2024-12-09 04:16:35.682819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.459  qpair failed and we were unable to recover it.
00:26:07.459  [2024-12-09 04:16:35.683071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.459  [2024-12-09 04:16:35.683135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.459  qpair failed and we were unable to recover it.
00:26:07.459  [2024-12-09 04:16:35.683415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.460  [2024-12-09 04:16:35.683481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.460  qpair failed and we were unable to recover it.
00:26:07.460  [2024-12-09 04:16:35.683781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.460  [2024-12-09 04:16:35.683845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.460  qpair failed and we were unable to recover it.
00:26:07.460  [2024-12-09 04:16:35.684138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.460  [2024-12-09 04:16:35.684202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.460  qpair failed and we were unable to recover it.
00:26:07.460  [2024-12-09 04:16:35.684441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.460  [2024-12-09 04:16:35.684471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.460  qpair failed and we were unable to recover it.
00:26:07.460  [2024-12-09 04:16:35.684623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.460  [2024-12-09 04:16:35.684654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.460  qpair failed and we were unable to recover it.
00:26:07.460  [2024-12-09 04:16:35.684772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.460  [2024-12-09 04:16:35.684802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.460  qpair failed and we were unable to recover it.
00:26:07.460  [2024-12-09 04:16:35.684914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.460  [2024-12-09 04:16:35.684945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.460  qpair failed and we were unable to recover it.
00:26:07.460  [2024-12-09 04:16:35.685230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.460  [2024-12-09 04:16:35.685259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.460  qpair failed and we were unable to recover it.
00:26:07.460  [2024-12-09 04:16:35.685358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.460  [2024-12-09 04:16:35.685386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.460  qpair failed and we were unable to recover it.
00:26:07.460  [2024-12-09 04:16:35.685532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.460  [2024-12-09 04:16:35.685592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.460  qpair failed and we were unable to recover it.
00:26:07.460  [2024-12-09 04:16:35.685781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.460  [2024-12-09 04:16:35.685863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.460  qpair failed and we were unable to recover it.
00:26:07.460  [2024-12-09 04:16:35.686063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.460  [2024-12-09 04:16:35.686122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.460  qpair failed and we were unable to recover it.
00:26:07.460  [2024-12-09 04:16:35.686378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.460  [2024-12-09 04:16:35.686408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.460  qpair failed and we were unable to recover it.
00:26:07.460  [2024-12-09 04:16:35.686536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.460  [2024-12-09 04:16:35.686584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.460  qpair failed and we were unable to recover it.
00:26:07.460  [2024-12-09 04:16:35.686706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.460  [2024-12-09 04:16:35.686734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.460  qpair failed and we were unable to recover it.
00:26:07.460  [2024-12-09 04:16:35.686886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.460  [2024-12-09 04:16:35.686915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.460  qpair failed and we were unable to recover it.
00:26:07.460  [2024-12-09 04:16:35.687030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.460  [2024-12-09 04:16:35.687064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.460  qpair failed and we were unable to recover it.
00:26:07.460  [2024-12-09 04:16:35.687196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.460  [2024-12-09 04:16:35.687229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.460  qpair failed and we were unable to recover it.
00:26:07.460  [2024-12-09 04:16:35.687353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.460  [2024-12-09 04:16:35.687387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.460  qpair failed and we were unable to recover it.
00:26:07.460  [2024-12-09 04:16:35.687488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.460  [2024-12-09 04:16:35.687517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.460  qpair failed and we were unable to recover it.
00:26:07.460  [2024-12-09 04:16:35.687630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.460  [2024-12-09 04:16:35.687663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.460  qpair failed and we were unable to recover it.
00:26:07.460  [2024-12-09 04:16:35.687807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.460  [2024-12-09 04:16:35.687841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.460  qpair failed and we were unable to recover it.
00:26:07.460  [2024-12-09 04:16:35.688070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.460  [2024-12-09 04:16:35.688134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.460  qpair failed and we were unable to recover it.
00:26:07.460  [2024-12-09 04:16:35.688261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.460  [2024-12-09 04:16:35.688299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.460  qpair failed and we were unable to recover it.
00:26:07.460  [2024-12-09 04:16:35.688452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.460  [2024-12-09 04:16:35.688481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.460  qpair failed and we were unable to recover it.
00:26:07.460  [2024-12-09 04:16:35.688607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.460  [2024-12-09 04:16:35.688636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.460  qpair failed and we were unable to recover it.
00:26:07.460  [2024-12-09 04:16:35.688825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.460  [2024-12-09 04:16:35.688889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.460  qpair failed and we were unable to recover it.
00:26:07.460  [2024-12-09 04:16:35.689118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.460  [2024-12-09 04:16:35.689182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.460  qpair failed and we were unable to recover it.
00:26:07.460  [2024-12-09 04:16:35.689406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.460  [2024-12-09 04:16:35.689435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.460  qpair failed and we were unable to recover it.
00:26:07.460  [2024-12-09 04:16:35.689538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.460  [2024-12-09 04:16:35.689567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.460  qpair failed and we were unable to recover it.
00:26:07.460  [2024-12-09 04:16:35.689757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.460  [2024-12-09 04:16:35.689840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.460  qpair failed and we were unable to recover it.
00:26:07.460  [2024-12-09 04:16:35.690046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.460  [2024-12-09 04:16:35.690123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.460  qpair failed and we were unable to recover it.
00:26:07.460  [2024-12-09 04:16:35.690285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.460  [2024-12-09 04:16:35.690334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.460  qpair failed and we were unable to recover it.
00:26:07.460  [2024-12-09 04:16:35.690447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.460  [2024-12-09 04:16:35.690476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.460  qpair failed and we were unable to recover it.
00:26:07.460  [2024-12-09 04:16:35.690592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.460  [2024-12-09 04:16:35.690621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.460  qpair failed and we were unable to recover it.
00:26:07.460  [2024-12-09 04:16:35.690721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.460  [2024-12-09 04:16:35.690750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.460  qpair failed and we were unable to recover it.
00:26:07.460  [2024-12-09 04:16:35.690890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.461  [2024-12-09 04:16:35.690923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.461  qpair failed and we were unable to recover it.
00:26:07.461  [2024-12-09 04:16:35.691062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.461  [2024-12-09 04:16:35.691093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.461  qpair failed and we were unable to recover it.
00:26:07.461  [2024-12-09 04:16:35.691237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.461  [2024-12-09 04:16:35.691266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.461  qpair failed and we were unable to recover it.
00:26:07.461  [2024-12-09 04:16:35.691485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.461  [2024-12-09 04:16:35.691515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.461  qpair failed and we were unable to recover it.
00:26:07.461  [2024-12-09 04:16:35.691607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.461  [2024-12-09 04:16:35.691636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.461  qpair failed and we were unable to recover it.
00:26:07.461  [2024-12-09 04:16:35.691796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.461  [2024-12-09 04:16:35.691864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.461  qpair failed and we were unable to recover it.
00:26:07.461  [2024-12-09 04:16:35.692074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.461  [2024-12-09 04:16:35.692139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.461  qpair failed and we were unable to recover it.
00:26:07.461  [2024-12-09 04:16:35.692359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.461  [2024-12-09 04:16:35.692389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.461  qpair failed and we were unable to recover it.
00:26:07.461  [2024-12-09 04:16:35.692516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.461  [2024-12-09 04:16:35.692545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.461  qpair failed and we were unable to recover it.
00:26:07.461  [2024-12-09 04:16:35.692734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.461  [2024-12-09 04:16:35.692768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.461  qpair failed and we were unable to recover it.
00:26:07.461  [2024-12-09 04:16:35.692974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.461  [2024-12-09 04:16:35.693038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.461  qpair failed and we were unable to recover it.
00:26:07.461  [2024-12-09 04:16:35.693281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.461  [2024-12-09 04:16:35.693328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.461  qpair failed and we were unable to recover it.
00:26:07.461  [2024-12-09 04:16:35.693448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.461  [2024-12-09 04:16:35.693477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.461  qpair failed and we were unable to recover it.
00:26:07.461  [2024-12-09 04:16:35.693595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.461  [2024-12-09 04:16:35.693627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.461  qpair failed and we were unable to recover it.
00:26:07.461  [2024-12-09 04:16:35.693754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.461  [2024-12-09 04:16:35.693812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.461  qpair failed and we were unable to recover it.
00:26:07.461  [2024-12-09 04:16:35.694106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.461  [2024-12-09 04:16:35.694172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.461  qpair failed and we were unable to recover it.
00:26:07.461  [2024-12-09 04:16:35.694391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.461  [2024-12-09 04:16:35.694420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.461  qpair failed and we were unable to recover it.
00:26:07.461  [2024-12-09 04:16:35.694545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.461  [2024-12-09 04:16:35.694575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.461  qpair failed and we were unable to recover it.
00:26:07.461  [2024-12-09 04:16:35.694743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.461  [2024-12-09 04:16:35.694805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.461  qpair failed and we were unable to recover it.
00:26:07.461  [2024-12-09 04:16:35.695044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.461  [2024-12-09 04:16:35.695108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.461  qpair failed and we were unable to recover it.
00:26:07.461  [2024-12-09 04:16:35.695340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.461  [2024-12-09 04:16:35.695370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.461  qpair failed and we were unable to recover it.
00:26:07.461  [2024-12-09 04:16:35.695495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.461  [2024-12-09 04:16:35.695523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.461  qpair failed and we were unable to recover it.
00:26:07.461  [... preceding three-line connect()/qpair error repeated 28 more times for tqpair=0x1818fa0 (04:16:35.695609 through 04:16:35.701457), all with errno = 111 ...]
00:26:07.462  [2024-12-09 04:16:35.701567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.462  [2024-12-09 04:16:35.701613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.462  qpair failed and we were unable to recover it.
00:26:07.462  [... same three-line error repeated 15 more times for tqpair=0x7fe5b8000b90 (04:16:35.701769 through 04:16:35.704184) ...]
00:26:07.462  [2024-12-09 04:16:35.704287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.462  [2024-12-09 04:16:35.704323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.462  qpair failed and we were unable to recover it.
00:26:07.462  [... same three-line error repeated 2 more times for tqpair=0x1818fa0 (04:16:35.704480 through 04:16:35.704771) ...]
00:26:07.462  [2024-12-09 04:16:35.704998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.462  [2024-12-09 04:16:35.705042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.462  qpair failed and we were unable to recover it.
00:26:07.462  [... same three-line error repeated 20 more times for tqpair=0x7fe5c0000b90 (04:16:35.705171 through 04:16:35.709664) ...]
00:26:07.463  [2024-12-09 04:16:35.709891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.463  [2024-12-09 04:16:35.709961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.463  qpair failed and we were unable to recover it.
00:26:07.463  [... same three-line error repeated 19 more times for tqpair=0x7fe5b8000b90 (04:16:35.710127 through 04:16:35.713225) ...]
00:26:07.464  [2024-12-09 04:16:35.713459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.464  [2024-12-09 04:16:35.713495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.464  qpair failed and we were unable to recover it.
00:26:07.464  [2024-12-09 04:16:35.713736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.464  [2024-12-09 04:16:35.713837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.464  qpair failed and we were unable to recover it.
00:26:07.464  [... same three-line error repeated 12 more times for tqpair=0x7fe5b4000b90 (04:16:35.714069 through 04:16:35.716247) ...]
00:26:07.464  [2024-12-09 04:16:35.716391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.464  [2024-12-09 04:16:35.716419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.464  qpair failed and we were unable to recover it.
00:26:07.464  [2024-12-09 04:16:35.716595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.464  [2024-12-09 04:16:35.716629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.464  qpair failed and we were unable to recover it.
00:26:07.464  [2024-12-09 04:16:35.716955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.464  [2024-12-09 04:16:35.717035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.464  qpair failed and we were unable to recover it.
00:26:07.464  [2024-12-09 04:16:35.717179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.464  [2024-12-09 04:16:35.717212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.464  qpair failed and we were unable to recover it.
00:26:07.464  [2024-12-09 04:16:35.717344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.464  [2024-12-09 04:16:35.717375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.464  qpair failed and we were unable to recover it.
00:26:07.464  [2024-12-09 04:16:35.717485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.464  [2024-12-09 04:16:35.717518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.464  qpair failed and we were unable to recover it.
00:26:07.464  [2024-12-09 04:16:35.717681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.464  [2024-12-09 04:16:35.717735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.464  qpair failed and we were unable to recover it.
00:26:07.464  [2024-12-09 04:16:35.718055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.464  [2024-12-09 04:16:35.718122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.464  qpair failed and we were unable to recover it.
00:26:07.464  [2024-12-09 04:16:35.718373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.464  [2024-12-09 04:16:35.718404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.464  qpair failed and we were unable to recover it.
00:26:07.464  [2024-12-09 04:16:35.718498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.464  [2024-12-09 04:16:35.718537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.464  qpair failed and we were unable to recover it.
00:26:07.464  [2024-12-09 04:16:35.718714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.464  [2024-12-09 04:16:35.718752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.464  qpair failed and we were unable to recover it.
00:26:07.464  [2024-12-09 04:16:35.718976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.464  [2024-12-09 04:16:35.719040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.464  qpair failed and we were unable to recover it.
00:26:07.464  [2024-12-09 04:16:35.719190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.464  [2024-12-09 04:16:35.719222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.464  qpair failed and we were unable to recover it.
00:26:07.464  [2024-12-09 04:16:35.719337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.464  [2024-12-09 04:16:35.719366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.464  qpair failed and we were unable to recover it.
00:26:07.464  [2024-12-09 04:16:35.719490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.464  [2024-12-09 04:16:35.719518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.464  qpair failed and we were unable to recover it.
00:26:07.464  [2024-12-09 04:16:35.719657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.464  [2024-12-09 04:16:35.719705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.464  qpair failed and we were unable to recover it.
00:26:07.464  [2024-12-09 04:16:35.719863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.464  [2024-12-09 04:16:35.719935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.464  qpair failed and we were unable to recover it.
00:26:07.464  [2024-12-09 04:16:35.720044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.464  [2024-12-09 04:16:35.720075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.464  qpair failed and we were unable to recover it.
00:26:07.464  [2024-12-09 04:16:35.720197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.464  [2024-12-09 04:16:35.720224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.464  qpair failed and we were unable to recover it.
00:26:07.464  [2024-12-09 04:16:35.720346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.464  [2024-12-09 04:16:35.720382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.464  qpair failed and we were unable to recover it.
00:26:07.464  [2024-12-09 04:16:35.720481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.464  [2024-12-09 04:16:35.720509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.464  qpair failed and we were unable to recover it.
00:26:07.464  [2024-12-09 04:16:35.720636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.464  [2024-12-09 04:16:35.720667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.464  qpair failed and we were unable to recover it.
00:26:07.464  [2024-12-09 04:16:35.720826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.464  [2024-12-09 04:16:35.720859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.464  qpair failed and we were unable to recover it.
00:26:07.464  [2024-12-09 04:16:35.720991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.464  [2024-12-09 04:16:35.721037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.464  qpair failed and we were unable to recover it.
00:26:07.464  [2024-12-09 04:16:35.721220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465  [2024-12-09 04:16:35.721250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.465  qpair failed and we were unable to recover it.
00:26:07.465  [2024-12-09 04:16:35.721392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465  [2024-12-09 04:16:35.721422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.465  qpair failed and we were unable to recover it.
00:26:07.465  [2024-12-09 04:16:35.721574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465  [2024-12-09 04:16:35.721602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.465  qpair failed and we were unable to recover it.
00:26:07.465  [2024-12-09 04:16:35.721721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465  [2024-12-09 04:16:35.721767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.465  qpair failed and we were unable to recover it.
00:26:07.465  [2024-12-09 04:16:35.721873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465  [2024-12-09 04:16:35.721905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.465  qpair failed and we were unable to recover it.
00:26:07.465  [2024-12-09 04:16:35.722039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465  [2024-12-09 04:16:35.722072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.465  qpair failed and we were unable to recover it.
00:26:07.465  [2024-12-09 04:16:35.722244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465  [2024-12-09 04:16:35.722281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.465  qpair failed and we were unable to recover it.
00:26:07.465  [2024-12-09 04:16:35.722425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465  [2024-12-09 04:16:35.722458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.465  qpair failed and we were unable to recover it.
00:26:07.465  [2024-12-09 04:16:35.722568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465  [2024-12-09 04:16:35.722599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.465  qpair failed and we were unable to recover it.
00:26:07.465  [2024-12-09 04:16:35.722719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465  [2024-12-09 04:16:35.722751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.465  qpair failed and we were unable to recover it.
00:26:07.465  [2024-12-09 04:16:35.722914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465  [2024-12-09 04:16:35.722948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.465  qpair failed and we were unable to recover it.
00:26:07.465  [2024-12-09 04:16:35.723150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465  [2024-12-09 04:16:35.723223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.465  qpair failed and we were unable to recover it.
00:26:07.465  [2024-12-09 04:16:35.723344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465  [2024-12-09 04:16:35.723374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.465  qpair failed and we were unable to recover it.
00:26:07.465  [2024-12-09 04:16:35.723527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465  [2024-12-09 04:16:35.723566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.465  qpair failed and we were unable to recover it.
00:26:07.465  [2024-12-09 04:16:35.723675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465  [2024-12-09 04:16:35.723708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.465  qpair failed and we were unable to recover it.
00:26:07.465  [2024-12-09 04:16:35.723921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465  [2024-12-09 04:16:35.723988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.465  qpair failed and we were unable to recover it.
00:26:07.465  [2024-12-09 04:16:35.724236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465  [2024-12-09 04:16:35.724267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.465  qpair failed and we were unable to recover it.
00:26:07.465  [2024-12-09 04:16:35.724427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465  [2024-12-09 04:16:35.724457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.465  qpair failed and we were unable to recover it.
00:26:07.465  [2024-12-09 04:16:35.724610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465  [2024-12-09 04:16:35.724643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.465  qpair failed and we were unable to recover it.
00:26:07.465  [2024-12-09 04:16:35.724830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465  [2024-12-09 04:16:35.724881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.465  qpair failed and we were unable to recover it.
00:26:07.465  [2024-12-09 04:16:35.725039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465  [2024-12-09 04:16:35.725074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.465  qpair failed and we were unable to recover it.
00:26:07.465  [2024-12-09 04:16:35.725241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465  [2024-12-09 04:16:35.725277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.465  qpair failed and we were unable to recover it.
00:26:07.465  [2024-12-09 04:16:35.725383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465  [2024-12-09 04:16:35.725438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.465  qpair failed and we were unable to recover it.
00:26:07.465  [2024-12-09 04:16:35.725603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465  [2024-12-09 04:16:35.725653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.465  qpair failed and we were unable to recover it.
00:26:07.465  [2024-12-09 04:16:35.725832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465  [2024-12-09 04:16:35.725863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.465  qpair failed and we were unable to recover it.
00:26:07.465  [2024-12-09 04:16:35.726055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465  [2024-12-09 04:16:35.726107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.465  qpair failed and we were unable to recover it.
00:26:07.465  [2024-12-09 04:16:35.726228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465  [2024-12-09 04:16:35.726260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.465  qpair failed and we were unable to recover it.
00:26:07.465  [2024-12-09 04:16:35.726414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465  [2024-12-09 04:16:35.726462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.465  qpair failed and we were unable to recover it.
00:26:07.465  [2024-12-09 04:16:35.726637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465  [2024-12-09 04:16:35.726669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.465  qpair failed and we were unable to recover it.
00:26:07.465  [2024-12-09 04:16:35.726892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465  [2024-12-09 04:16:35.726945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.465  qpair failed and we were unable to recover it.
00:26:07.465  [2024-12-09 04:16:35.727097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465  [2024-12-09 04:16:35.727127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.465  qpair failed and we were unable to recover it.
00:26:07.465  [2024-12-09 04:16:35.727251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465  [2024-12-09 04:16:35.727291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.465  qpair failed and we were unable to recover it.
00:26:07.465  [2024-12-09 04:16:35.727413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465  [2024-12-09 04:16:35.727443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.465  qpair failed and we were unable to recover it.
00:26:07.465  [2024-12-09 04:16:35.727594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465  [2024-12-09 04:16:35.727624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.465  qpair failed and we were unable to recover it.
00:26:07.465  [2024-12-09 04:16:35.727725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465  [2024-12-09 04:16:35.727753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.465  qpair failed and we were unable to recover it.
00:26:07.465  [2024-12-09 04:16:35.727877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465  [2024-12-09 04:16:35.727907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.465  qpair failed and we were unable to recover it.
00:26:07.465  [2024-12-09 04:16:35.728069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465  [2024-12-09 04:16:35.728118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.465  qpair failed and we were unable to recover it.
00:26:07.465  [2024-12-09 04:16:35.728249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465  [2024-12-09 04:16:35.728296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.465  qpair failed and we were unable to recover it.
00:26:07.465  [2024-12-09 04:16:35.728469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465  [2024-12-09 04:16:35.728499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.465  qpair failed and we were unable to recover it.
00:26:07.465  [2024-12-09 04:16:35.728583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465  [2024-12-09 04:16:35.728613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.465  qpair failed and we were unable to recover it.
00:26:07.466  [2024-12-09 04:16:35.728796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.466  [2024-12-09 04:16:35.728832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.466  qpair failed and we were unable to recover it.
00:26:07.466  [2024-12-09 04:16:35.728999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.466  [2024-12-09 04:16:35.729029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.466  qpair failed and we were unable to recover it.
00:26:07.466  [2024-12-09 04:16:35.729152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.466  [2024-12-09 04:16:35.729183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.466  qpair failed and we were unable to recover it.
00:26:07.466  [2024-12-09 04:16:35.729330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.466  [2024-12-09 04:16:35.729365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.466  qpair failed and we were unable to recover it.
00:26:07.466  [2024-12-09 04:16:35.729508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.466  [2024-12-09 04:16:35.729555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.466  qpair failed and we were unable to recover it.
00:26:07.466  [2024-12-09 04:16:35.729794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.466  [2024-12-09 04:16:35.729824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.466  qpair failed and we were unable to recover it.
00:26:07.466  [2024-12-09 04:16:35.729953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.466  [2024-12-09 04:16:35.729982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.466  qpair failed and we were unable to recover it.
00:26:07.466  [2024-12-09 04:16:35.730139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.466  [2024-12-09 04:16:35.730187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.466  qpair failed and we were unable to recover it.
00:26:07.466  [2024-12-09 04:16:35.730302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.466  [2024-12-09 04:16:35.730344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.466  qpair failed and we were unable to recover it.
00:26:07.466  [2024-12-09 04:16:35.730448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.466  [2024-12-09 04:16:35.730477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.466  qpair failed and we were unable to recover it.
00:26:07.466  [2024-12-09 04:16:35.730599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.466  [2024-12-09 04:16:35.730629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.466  qpair failed and we were unable to recover it.
00:26:07.466  [2024-12-09 04:16:35.730771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.466  [2024-12-09 04:16:35.730816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.466  qpair failed and we were unable to recover it.
00:26:07.466  [2024-12-09 04:16:35.730937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.466  [2024-12-09 04:16:35.730977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.466  qpair failed and we were unable to recover it.
00:26:07.466  [2024-12-09 04:16:35.731137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.466  [2024-12-09 04:16:35.731176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.466  qpair failed and we were unable to recover it.
00:26:07.466  [2024-12-09 04:16:35.731315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.466  [2024-12-09 04:16:35.731349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.466  qpair failed and we were unable to recover it.
00:26:07.466  [2024-12-09 04:16:35.731475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.466  [2024-12-09 04:16:35.731505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.466  qpair failed and we were unable to recover it.
00:26:07.466  [2024-12-09 04:16:35.731635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.466  [2024-12-09 04:16:35.731666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.466  qpair failed and we were unable to recover it.
00:26:07.466  [2024-12-09 04:16:35.731787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.466  [2024-12-09 04:16:35.731817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.466  qpair failed and we were unable to recover it.
00:26:07.466  [2024-12-09 04:16:35.731978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.466  [2024-12-09 04:16:35.732009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.466  qpair failed and we were unable to recover it.
00:26:07.466  [2024-12-09 04:16:35.732136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.466  [2024-12-09 04:16:35.732169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.466  qpair failed and we were unable to recover it.
00:26:07.466  [... log trimmed: previous 3 lines repeated 25 more times for tqpair=0x7fe5b8000b90 ...]
00:26:07.467  [2024-12-09 04:16:35.736210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.467  [2024-12-09 04:16:35.736250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.467  qpair failed and we were unable to recover it.
00:26:07.467  [... log trimmed: previous 3 lines repeated 17 more times for tqpair=0x1818fa0 ...]
00:26:07.467  [2024-12-09 04:16:35.739253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.467  [2024-12-09 04:16:35.739306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.467  qpair failed and we were unable to recover it.
00:26:07.467  [... log trimmed: previous 3 lines repeated 11 more times for tqpair=0x7fe5b4000b90 ...]
00:26:07.468  [2024-12-09 04:16:35.741311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.468  [2024-12-09 04:16:35.741344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.468  qpair failed and we were unable to recover it.
00:26:07.468  [... log trimmed: previous 3 lines repeated 19 more times for tqpair=0x1818fa0 ...]
00:26:07.468  [2024-12-09 04:16:35.744651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.468  [2024-12-09 04:16:35.744702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.468  qpair failed and we were unable to recover it.
00:26:07.468  [... log trimmed: previous 3 lines repeated 18 more times for tqpair=0x7fe5b4000b90 ...]
00:26:07.468  [2024-12-09 04:16:35.747784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.468  [2024-12-09 04:16:35.747828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.468  qpair failed and we were unable to recover it.
00:26:07.468  [... log trimmed: previous 3 lines repeated 6 more times for tqpair=0x7fe5c0000b90 ...]
00:26:07.469  [2024-12-09 04:16:35.748823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.469  [2024-12-09 04:16:35.748850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.469  qpair failed and we were unable to recover it.
00:26:07.469  [2024-12-09 04:16:35.748975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.469  [2024-12-09 04:16:35.749005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.469  qpair failed and we were unable to recover it.
00:26:07.469  [2024-12-09 04:16:35.749157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.469  [2024-12-09 04:16:35.749186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.469  qpair failed and we were unable to recover it.
00:26:07.469  [2024-12-09 04:16:35.749292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.469  [2024-12-09 04:16:35.749325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.469  qpair failed and we were unable to recover it.
00:26:07.469  [2024-12-09 04:16:35.749448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.469  [2024-12-09 04:16:35.749477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.469  qpair failed and we were unable to recover it.
00:26:07.469  [2024-12-09 04:16:35.749600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.469  [2024-12-09 04:16:35.749633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.469  qpair failed and we were unable to recover it.
00:26:07.469  [2024-12-09 04:16:35.749779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.469  [2024-12-09 04:16:35.749808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.469  qpair failed and we were unable to recover it.
00:26:07.469  [2024-12-09 04:16:35.749936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.469  [2024-12-09 04:16:35.749966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.469  qpair failed and we were unable to recover it.
00:26:07.469  [2024-12-09 04:16:35.750065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.469  [2024-12-09 04:16:35.750092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.469  qpair failed and we were unable to recover it.
00:26:07.469  [2024-12-09 04:16:35.750231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.469  [2024-12-09 04:16:35.750260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.469  qpair failed and we were unable to recover it.
00:26:07.469  [2024-12-09 04:16:35.750395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.469  [2024-12-09 04:16:35.750429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.469  qpair failed and we were unable to recover it.
00:26:07.469  [2024-12-09 04:16:35.750520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.469  [2024-12-09 04:16:35.750564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.469  qpair failed and we were unable to recover it.
00:26:07.469  [2024-12-09 04:16:35.750691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.469  [2024-12-09 04:16:35.750720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.469  qpair failed and we were unable to recover it.
00:26:07.469  [2024-12-09 04:16:35.750841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.469  [2024-12-09 04:16:35.750872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.469  qpair failed and we were unable to recover it.
00:26:07.469  [2024-12-09 04:16:35.750995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.469  [2024-12-09 04:16:35.751025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.469  qpair failed and we were unable to recover it.
00:26:07.469  [2024-12-09 04:16:35.751154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.469  [2024-12-09 04:16:35.751183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.469  qpair failed and we were unable to recover it.
00:26:07.469  [2024-12-09 04:16:35.751294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.469  [2024-12-09 04:16:35.751325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.469  qpair failed and we were unable to recover it.
00:26:07.469  [2024-12-09 04:16:35.751486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.469  [2024-12-09 04:16:35.751516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.469  qpair failed and we were unable to recover it.
00:26:07.469  [2024-12-09 04:16:35.751609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.469  [2024-12-09 04:16:35.751637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.469  qpair failed and we were unable to recover it.
00:26:07.469  [2024-12-09 04:16:35.751748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.469  [2024-12-09 04:16:35.751776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.469  qpair failed and we were unable to recover it.
00:26:07.469  [2024-12-09 04:16:35.751875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.469  [2024-12-09 04:16:35.751905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.469  qpair failed and we were unable to recover it.
00:26:07.469  [2024-12-09 04:16:35.752054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.469  [2024-12-09 04:16:35.752084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.469  qpair failed and we were unable to recover it.
00:26:07.469  [2024-12-09 04:16:35.752177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.469  [2024-12-09 04:16:35.752205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.469  qpair failed and we were unable to recover it.
00:26:07.469  [2024-12-09 04:16:35.752354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.469  [2024-12-09 04:16:35.752382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.469  qpair failed and we were unable to recover it.
00:26:07.469  [2024-12-09 04:16:35.752503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.469  [2024-12-09 04:16:35.752542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.469  qpair failed and we were unable to recover it.
00:26:07.469  [2024-12-09 04:16:35.752635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.469  [2024-12-09 04:16:35.752663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.469  qpair failed and we were unable to recover it.
00:26:07.469  [2024-12-09 04:16:35.752755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.469  [2024-12-09 04:16:35.752779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.469  qpair failed and we were unable to recover it.
00:26:07.469  [2024-12-09 04:16:35.752886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.469  [2024-12-09 04:16:35.752912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.469  qpair failed and we were unable to recover it.
00:26:07.469  [2024-12-09 04:16:35.753007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.469  [2024-12-09 04:16:35.753032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.469  qpair failed and we were unable to recover it.
00:26:07.469  [2024-12-09 04:16:35.753139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.469  [2024-12-09 04:16:35.753165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.469  qpair failed and we were unable to recover it.
00:26:07.469  [2024-12-09 04:16:35.753250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.469  [2024-12-09 04:16:35.753283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.469  qpair failed and we were unable to recover it.
00:26:07.469  [2024-12-09 04:16:35.753381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.469  [2024-12-09 04:16:35.753412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.469  qpair failed and we were unable to recover it.
00:26:07.469  [2024-12-09 04:16:35.753494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.469  [2024-12-09 04:16:35.753521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.469  qpair failed and we were unable to recover it.
00:26:07.469  [2024-12-09 04:16:35.753638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.469  [2024-12-09 04:16:35.753664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.469  qpair failed and we were unable to recover it.
00:26:07.469  [2024-12-09 04:16:35.753778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.469  [2024-12-09 04:16:35.753805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.469  qpair failed and we were unable to recover it.
00:26:07.469  [2024-12-09 04:16:35.753911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.469  [2024-12-09 04:16:35.753943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.469  qpair failed and we were unable to recover it.
00:26:07.469  [2024-12-09 04:16:35.754050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.469  [2024-12-09 04:16:35.754077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.469  qpair failed and we were unable to recover it.
00:26:07.469  [2024-12-09 04:16:35.754199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.469  [2024-12-09 04:16:35.754228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.469  qpair failed and we were unable to recover it.
00:26:07.469  [2024-12-09 04:16:35.754327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.754354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.754494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.754521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.754635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.754663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.754777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.754809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.754907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.754935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.755049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.755076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.755157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.755182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.755299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.755327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.755411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.755437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.755551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.755579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.755663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.755695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.755779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.755805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.755919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.755946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.756060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.756088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.756203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.756230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.756361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.756389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.756510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.756544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.756684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.756714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.756828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.756856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.756999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.757026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.757141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.757168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.757292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.757326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.757451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.757478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.757598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.757627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.757719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.757748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.757865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.757892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.757976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.758002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.758116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.758146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.758266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.758301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.758446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.758472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.758613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.758640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.758757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.758783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.758890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.758917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.759030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.759056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.759175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.759202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.759320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.759347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.759433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.759457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.759593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.759623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.759705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.759729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.759812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.759837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.759920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.759945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.760026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.760051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.760160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.760186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.760308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.760334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.470  [2024-12-09 04:16:35.760441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470  [2024-12-09 04:16:35.760467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.470  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.760572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.760597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.760733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.760759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.760874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.760900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.760984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.761008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.761083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.761107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.761221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.761247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.761400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.761427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.761566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.761592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.761677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.761702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.761794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.761818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.761904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.761929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.762042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.762068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.762155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.762180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.762293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.762319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.762394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.762419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.762538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.762563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.762679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.762706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.762820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.762846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.763003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.763045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.763179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.763209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.763330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.763359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.763470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.763496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.763612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.763639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.763747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.763775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.763898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.763925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.764009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.764034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.764151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.764178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.764253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.764285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.764400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.764426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.764522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.764550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.764639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.764666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.764781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.764809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.764951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.764983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.765126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.765153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.765242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.765268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.765365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.765396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.765488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.765513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.765662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.765689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.765832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.765859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.765942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.765969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.766058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.766089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471  qpair failed and we were unable to recover it.
00:26:07.471  [2024-12-09 04:16:35.766199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471  [2024-12-09 04:16:35.766227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.472  qpair failed and we were unable to recover it.
00:26:07.472  [2024-12-09 04:16:35.766357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472  [2024-12-09 04:16:35.766391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.472  qpair failed and we were unable to recover it.
00:26:07.472  [2024-12-09 04:16:35.766476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472  [2024-12-09 04:16:35.766501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.472  qpair failed and we were unable to recover it.
00:26:07.472  [2024-12-09 04:16:35.766645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472  [2024-12-09 04:16:35.766672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.472  qpair failed and we were unable to recover it.
00:26:07.472  [2024-12-09 04:16:35.766756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472  [2024-12-09 04:16:35.766782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.472  qpair failed and we were unable to recover it.
00:26:07.472  [2024-12-09 04:16:35.766874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472  [2024-12-09 04:16:35.766904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472  qpair failed and we were unable to recover it.
00:26:07.472  [2024-12-09 04:16:35.767012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472  [2024-12-09 04:16:35.767039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472  qpair failed and we were unable to recover it.
00:26:07.472  [2024-12-09 04:16:35.767155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472  [2024-12-09 04:16:35.767182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472  qpair failed and we were unable to recover it.
00:26:07.472  [2024-12-09 04:16:35.767289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472  [2024-12-09 04:16:35.767315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472  qpair failed and we were unable to recover it.
00:26:07.472  [2024-12-09 04:16:35.767393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472  [2024-12-09 04:16:35.767417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472  qpair failed and we were unable to recover it.
00:26:07.472  [2024-12-09 04:16:35.767555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472  [2024-12-09 04:16:35.767580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472  qpair failed and we were unable to recover it.
00:26:07.472  [2024-12-09 04:16:35.767719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472  [2024-12-09 04:16:35.767746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472  qpair failed and we were unable to recover it.
00:26:07.472  [2024-12-09 04:16:35.767831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472  [2024-12-09 04:16:35.767855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472  qpair failed and we were unable to recover it.
00:26:07.472  [2024-12-09 04:16:35.767973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472  [2024-12-09 04:16:35.767998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472  qpair failed and we were unable to recover it.
00:26:07.472  [2024-12-09 04:16:35.768079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472  [2024-12-09 04:16:35.768103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472  qpair failed and we were unable to recover it.
00:26:07.472  [2024-12-09 04:16:35.768216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472  [2024-12-09 04:16:35.768242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472  qpair failed and we were unable to recover it.
00:26:07.472  [2024-12-09 04:16:35.768362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472  [2024-12-09 04:16:35.768388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472  qpair failed and we were unable to recover it.
00:26:07.472  [2024-12-09 04:16:35.768467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472  [2024-12-09 04:16:35.768492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472  qpair failed and we were unable to recover it.
00:26:07.472  [2024-12-09 04:16:35.768604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472  [2024-12-09 04:16:35.768630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472  qpair failed and we were unable to recover it.
00:26:07.472  [2024-12-09 04:16:35.768737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472  [2024-12-09 04:16:35.768764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472  qpair failed and we were unable to recover it.
00:26:07.472  [2024-12-09 04:16:35.768871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472  [2024-12-09 04:16:35.768897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472  qpair failed and we were unable to recover it.
00:26:07.472  [2024-12-09 04:16:35.769015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472  [2024-12-09 04:16:35.769042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472  qpair failed and we were unable to recover it.
00:26:07.472  [2024-12-09 04:16:35.769184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472  [2024-12-09 04:16:35.769211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472  qpair failed and we were unable to recover it.
00:26:07.472  [2024-12-09 04:16:35.769300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472  [2024-12-09 04:16:35.769325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472  qpair failed and we were unable to recover it.
00:26:07.472  [2024-12-09 04:16:35.769440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472  [2024-12-09 04:16:35.769467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472  qpair failed and we were unable to recover it.
00:26:07.472  [2024-12-09 04:16:35.769554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472  [2024-12-09 04:16:35.769578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472  qpair failed and we were unable to recover it.
00:26:07.472  [2024-12-09 04:16:35.769689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472  [2024-12-09 04:16:35.769715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472  qpair failed and we were unable to recover it.
00:26:07.472  [2024-12-09 04:16:35.769830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472  [2024-12-09 04:16:35.769856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472  qpair failed and we were unable to recover it.
00:26:07.472  [2024-12-09 04:16:35.769940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472  [2024-12-09 04:16:35.769965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472  qpair failed and we were unable to recover it.
00:26:07.472  [2024-12-09 04:16:35.770079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472  [2024-12-09 04:16:35.770105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472  qpair failed and we were unable to recover it.
00:26:07.473  [2024-12-09 04:16:35.770211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.473  [2024-12-09 04:16:35.770238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.473  qpair failed and we were unable to recover it.
00:26:07.473  [2024-12-09 04:16:35.770384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.473  [2024-12-09 04:16:35.770415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.473  qpair failed and we were unable to recover it.
00:26:07.473  [2024-12-09 04:16:35.770498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.473  [2024-12-09 04:16:35.770523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.473  qpair failed and we were unable to recover it.
00:26:07.473  [2024-12-09 04:16:35.770700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.473  [2024-12-09 04:16:35.770767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.473  qpair failed and we were unable to recover it.
00:26:07.473  [2024-12-09 04:16:35.771035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.473  [2024-12-09 04:16:35.771066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.473  qpair failed and we were unable to recover it.
00:26:07.473  [2024-12-09 04:16:35.771318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.473  [2024-12-09 04:16:35.771346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.473  qpair failed and we were unable to recover it.
00:26:07.473  [2024-12-09 04:16:35.771444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.473  [2024-12-09 04:16:35.771470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.473  qpair failed and we were unable to recover it.
00:26:07.473  [2024-12-09 04:16:35.771590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.473  [2024-12-09 04:16:35.771617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.473  qpair failed and we were unable to recover it.
00:26:07.473  [2024-12-09 04:16:35.771757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.473  [2024-12-09 04:16:35.771784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.473  qpair failed and we were unable to recover it.
00:26:07.473  [2024-12-09 04:16:35.771930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.473  [2024-12-09 04:16:35.771996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.473  qpair failed and we were unable to recover it.
00:26:07.473  [2024-12-09 04:16:35.772251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.473  [2024-12-09 04:16:35.772297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.473  qpair failed and we were unable to recover it.
00:26:07.473  [2024-12-09 04:16:35.772407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.473  [2024-12-09 04:16:35.772432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.473  qpair failed and we were unable to recover it.
00:26:07.473  [2024-12-09 04:16:35.772570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.473  [2024-12-09 04:16:35.772596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.473  qpair failed and we were unable to recover it.
00:26:07.473  [2024-12-09 04:16:35.772733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.473  [2024-12-09 04:16:35.772776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.473  qpair failed and we were unable to recover it.
00:26:07.473  [2024-12-09 04:16:35.772890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.473  [2024-12-09 04:16:35.772962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.473  qpair failed and we were unable to recover it.
00:26:07.473  [2024-12-09 04:16:35.773231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.473  [2024-12-09 04:16:35.773260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.473  qpair failed and we were unable to recover it.
00:26:07.473  [2024-12-09 04:16:35.773427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.473  [2024-12-09 04:16:35.773456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.473  qpair failed and we were unable to recover it.
00:26:07.473  [2024-12-09 04:16:35.773572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.473  [2024-12-09 04:16:35.773607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.473  qpair failed and we were unable to recover it.
00:26:07.473  [2024-12-09 04:16:35.773873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.473  [2024-12-09 04:16:35.773906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.473  qpair failed and we were unable to recover it.
00:26:07.473  [2024-12-09 04:16:35.774217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.473  [2024-12-09 04:16:35.774323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.473  qpair failed and we were unable to recover it.
00:26:07.473  [... connect() failed (errno = 111) and sock connection error of tqpair=0x7fe5b4000b90 repeated 27 more times ...]
00:26:07.474  [2024-12-09 04:16:35.780501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.474  [2024-12-09 04:16:35.780532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.474  qpair failed and we were unable to recover it.
00:26:07.474  [... connect() failed (errno = 111) and sock connection error of tqpair=0x7fe5c0000b90 repeated 51 more times ...]
00:26:07.475  [2024-12-09 04:16:35.791208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.475  [2024-12-09 04:16:35.791249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.475  qpair failed and we were unable to recover it.
00:26:07.475  [... connect() failed (errno = 111) and sock connection error of tqpair=0x7fe5b4000b90 repeated 15 more times ...]
00:26:07.476  [2024-12-09 04:16:35.794382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.476  [2024-12-09 04:16:35.794424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.476  qpair failed and we were unable to recover it.
00:26:07.476  [... connect() failed (errno = 111) and sock connection error of tqpair=0x7fe5c0000b90 repeated 6 more times ...]
00:26:07.476  [2024-12-09 04:16:35.795869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.476  [2024-12-09 04:16:35.795907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.476  qpair failed and we were unable to recover it.
00:26:07.476  [2024-12-09 04:16:35.796074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.476  [2024-12-09 04:16:35.796130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.476  qpair failed and we were unable to recover it.
00:26:07.476  [2024-12-09 04:16:35.796315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.476  [2024-12-09 04:16:35.796356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.476  qpair failed and we were unable to recover it.
00:26:07.476  [2024-12-09 04:16:35.796470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.476  [2024-12-09 04:16:35.796510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.476  qpair failed and we were unable to recover it.
00:26:07.476  [2024-12-09 04:16:35.796774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.476  [2024-12-09 04:16:35.796809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.476  qpair failed and we were unable to recover it.
00:26:07.476  [2024-12-09 04:16:35.796950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.476  [2024-12-09 04:16:35.796985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.476  qpair failed and we were unable to recover it.
00:26:07.476  [2024-12-09 04:16:35.797148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.476  [2024-12-09 04:16:35.797188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.476  qpair failed and we were unable to recover it.
00:26:07.476  [2024-12-09 04:16:35.797444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.476  [2024-12-09 04:16:35.797486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.476  qpair failed and we were unable to recover it.
00:26:07.476  [2024-12-09 04:16:35.797657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.476  [2024-12-09 04:16:35.797696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.476  qpair failed and we were unable to recover it.
00:26:07.476  [2024-12-09 04:16:35.797901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.476  [2024-12-09 04:16:35.797969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.476  qpair failed and we were unable to recover it.
00:26:07.476  [2024-12-09 04:16:35.798226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.476  [2024-12-09 04:16:35.798262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.476  qpair failed and we were unable to recover it.
00:26:07.476  [2024-12-09 04:16:35.798420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.476  [2024-12-09 04:16:35.798457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.476  qpair failed and we were unable to recover it.
00:26:07.476  [2024-12-09 04:16:35.798558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.476  [2024-12-09 04:16:35.798591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.476  qpair failed and we were unable to recover it.
00:26:07.476  [2024-12-09 04:16:35.798685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.476  [2024-12-09 04:16:35.798714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.476  qpair failed and we were unable to recover it.
00:26:07.476  [2024-12-09 04:16:35.798864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.476  [2024-12-09 04:16:35.798893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.476  qpair failed and we were unable to recover it.
00:26:07.476  [2024-12-09 04:16:35.799050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.476  [2024-12-09 04:16:35.799094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.476  qpair failed and we were unable to recover it.
00:26:07.476  [2024-12-09 04:16:35.799243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.476  [2024-12-09 04:16:35.799284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.476  qpair failed and we were unable to recover it.
00:26:07.476  [2024-12-09 04:16:35.799450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.476  [2024-12-09 04:16:35.799484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.476  qpair failed and we were unable to recover it.
00:26:07.476  [2024-12-09 04:16:35.799660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.476  [2024-12-09 04:16:35.799730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.476  qpair failed and we were unable to recover it.
00:26:07.476  [2024-12-09 04:16:35.799945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.476  [2024-12-09 04:16:35.800012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.476  qpair failed and we were unable to recover it.
00:26:07.476  [2024-12-09 04:16:35.800258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.476  [2024-12-09 04:16:35.800301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.476  qpair failed and we were unable to recover it.
00:26:07.476  [2024-12-09 04:16:35.800435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.476  [2024-12-09 04:16:35.800468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.476  qpair failed and we were unable to recover it.
00:26:07.477  [2024-12-09 04:16:35.800622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.477  [2024-12-09 04:16:35.800657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.477  qpair failed and we were unable to recover it.
00:26:07.477  [2024-12-09 04:16:35.800816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.477  [2024-12-09 04:16:35.800872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.477  qpair failed and we were unable to recover it.
00:26:07.477  [2024-12-09 04:16:35.800988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.477  [2024-12-09 04:16:35.801023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.477  qpair failed and we were unable to recover it.
00:26:07.477  [2024-12-09 04:16:35.801151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.477  [2024-12-09 04:16:35.801183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.477  qpair failed and we were unable to recover it.
00:26:07.477  [2024-12-09 04:16:35.801348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.477  [2024-12-09 04:16:35.801413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.477  qpair failed and we were unable to recover it.
00:26:07.477  [2024-12-09 04:16:35.801505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.477  [2024-12-09 04:16:35.801534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.477  qpair failed and we were unable to recover it.
00:26:07.477  [2024-12-09 04:16:35.801627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.477  [2024-12-09 04:16:35.801656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.477  qpair failed and we were unable to recover it.
00:26:07.477  [2024-12-09 04:16:35.801798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.477  [2024-12-09 04:16:35.801830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.477  qpair failed and we were unable to recover it.
00:26:07.477  [2024-12-09 04:16:35.801954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.477  [2024-12-09 04:16:35.801986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.477  qpair failed and we were unable to recover it.
00:26:07.477  [2024-12-09 04:16:35.802107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1826f30 is same with the state(6) to be set
00:26:07.477  [2024-12-09 04:16:35.802315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.477  [2024-12-09 04:16:35.802360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.477  qpair failed and we were unable to recover it.
00:26:07.477  [2024-12-09 04:16:35.802516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.477  [2024-12-09 04:16:35.802547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.477  qpair failed and we were unable to recover it.
00:26:07.477  [2024-12-09 04:16:35.802636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.477  [2024-12-09 04:16:35.802665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.477  qpair failed and we were unable to recover it.
00:26:07.477  [2024-12-09 04:16:35.802900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.477  [2024-12-09 04:16:35.802938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.477  qpair failed and we were unable to recover it.
00:26:07.477  [2024-12-09 04:16:35.803081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.477  [2024-12-09 04:16:35.803117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.477  qpair failed and we were unable to recover it.
00:26:07.477  [2024-12-09 04:16:35.803283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.477  [2024-12-09 04:16:35.803318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.477  qpair failed and we were unable to recover it.
00:26:07.477  [2024-12-09 04:16:35.803427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.477  [2024-12-09 04:16:35.803463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.477  qpair failed and we were unable to recover it.
00:26:07.477  [2024-12-09 04:16:35.803594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.477  [2024-12-09 04:16:35.803660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.477  qpair failed and we were unable to recover it.
00:26:07.477  [2024-12-09 04:16:35.803885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.477  [2024-12-09 04:16:35.803966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.477  qpair failed and we were unable to recover it.
00:26:07.477  [2024-12-09 04:16:35.804158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.477  [2024-12-09 04:16:35.804190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.477  qpair failed and we were unable to recover it.
00:26:07.477  [2024-12-09 04:16:35.804331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.477  [2024-12-09 04:16:35.804364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.477  qpair failed and we were unable to recover it.
00:26:07.477  [2024-12-09 04:16:35.804465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.477  [2024-12-09 04:16:35.804502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.477  qpair failed and we were unable to recover it.
00:26:07.477  [2024-12-09 04:16:35.804657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.477  [2024-12-09 04:16:35.804693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.477  qpair failed and we were unable to recover it.
00:26:07.477  [2024-12-09 04:16:35.804911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.477  [2024-12-09 04:16:35.804946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.477  qpair failed and we were unable to recover it.
00:26:07.477  [2024-12-09 04:16:35.805067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.477  [2024-12-09 04:16:35.805097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.477  qpair failed and we were unable to recover it.
00:26:07.477  [2024-12-09 04:16:35.805264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.477  [2024-12-09 04:16:35.805307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.477  qpair failed and we were unable to recover it.
00:26:07.477  [2024-12-09 04:16:35.805409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.477  [2024-12-09 04:16:35.805463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.477  qpair failed and we were unable to recover it.
00:26:07.477  [2024-12-09 04:16:35.805583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.477  [2024-12-09 04:16:35.805612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.477  qpair failed and we were unable to recover it.
00:26:07.477  [2024-12-09 04:16:35.805718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.477  [2024-12-09 04:16:35.805753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.477  qpair failed and we were unable to recover it.
00:26:07.477  [2024-12-09 04:16:35.805918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.477  [2024-12-09 04:16:35.806031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.477  qpair failed and we were unable to recover it.
00:26:07.477  [2024-12-09 04:16:35.806305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.477  [2024-12-09 04:16:35.806342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.477  qpair failed and we were unable to recover it.
00:26:07.477  [2024-12-09 04:16:35.806474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.477  [2024-12-09 04:16:35.806514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.477  qpair failed and we were unable to recover it.
00:26:07.477  [2024-12-09 04:16:35.806759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.477  [2024-12-09 04:16:35.806817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.477  qpair failed and we were unable to recover it.
00:26:07.477  [2024-12-09 04:16:35.806996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.477  [2024-12-09 04:16:35.807025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.477  qpair failed and we were unable to recover it.
00:26:07.477  [2024-12-09 04:16:35.807146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.477  [2024-12-09 04:16:35.807175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.477  qpair failed and we were unable to recover it.
00:26:07.477  [2024-12-09 04:16:35.807322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.477  [2024-12-09 04:16:35.807361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.477  qpair failed and we were unable to recover it.
00:26:07.477  [2024-12-09 04:16:35.807537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.477  [2024-12-09 04:16:35.807573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.477  qpair failed and we were unable to recover it.
00:26:07.478  [2024-12-09 04:16:35.807719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.478  [2024-12-09 04:16:35.807772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.478  qpair failed and we were unable to recover it.
00:26:07.478  [2024-12-09 04:16:35.807896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.478  [2024-12-09 04:16:35.807925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.478  qpair failed and we were unable to recover it.
00:26:07.478  [2024-12-09 04:16:35.808107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.478  [2024-12-09 04:16:35.808151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.478  qpair failed and we were unable to recover it.
00:26:07.478  [2024-12-09 04:16:35.808248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.478  [2024-12-09 04:16:35.808285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.478  qpair failed and we were unable to recover it.
00:26:07.478  [2024-12-09 04:16:35.808388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.478  [2024-12-09 04:16:35.808417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.478  qpair failed and we were unable to recover it.
00:26:07.478  [2024-12-09 04:16:35.808586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.478  [2024-12-09 04:16:35.808619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.478  qpair failed and we were unable to recover it.
00:26:07.478  [2024-12-09 04:16:35.808793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.478  [2024-12-09 04:16:35.808861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.478  qpair failed and we were unable to recover it.
00:26:07.478  [2024-12-09 04:16:35.809067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.478  [2024-12-09 04:16:35.809134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.478  qpair failed and we were unable to recover it.
00:26:07.478  [2024-12-09 04:16:35.809382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.478  [2024-12-09 04:16:35.809426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.478  qpair failed and we were unable to recover it.
00:26:07.478  [2024-12-09 04:16:35.809544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.478  [2024-12-09 04:16:35.809586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.478  qpair failed and we were unable to recover it.
00:26:07.478  [2024-12-09 04:16:35.809761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.478  [2024-12-09 04:16:35.809799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.478  qpair failed and we were unable to recover it.
00:26:07.478  [2024-12-09 04:16:35.809939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.478  [2024-12-09 04:16:35.809970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.478  qpair failed and we were unable to recover it.
00:26:07.478  [2024-12-09 04:16:35.810084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.478  [2024-12-09 04:16:35.810117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.478  qpair failed and we were unable to recover it.
00:26:07.478  [2024-12-09 04:16:35.810241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.478  [2024-12-09 04:16:35.810283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.478  qpair failed and we were unable to recover it.
00:26:07.478  [2024-12-09 04:16:35.810418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.478  [2024-12-09 04:16:35.810456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.478  qpair failed and we were unable to recover it.
00:26:07.478  [2024-12-09 04:16:35.810582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.478  [2024-12-09 04:16:35.810629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.478  qpair failed and we were unable to recover it.
00:26:07.478  [2024-12-09 04:16:35.810721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.478  [2024-12-09 04:16:35.810749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.478  qpair failed and we were unable to recover it.
00:26:07.478  [2024-12-09 04:16:35.810873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.478  [2024-12-09 04:16:35.810903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.478  qpair failed and we were unable to recover it.
00:26:07.478  [2024-12-09 04:16:35.811042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.478  [2024-12-09 04:16:35.811089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.478  qpair failed and we were unable to recover it.
00:26:07.478  [2024-12-09 04:16:35.811233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.478  [2024-12-09 04:16:35.811262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.478  qpair failed and we were unable to recover it.
00:26:07.478  [2024-12-09 04:16:35.811416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.478  [2024-12-09 04:16:35.811449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.478  qpair failed and we were unable to recover it.
00:26:07.478  [2024-12-09 04:16:35.811684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.478  [2024-12-09 04:16:35.811752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.478  qpair failed and we were unable to recover it.
00:26:07.478  [2024-12-09 04:16:35.812047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.478  [2024-12-09 04:16:35.812113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.478  qpair failed and we were unable to recover it.
00:26:07.478  [2024-12-09 04:16:35.812359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.478  [2024-12-09 04:16:35.812393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.478  qpair failed and we were unable to recover it.
00:26:07.478  [2024-12-09 04:16:35.812571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.478  [2024-12-09 04:16:35.812607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.478  qpair failed and we were unable to recover it.
00:26:07.478  [2024-12-09 04:16:35.812710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.478  [2024-12-09 04:16:35.812757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.478  qpair failed and we were unable to recover it.
00:26:07.478  [2024-12-09 04:16:35.812879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.478  [2024-12-09 04:16:35.812908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.478  qpair failed and we were unable to recover it.
00:26:07.478  [2024-12-09 04:16:35.813102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.478  [2024-12-09 04:16:35.813167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.478  qpair failed and we were unable to recover it.
00:26:07.478  [2024-12-09 04:16:35.813400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.478  [2024-12-09 04:16:35.813435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.478  qpair failed and we were unable to recover it.
00:26:07.478  [2024-12-09 04:16:35.813531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.478  [2024-12-09 04:16:35.813559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.478  qpair failed and we were unable to recover it.
00:26:07.478  [2024-12-09 04:16:35.813653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.478  [2024-12-09 04:16:35.813685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.478  qpair failed and we were unable to recover it.
00:26:07.478  [2024-12-09 04:16:35.813806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.478  [2024-12-09 04:16:35.813835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.478  qpair failed and we were unable to recover it.
00:26:07.478  [2024-12-09 04:16:35.813987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.478  [2024-12-09 04:16:35.814023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.478  qpair failed and we were unable to recover it.
00:26:07.478  [2024-12-09 04:16:35.814291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.478  [2024-12-09 04:16:35.814324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.478  qpair failed and we were unable to recover it.
00:26:07.478  [2024-12-09 04:16:35.814492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.478  [2024-12-09 04:16:35.814533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.478  qpair failed and we were unable to recover it.
00:26:07.478  [2024-12-09 04:16:35.814676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.478  [2024-12-09 04:16:35.814778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.479  qpair failed and we were unable to recover it.
00:26:07.479  [2024-12-09 04:16:35.815083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.479  [2024-12-09 04:16:35.815161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.479  qpair failed and we were unable to recover it.
00:26:07.479  [2024-12-09 04:16:35.815380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.479  [2024-12-09 04:16:35.815427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.479  qpair failed and we were unable to recover it.
00:26:07.479  [2024-12-09 04:16:35.815572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.479  [2024-12-09 04:16:35.815601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.479  qpair failed and we were unable to recover it.
00:26:07.479  [2024-12-09 04:16:35.815732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.479  [2024-12-09 04:16:35.815764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.479  qpair failed and we were unable to recover it.
00:26:07.479  [2024-12-09 04:16:35.815909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.479  [2024-12-09 04:16:35.815940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.479  qpair failed and we were unable to recover it.
00:26:07.479  [2024-12-09 04:16:35.816120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.479  [2024-12-09 04:16:35.816196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.479  qpair failed and we were unable to recover it.
00:26:07.479  [2024-12-09 04:16:35.816384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.479  [2024-12-09 04:16:35.816418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.479  qpair failed and we were unable to recover it.
00:26:07.479  [2024-12-09 04:16:35.816533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.479  [2024-12-09 04:16:35.816565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.479  qpair failed and we were unable to recover it.
00:26:07.479  [2024-12-09 04:16:35.816753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.479  [2024-12-09 04:16:35.816788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.479  qpair failed and we were unable to recover it.
00:26:07.479  [2024-12-09 04:16:35.816966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.479  [2024-12-09 04:16:35.817004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.479  qpair failed and we were unable to recover it.
00:26:07.479  [2024-12-09 04:16:35.817125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.479  [2024-12-09 04:16:35.817158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.479  qpair failed and we were unable to recover it.
00:26:07.479  [2024-12-09 04:16:35.817260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.479  [2024-12-09 04:16:35.817310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.479  qpair failed and we were unable to recover it.
00:26:07.479  [2024-12-09 04:16:35.817456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.479  [2024-12-09 04:16:35.817489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.479  qpair failed and we were unable to recover it.
00:26:07.479  [2024-12-09 04:16:35.817601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.479  [2024-12-09 04:16:35.817637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.479  qpair failed and we were unable to recover it.
00:26:07.479  [2024-12-09 04:16:35.817852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.479  [2024-12-09 04:16:35.817931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.479  qpair failed and we were unable to recover it.
00:26:07.479  [2024-12-09 04:16:35.818159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.479  [2024-12-09 04:16:35.818192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.479  qpair failed and we were unable to recover it.
00:26:07.479  [2024-12-09 04:16:35.818326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.479  [2024-12-09 04:16:35.818358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.479  qpair failed and we were unable to recover it.
00:26:07.479  [2024-12-09 04:16:35.818496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.479  [2024-12-09 04:16:35.818528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.479  qpair failed and we were unable to recover it.
00:26:07.479  [2024-12-09 04:16:35.818645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.479  [2024-12-09 04:16:35.818686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.479  qpair failed and we were unable to recover it.
00:26:07.479  [2024-12-09 04:16:35.818972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.479  [2024-12-09 04:16:35.819040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.479  qpair failed and we were unable to recover it.
00:26:07.479  [2024-12-09 04:16:35.819312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.479  [2024-12-09 04:16:35.819366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.479  qpair failed and we were unable to recover it.
00:26:07.479  [2024-12-09 04:16:35.819525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.479  [2024-12-09 04:16:35.819558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.479  qpair failed and we were unable to recover it.
00:26:07.479  [2024-12-09 04:16:35.819691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.479  [2024-12-09 04:16:35.819724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.479  qpair failed and we were unable to recover it.
00:26:07.479  [2024-12-09 04:16:35.819932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.479  [2024-12-09 04:16:35.819964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.479  qpair failed and we were unable to recover it.
00:26:07.479  [2024-12-09 04:16:35.820095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.479  [2024-12-09 04:16:35.820127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.479  qpair failed and we were unable to recover it.
00:26:07.479  [2024-12-09 04:16:35.820289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.479  [2024-12-09 04:16:35.820338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.479  qpair failed and we were unable to recover it.
00:26:07.479  [2024-12-09 04:16:35.820454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.479  [2024-12-09 04:16:35.820488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.479  qpair failed and we were unable to recover it.
00:26:07.479  [2024-12-09 04:16:35.820673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.479  [2024-12-09 04:16:35.820739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.479  qpair failed and we were unable to recover it.
00:26:07.479  [2024-12-09 04:16:35.821008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.479  [2024-12-09 04:16:35.821077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.479  qpair failed and we were unable to recover it.
00:26:07.479  [2024-12-09 04:16:35.821335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.479  [2024-12-09 04:16:35.821368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.479  qpair failed and we were unable to recover it.
00:26:07.479  [2024-12-09 04:16:35.821462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.479  [2024-12-09 04:16:35.821492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.479  qpair failed and we were unable to recover it.
00:26:07.479  [2024-12-09 04:16:35.821736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.479  [2024-12-09 04:16:35.821768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.479  qpair failed and we were unable to recover it.
00:26:07.479  [2024-12-09 04:16:35.822027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.479  [2024-12-09 04:16:35.822094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.479  qpair failed and we were unable to recover it.
00:26:07.479  [2024-12-09 04:16:35.822294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.479  [2024-12-09 04:16:35.822328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.479  qpair failed and we were unable to recover it.
00:26:07.479  [2024-12-09 04:16:35.822467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.479  [2024-12-09 04:16:35.822502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.479  qpair failed and we were unable to recover it.
00:26:07.479  [2024-12-09 04:16:35.822637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.479  [2024-12-09 04:16:35.822669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.479  qpair failed and we were unable to recover it.
00:26:07.479  [2024-12-09 04:16:35.822768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.479  [2024-12-09 04:16:35.822799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.479  qpair failed and we were unable to recover it.
00:26:07.479  [2024-12-09 04:16:35.822928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.480  [2024-12-09 04:16:35.822959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.480  qpair failed and we were unable to recover it.
00:26:07.480  [2024-12-09 04:16:35.823082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.480  [2024-12-09 04:16:35.823167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.480  qpair failed and we were unable to recover it.
00:26:07.480  [2024-12-09 04:16:35.823381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.480  [2024-12-09 04:16:35.823414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.480  qpair failed and we were unable to recover it.
00:26:07.480  [2024-12-09 04:16:35.823544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.480  [2024-12-09 04:16:35.823578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.480  qpair failed and we were unable to recover it.
00:26:07.480  [2024-12-09 04:16:35.823775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.480  [2024-12-09 04:16:35.823842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.480  qpair failed and we were unable to recover it.
00:26:07.480  [2024-12-09 04:16:35.824092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.480  [2024-12-09 04:16:35.824158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.480  qpair failed and we were unable to recover it.
00:26:07.480  [2024-12-09 04:16:35.824402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.480  [2024-12-09 04:16:35.824434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.480  qpair failed and we were unable to recover it.
00:26:07.480  [2024-12-09 04:16:35.824619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.480  [2024-12-09 04:16:35.824685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.480  qpair failed and we were unable to recover it.
00:26:07.480  [2024-12-09 04:16:35.824980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.480  [2024-12-09 04:16:35.825047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.480  qpair failed and we were unable to recover it.
00:26:07.480  [2024-12-09 04:16:35.825299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.480  [2024-12-09 04:16:35.825332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.480  qpair failed and we were unable to recover it.
00:26:07.480  [2024-12-09 04:16:35.825457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.480  [2024-12-09 04:16:35.825490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.480  qpair failed and we were unable to recover it.
00:26:07.480  [2024-12-09 04:16:35.825620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.480  [2024-12-09 04:16:35.825653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.480  qpair failed and we were unable to recover it.
00:26:07.480  [2024-12-09 04:16:35.825812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.480  [2024-12-09 04:16:35.825844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.480  qpair failed and we were unable to recover it.
00:26:07.480  [2024-12-09 04:16:35.825951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.480  [2024-12-09 04:16:35.825985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.480  qpair failed and we were unable to recover it.
00:26:07.480  [2024-12-09 04:16:35.826119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.480  [2024-12-09 04:16:35.826152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.480  qpair failed and we were unable to recover it.
00:26:07.480  [2024-12-09 04:16:35.826288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.480  [2024-12-09 04:16:35.826321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.480  qpair failed and we were unable to recover it.
00:26:07.480  [2024-12-09 04:16:35.826421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.480  [2024-12-09 04:16:35.826453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.480  qpair failed and we were unable to recover it.
00:26:07.480  [2024-12-09 04:16:35.826580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.480  [2024-12-09 04:16:35.826613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.480  qpair failed and we were unable to recover it.
00:26:07.480  [2024-12-09 04:16:35.826745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.480  [2024-12-09 04:16:35.826778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.480  qpair failed and we were unable to recover it.
00:26:07.480  [2024-12-09 04:16:35.826915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.480  [2024-12-09 04:16:35.826952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.480  qpair failed and we were unable to recover it.
00:26:07.480  [2024-12-09 04:16:35.827091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.480  [2024-12-09 04:16:35.827124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.480  qpair failed and we were unable to recover it.
00:26:07.480  [2024-12-09 04:16:35.827218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.480  [2024-12-09 04:16:35.827250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.480  qpair failed and we were unable to recover it.
00:26:07.480  [2024-12-09 04:16:35.827392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.480  [2024-12-09 04:16:35.827425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.480  qpair failed and we were unable to recover it.
00:26:07.480  [2024-12-09 04:16:35.827537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.480  [2024-12-09 04:16:35.827570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.480  qpair failed and we were unable to recover it.
00:26:07.480  [2024-12-09 04:16:35.827706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.480  [2024-12-09 04:16:35.827737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.480  qpair failed and we were unable to recover it.
00:26:07.480  [2024-12-09 04:16:35.827903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.480  [2024-12-09 04:16:35.827935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.480  qpair failed and we were unable to recover it.
00:26:07.480  [2024-12-09 04:16:35.828102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.480  [2024-12-09 04:16:35.828134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.480  qpair failed and we were unable to recover it.
00:26:07.480  [2024-12-09 04:16:35.828258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.480  [2024-12-09 04:16:35.828297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.480  qpair failed and we were unable to recover it.
00:26:07.480  [2024-12-09 04:16:35.828437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.480  [2024-12-09 04:16:35.828469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.480  qpair failed and we were unable to recover it.
00:26:07.480  [2024-12-09 04:16:35.828561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.480  [2024-12-09 04:16:35.828591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.480  qpair failed and we were unable to recover it.
00:26:07.480  [2024-12-09 04:16:35.828715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.480  [2024-12-09 04:16:35.828747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.480  qpair failed and we were unable to recover it.
00:26:07.480  [2024-12-09 04:16:35.828838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.480  [2024-12-09 04:16:35.828868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.481  qpair failed and we were unable to recover it.
00:26:07.481  [2024-12-09 04:16:35.828996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.481  [2024-12-09 04:16:35.829028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.481  qpair failed and we were unable to recover it.
00:26:07.481  [2024-12-09 04:16:35.829134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.481  [2024-12-09 04:16:35.829166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.481  qpair failed and we were unable to recover it.
00:26:07.481  [2024-12-09 04:16:35.829327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.481  [2024-12-09 04:16:35.829358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.481  qpair failed and we were unable to recover it.
00:26:07.481  [2024-12-09 04:16:35.829483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.481  [2024-12-09 04:16:35.829515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.481  qpair failed and we were unable to recover it.
00:26:07.481  [2024-12-09 04:16:35.829644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.481  [2024-12-09 04:16:35.829676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.481  qpair failed and we were unable to recover it.
00:26:07.481  [2024-12-09 04:16:35.829815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.481  [2024-12-09 04:16:35.829846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.481  qpair failed and we were unable to recover it.
00:26:07.481  [2024-12-09 04:16:35.829977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.481  [2024-12-09 04:16:35.830008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.481  qpair failed and we were unable to recover it.
00:26:07.481  [2024-12-09 04:16:35.830143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.481  [2024-12-09 04:16:35.830175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.481  qpair failed and we were unable to recover it.
00:26:07.481  [2024-12-09 04:16:35.830276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.481  [2024-12-09 04:16:35.830307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.481  qpair failed and we were unable to recover it.
00:26:07.481  [2024-12-09 04:16:35.830409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.481  [2024-12-09 04:16:35.830446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.481  qpair failed and we were unable to recover it.
00:26:07.481  [2024-12-09 04:16:35.830543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.481  [2024-12-09 04:16:35.830573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.481  qpair failed and we were unable to recover it.
00:26:07.481  [2024-12-09 04:16:35.830808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.481  [2024-12-09 04:16:35.830839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.481  qpair failed and we were unable to recover it.
00:26:07.481  [2024-12-09 04:16:35.830967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.481  [2024-12-09 04:16:35.830998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.481  qpair failed and we were unable to recover it.
00:26:07.481  [2024-12-09 04:16:35.831147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.481  [2024-12-09 04:16:35.831196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.481  qpair failed and we were unable to recover it.
00:26:07.483  (preceding two error messages and "qpair failed" repeated continuously for tqpair=0x7fe5b4000b90 until 04:16:35.860; output truncated)
00:26:07.484  qpair failed and we were unable to recover it.
00:26:07.484  [2024-12-09 04:16:35.860912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.484  [2024-12-09 04:16:35.860980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.484  qpair failed and we were unable to recover it.
00:26:07.484  [2024-12-09 04:16:35.861202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.484  [2024-12-09 04:16:35.861291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.484  qpair failed and we were unable to recover it.
00:26:07.484  [2024-12-09 04:16:35.861589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.484  [2024-12-09 04:16:35.861665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.484  qpair failed and we were unable to recover it.
00:26:07.484  [2024-12-09 04:16:35.861968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.484  [2024-12-09 04:16:35.862044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.484  qpair failed and we were unable to recover it.
00:26:07.484  [2024-12-09 04:16:35.862284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.484  [2024-12-09 04:16:35.862328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.484  qpair failed and we were unable to recover it.
00:26:07.484  [2024-12-09 04:16:35.862449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.484  [2024-12-09 04:16:35.862484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.484  qpair failed and we were unable to recover it.
00:26:07.484  [2024-12-09 04:16:35.862630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.484  [2024-12-09 04:16:35.862670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.484  qpair failed and we were unable to recover it.
00:26:07.484  [2024-12-09 04:16:35.862806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.484  [2024-12-09 04:16:35.862841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.484  qpair failed and we were unable to recover it.
00:26:07.484  [2024-12-09 04:16:35.863120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.484  [2024-12-09 04:16:35.863190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.484  qpair failed and we were unable to recover it.
00:26:07.484  [2024-12-09 04:16:35.863445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.484  [2024-12-09 04:16:35.863480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.484  qpair failed and we were unable to recover it.
00:26:07.484  [2024-12-09 04:16:35.863625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.484  [2024-12-09 04:16:35.863659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.484  qpair failed and we were unable to recover it.
00:26:07.484  [2024-12-09 04:16:35.863768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.484  [2024-12-09 04:16:35.863802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.484  qpair failed and we were unable to recover it.
00:26:07.484  [2024-12-09 04:16:35.863936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.484  [2024-12-09 04:16:35.863975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.484  qpair failed and we were unable to recover it.
00:26:07.484  [2024-12-09 04:16:35.864118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.484  [2024-12-09 04:16:35.864157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.484  qpair failed and we were unable to recover it.
00:26:07.484  [2024-12-09 04:16:35.864314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.484  [2024-12-09 04:16:35.864364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.484  qpair failed and we were unable to recover it.
00:26:07.484  [2024-12-09 04:16:35.864472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.484  [2024-12-09 04:16:35.864506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.484  qpair failed and we were unable to recover it.
00:26:07.484  [2024-12-09 04:16:35.864613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.484  [2024-12-09 04:16:35.864653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.484  qpair failed and we were unable to recover it.
00:26:07.484  [2024-12-09 04:16:35.864812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.484  [2024-12-09 04:16:35.864851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.484  qpair failed and we were unable to recover it.
00:26:07.484  [2024-12-09 04:16:35.865012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.484  [2024-12-09 04:16:35.865045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.484  qpair failed and we were unable to recover it.
00:26:07.484  [2024-12-09 04:16:35.865151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.484  [2024-12-09 04:16:35.865189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.484  qpair failed and we were unable to recover it.
00:26:07.484  [2024-12-09 04:16:35.865319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.484  [2024-12-09 04:16:35.865353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.484  qpair failed and we were unable to recover it.
00:26:07.484  [2024-12-09 04:16:35.865527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.484  [2024-12-09 04:16:35.865560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.484  qpair failed and we were unable to recover it.
00:26:07.484  [2024-12-09 04:16:35.865696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.484  [2024-12-09 04:16:35.865731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.484  qpair failed and we were unable to recover it.
00:26:07.484  [2024-12-09 04:16:35.865872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.484  [2024-12-09 04:16:35.865905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.484  qpair failed and we were unable to recover it.
00:26:07.484  [2024-12-09 04:16:35.866036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.484  [2024-12-09 04:16:35.866068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.484  qpair failed and we were unable to recover it.
00:26:07.484  [2024-12-09 04:16:35.866211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.484  [2024-12-09 04:16:35.866247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.484  qpair failed and we were unable to recover it.
00:26:07.484  [2024-12-09 04:16:35.866386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.484  [2024-12-09 04:16:35.866419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.484  qpair failed and we were unable to recover it.
00:26:07.484  [2024-12-09 04:16:35.866533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.484  [2024-12-09 04:16:35.866578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.484  qpair failed and we were unable to recover it.
00:26:07.484  [2024-12-09 04:16:35.866709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.484  [2024-12-09 04:16:35.866741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.484  qpair failed and we were unable to recover it.
00:26:07.484  [2024-12-09 04:16:35.866896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.484  [2024-12-09 04:16:35.866929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.484  qpair failed and we were unable to recover it.
00:26:07.484  [2024-12-09 04:16:35.867069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.484  [2024-12-09 04:16:35.867102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.484  qpair failed and we were unable to recover it.
00:26:07.484  [2024-12-09 04:16:35.867231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.484  [2024-12-09 04:16:35.867264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.484  qpair failed and we were unable to recover it.
00:26:07.484  [2024-12-09 04:16:35.867425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.484  [2024-12-09 04:16:35.867456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.484  qpair failed and we were unable to recover it.
00:26:07.484  [2024-12-09 04:16:35.867594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.484  [2024-12-09 04:16:35.867629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.484  qpair failed and we were unable to recover it.
00:26:07.484  [2024-12-09 04:16:35.867753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.484  [2024-12-09 04:16:35.867783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.484  qpair failed and we were unable to recover it.
00:26:07.484  [2024-12-09 04:16:35.867915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.484  [2024-12-09 04:16:35.867946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.484  qpair failed and we were unable to recover it.
00:26:07.485  [2024-12-09 04:16:35.868037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.485  [2024-12-09 04:16:35.868067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.485  qpair failed and we were unable to recover it.
00:26:07.485  [2024-12-09 04:16:35.868195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.485  [2024-12-09 04:16:35.868225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.485  qpair failed and we were unable to recover it.
00:26:07.485  [2024-12-09 04:16:35.868348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.485  [2024-12-09 04:16:35.868379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.485  qpair failed and we were unable to recover it.
00:26:07.485  [2024-12-09 04:16:35.868529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.485  [2024-12-09 04:16:35.868560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.485  qpair failed and we were unable to recover it.
00:26:07.485  [2024-12-09 04:16:35.868715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.485  [2024-12-09 04:16:35.868745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.485  qpair failed and we were unable to recover it.
00:26:07.485  [2024-12-09 04:16:35.868860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.485  [2024-12-09 04:16:35.868902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.485  qpair failed and we were unable to recover it.
00:26:07.485  [2024-12-09 04:16:35.869085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.485  [2024-12-09 04:16:35.869130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.485  qpair failed and we were unable to recover it.
00:26:07.485  [2024-12-09 04:16:35.869265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.485  [2024-12-09 04:16:35.869302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.485  qpair failed and we were unable to recover it.
00:26:07.485  [2024-12-09 04:16:35.869442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.485  [2024-12-09 04:16:35.869471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.485  qpair failed and we were unable to recover it.
00:26:07.485  [2024-12-09 04:16:35.869610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.485  [2024-12-09 04:16:35.869640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.485  qpair failed and we were unable to recover it.
00:26:07.485  [2024-12-09 04:16:35.869806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.485  [2024-12-09 04:16:35.869836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.485  qpair failed and we were unable to recover it.
00:26:07.485  [2024-12-09 04:16:35.869962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.485  [2024-12-09 04:16:35.869992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.485  qpair failed and we were unable to recover it.
00:26:07.485  [2024-12-09 04:16:35.870117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.485  [2024-12-09 04:16:35.870146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.485  qpair failed and we were unable to recover it.
00:26:07.485  [2024-12-09 04:16:35.870236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.485  [2024-12-09 04:16:35.870265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.485  qpair failed and we were unable to recover it.
00:26:07.485  [2024-12-09 04:16:35.870438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.485  [2024-12-09 04:16:35.870468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.485  qpair failed and we were unable to recover it.
00:26:07.485  [2024-12-09 04:16:35.870588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.485  [2024-12-09 04:16:35.870619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.485  qpair failed and we were unable to recover it.
00:26:07.485  [2024-12-09 04:16:35.870745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.485  [2024-12-09 04:16:35.870776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.485  qpair failed and we were unable to recover it.
00:26:07.485  [2024-12-09 04:16:35.870937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.485  [2024-12-09 04:16:35.870967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.485  qpair failed and we were unable to recover it.
00:26:07.485  [2024-12-09 04:16:35.871097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.485  [2024-12-09 04:16:35.871128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.485  qpair failed and we were unable to recover it.
00:26:07.485  [2024-12-09 04:16:35.871300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.485  [2024-12-09 04:16:35.871340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.485  qpair failed and we were unable to recover it.
00:26:07.485  [2024-12-09 04:16:35.871490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.485  [2024-12-09 04:16:35.871531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.485  qpair failed and we were unable to recover it.
00:26:07.485  [2024-12-09 04:16:35.871715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.485  [2024-12-09 04:16:35.871748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.485  qpair failed and we were unable to recover it.
00:26:07.485  [2024-12-09 04:16:35.871856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.485  [2024-12-09 04:16:35.871887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.485  qpair failed and we were unable to recover it.
00:26:07.485  [2024-12-09 04:16:35.871979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.485  [2024-12-09 04:16:35.872010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.485  qpair failed and we were unable to recover it.
00:26:07.485  [2024-12-09 04:16:35.872115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.485  [2024-12-09 04:16:35.872146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.485  qpair failed and we were unable to recover it.
00:26:07.485  [2024-12-09 04:16:35.872291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.485  [2024-12-09 04:16:35.872330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.485  qpair failed and we were unable to recover it.
00:26:07.485  [2024-12-09 04:16:35.872475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.485  [2024-12-09 04:16:35.872508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.485  qpair failed and we were unable to recover it.
00:26:07.485  [2024-12-09 04:16:35.872676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.485  [2024-12-09 04:16:35.872706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.485  qpair failed and we were unable to recover it.
00:26:07.485  [2024-12-09 04:16:35.872799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.485  [2024-12-09 04:16:35.872834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.485  qpair failed and we were unable to recover it.
00:26:07.485  [2024-12-09 04:16:35.872966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.485  [2024-12-09 04:16:35.873000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.485  qpair failed and we were unable to recover it.
00:26:07.485  [2024-12-09 04:16:35.873138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.485  [2024-12-09 04:16:35.873182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.485  qpair failed and we were unable to recover it.
00:26:07.485  [2024-12-09 04:16:35.873346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.485  [2024-12-09 04:16:35.873376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.485  qpair failed and we were unable to recover it.
00:26:07.485  [2024-12-09 04:16:35.873471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.485  [2024-12-09 04:16:35.873499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.485  qpair failed and we were unable to recover it.
00:26:07.486  [2024-12-09 04:16:35.873598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.486  [2024-12-09 04:16:35.873626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.486  qpair failed and we were unable to recover it.
00:26:07.486  [2024-12-09 04:16:35.873752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.486  [2024-12-09 04:16:35.873779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.486  qpair failed and we were unable to recover it.
00:26:07.486  [2024-12-09 04:16:35.873872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.486  [2024-12-09 04:16:35.873898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.486  qpair failed and we were unable to recover it.
00:26:07.486  [2024-12-09 04:16:35.874017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.486  [2024-12-09 04:16:35.874045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.486  qpair failed and we were unable to recover it.
00:26:07.486  [2024-12-09 04:16:35.874137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.486  [2024-12-09 04:16:35.874167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.486  qpair failed and we were unable to recover it.
00:26:07.486  [2024-12-09 04:16:35.874258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.486  [2024-12-09 04:16:35.874290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.486  qpair failed and we were unable to recover it.
00:26:07.486  [2024-12-09 04:16:35.874450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.486  [2024-12-09 04:16:35.874477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.486  qpair failed and we were unable to recover it.
00:26:07.486  [2024-12-09 04:16:35.874562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.486  [2024-12-09 04:16:35.874589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.486  qpair failed and we were unable to recover it.
00:26:07.486  [2024-12-09 04:16:35.874667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.486  [2024-12-09 04:16:35.874694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.486  qpair failed and we were unable to recover it.
00:26:07.486  [2024-12-09 04:16:35.874804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.486  [2024-12-09 04:16:35.874832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.486  qpair failed and we were unable to recover it.
00:26:07.486  [2024-12-09 04:16:35.874953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.486  [2024-12-09 04:16:35.874981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.486  qpair failed and we were unable to recover it.
00:26:07.486  [2024-12-09 04:16:35.875057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.486  [2024-12-09 04:16:35.875084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.486  qpair failed and we were unable to recover it.
00:26:07.486  [2024-12-09 04:16:35.875214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.486  [2024-12-09 04:16:35.875243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.486  qpair failed and we were unable to recover it.
00:26:07.486  [2024-12-09 04:16:35.875369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.486  [2024-12-09 04:16:35.875397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.486  qpair failed and we were unable to recover it.
00:26:07.486  [2024-12-09 04:16:35.875507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.486  [2024-12-09 04:16:35.875543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.486  qpair failed and we were unable to recover it.
00:26:07.486  [2024-12-09 04:16:35.875661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.486  [2024-12-09 04:16:35.875687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.486  qpair failed and we were unable to recover it.
00:26:07.486  [2024-12-09 04:16:35.875775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.486  [2024-12-09 04:16:35.875822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.486  qpair failed and we were unable to recover it.
00:26:07.486  [2024-12-09 04:16:35.876089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.486  [2024-12-09 04:16:35.876123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.486  qpair failed and we were unable to recover it.
00:26:07.486  [2024-12-09 04:16:35.876236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.486  [2024-12-09 04:16:35.876266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.486  qpair failed and we were unable to recover it.
00:26:07.486  [2024-12-09 04:16:35.876403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.486  [2024-12-09 04:16:35.876430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.486  qpair failed and we were unable to recover it.
00:26:07.486  [2024-12-09 04:16:35.876505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.486  [2024-12-09 04:16:35.876556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.486  qpair failed and we were unable to recover it.
00:26:07.486  [2024-12-09 04:16:35.876658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.486  [2024-12-09 04:16:35.876692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.486  qpair failed and we were unable to recover it.
00:26:07.486  [2024-12-09 04:16:35.876833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.486  [2024-12-09 04:16:35.876869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.486  qpair failed and we were unable to recover it.
00:26:07.486  [2024-12-09 04:16:35.877007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.486  [2024-12-09 04:16:35.877036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.486  qpair failed and we were unable to recover it.
00:26:07.486  [2024-12-09 04:16:35.877142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.486  [2024-12-09 04:16:35.877174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.486  qpair failed and we were unable to recover it.
00:26:07.486  [2024-12-09 04:16:35.877269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.486  [2024-12-09 04:16:35.877330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.486  qpair failed and we were unable to recover it.
00:26:07.486  [2024-12-09 04:16:35.877414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.486  [2024-12-09 04:16:35.877447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.486  qpair failed and we were unable to recover it.
00:26:07.486  [2024-12-09 04:16:35.877563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.486  [2024-12-09 04:16:35.877595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.486  qpair failed and we were unable to recover it.
00:26:07.486  [2024-12-09 04:16:35.877733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.486  [2024-12-09 04:16:35.877768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.486  qpair failed and we were unable to recover it.
00:26:07.486  [2024-12-09 04:16:35.877989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.486  [2024-12-09 04:16:35.878025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.486  qpair failed and we were unable to recover it.
00:26:07.486  [2024-12-09 04:16:35.878125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.486  [2024-12-09 04:16:35.878157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.486  qpair failed and we were unable to recover it.
00:26:07.486  [2024-12-09 04:16:35.878325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.486  [2024-12-09 04:16:35.878353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.486  qpair failed and we were unable to recover it.
00:26:07.486  [2024-12-09 04:16:35.878469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.486  [2024-12-09 04:16:35.878497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.486  qpair failed and we were unable to recover it.
00:26:07.486  [2024-12-09 04:16:35.878602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.486  [2024-12-09 04:16:35.878632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.486  qpair failed and we were unable to recover it.
00:26:07.486  [2024-12-09 04:16:35.878773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.486  [2024-12-09 04:16:35.878817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.486  qpair failed and we were unable to recover it.
00:26:07.486  [2024-12-09 04:16:35.878925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.486  [2024-12-09 04:16:35.878956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.486  qpair failed and we were unable to recover it.
00:26:07.486  [2024-12-09 04:16:35.879068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.486  [2024-12-09 04:16:35.879100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.486  qpair failed and we were unable to recover it.
00:26:07.486  [2024-12-09 04:16:35.879230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.486  [2024-12-09 04:16:35.879259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.487  qpair failed and we were unable to recover it.
00:26:07.487  [2024-12-09 04:16:35.879402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.487  [2024-12-09 04:16:35.879430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.487  qpair failed and we were unable to recover it.
00:26:07.487  [2024-12-09 04:16:35.879541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.487  [2024-12-09 04:16:35.879567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.487  qpair failed and we were unable to recover it.
00:26:07.487  [2024-12-09 04:16:35.879643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.487  [2024-12-09 04:16:35.879670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.487  qpair failed and we were unable to recover it.
00:26:07.487  [2024-12-09 04:16:35.879869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.487  [2024-12-09 04:16:35.879899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.487  qpair failed and we were unable to recover it.
00:26:07.487  [2024-12-09 04:16:35.880062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.487  [2024-12-09 04:16:35.880091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.487  qpair failed and we were unable to recover it.
00:26:07.487  [2024-12-09 04:16:35.880190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.487  [2024-12-09 04:16:35.880219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.487  qpair failed and we were unable to recover it.
00:26:07.487  [2024-12-09 04:16:35.880367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.487  [2024-12-09 04:16:35.880393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.487  qpair failed and we were unable to recover it.
00:26:07.487  [2024-12-09 04:16:35.880504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.487  [2024-12-09 04:16:35.880530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.487  qpair failed and we were unable to recover it.
00:26:07.487  [2024-12-09 04:16:35.880637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.487  [2024-12-09 04:16:35.880681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.487  qpair failed and we were unable to recover it.
00:26:07.487  [2024-12-09 04:16:35.880827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.487  [2024-12-09 04:16:35.880859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.487  qpair failed and we were unable to recover it.
00:26:07.487  [2024-12-09 04:16:35.881002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.487  [2024-12-09 04:16:35.881036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.487  qpair failed and we were unable to recover it.
00:26:07.487  [2024-12-09 04:16:35.881202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.487  [2024-12-09 04:16:35.881231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.487  qpair failed and we were unable to recover it.
00:26:07.487  [2024-12-09 04:16:35.881382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.487  [2024-12-09 04:16:35.881410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.487  qpair failed and we were unable to recover it.
00:26:07.487  [2024-12-09 04:16:35.881509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.487  [2024-12-09 04:16:35.881537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.487  qpair failed and we were unable to recover it.
00:26:07.487  [2024-12-09 04:16:35.881661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.487  [2024-12-09 04:16:35.881694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.487  qpair failed and we were unable to recover it.
00:26:07.487  [2024-12-09 04:16:35.881814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.487  [2024-12-09 04:16:35.881849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.487  qpair failed and we were unable to recover it.
00:26:07.487  [2024-12-09 04:16:35.882065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.487  [2024-12-09 04:16:35.882094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.487  qpair failed and we were unable to recover it.
00:26:07.487  [2024-12-09 04:16:35.882265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.487  [2024-12-09 04:16:35.882336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.487  qpair failed and we were unable to recover it.
00:26:07.487  [2024-12-09 04:16:35.882427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.487  [2024-12-09 04:16:35.882465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.487  qpair failed and we were unable to recover it.
00:26:07.487  [2024-12-09 04:16:35.882551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.487  [2024-12-09 04:16:35.882578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.487  qpair failed and we were unable to recover it.
00:26:07.487  [2024-12-09 04:16:35.882654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.487  [2024-12-09 04:16:35.882681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.487  qpair failed and we were unable to recover it.
00:26:07.487  [2024-12-09 04:16:35.882776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.487  [2024-12-09 04:16:35.882814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.487  qpair failed and we were unable to recover it.
00:26:07.487  [2024-12-09 04:16:35.882952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.487  [2024-12-09 04:16:35.883009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.487  qpair failed and we were unable to recover it.
00:26:07.487  [2024-12-09 04:16:35.883229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.487  [2024-12-09 04:16:35.883263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.487  qpair failed and we were unable to recover it.
00:26:07.487  [2024-12-09 04:16:35.883413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.487  [2024-12-09 04:16:35.883440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.487  qpair failed and we were unable to recover it.
00:26:07.487  [2024-12-09 04:16:35.883535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.487  [2024-12-09 04:16:35.883569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.487  qpair failed and we were unable to recover it.
00:26:07.487  [2024-12-09 04:16:35.883652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.487  [2024-12-09 04:16:35.883680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.487  qpair failed and we were unable to recover it.
00:26:07.487  [2024-12-09 04:16:35.883831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.487  [2024-12-09 04:16:35.883889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.487  qpair failed and we were unable to recover it.
00:26:07.487  [2024-12-09 04:16:35.884019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.487  [2024-12-09 04:16:35.884063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.487  qpair failed and we were unable to recover it.
00:26:07.487  [2024-12-09 04:16:35.884161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.487  [2024-12-09 04:16:35.884191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.487  qpair failed and we were unable to recover it.
00:26:07.487  [2024-12-09 04:16:35.884332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.487  [2024-12-09 04:16:35.884371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.487  qpair failed and we were unable to recover it.
00:26:07.487  [2024-12-09 04:16:35.884463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.487  [2024-12-09 04:16:35.884491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.487  qpair failed and we were unable to recover it.
00:26:07.487  [2024-12-09 04:16:35.884587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.487  [2024-12-09 04:16:35.884615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.487  qpair failed and we were unable to recover it.
00:26:07.487  [2024-12-09 04:16:35.884710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.487  [2024-12-09 04:16:35.884737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.487  qpair failed and we were unable to recover it.
00:26:07.487  [2024-12-09 04:16:35.884883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.487  [2024-12-09 04:16:35.884936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.487  qpair failed and we were unable to recover it.
00:26:07.487  [2024-12-09 04:16:35.885058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.487  [2024-12-09 04:16:35.885088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.487  qpair failed and we were unable to recover it.
00:26:07.487  [2024-12-09 04:16:35.885245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.487  [2024-12-09 04:16:35.885283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.487  qpair failed and we were unable to recover it.
00:26:07.487  [2024-12-09 04:16:35.885396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.487  [2024-12-09 04:16:35.885423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.488  qpair failed and we were unable to recover it.
00:26:07.488  [2024-12-09 04:16:35.885509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.488  [2024-12-09 04:16:35.885537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.488  qpair failed and we were unable to recover it.
00:26:07.488  [2024-12-09 04:16:35.885668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.488  [2024-12-09 04:16:35.885703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.488  qpair failed and we were unable to recover it.
00:26:07.488  [2024-12-09 04:16:35.885864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.488  [2024-12-09 04:16:35.885895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.488  qpair failed and we were unable to recover it.
00:26:07.488  [2024-12-09 04:16:35.886029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.488  [2024-12-09 04:16:35.886060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.488  qpair failed and we were unable to recover it.
00:26:07.488  [2024-12-09 04:16:35.886149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.488  [2024-12-09 04:16:35.886181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.488  qpair failed and we were unable to recover it.
00:26:07.488  [2024-12-09 04:16:35.886332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.488  [2024-12-09 04:16:35.886359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.488  qpair failed and we were unable to recover it.
00:26:07.488  [2024-12-09 04:16:35.886449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.488  [2024-12-09 04:16:35.886475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.488  qpair failed and we were unable to recover it.
00:26:07.488  [2024-12-09 04:16:35.886633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.488  [2024-12-09 04:16:35.886659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.488  qpair failed and we were unable to recover it.
00:26:07.488  [2024-12-09 04:16:35.886807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.488  [2024-12-09 04:16:35.886834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.488  qpair failed and we were unable to recover it.
00:26:07.488  [2024-12-09 04:16:35.886947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.488  [2024-12-09 04:16:35.886995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.488  qpair failed and we were unable to recover it.
00:26:07.488  [2024-12-09 04:16:35.887098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.488  [2024-12-09 04:16:35.887127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.488  qpair failed and we were unable to recover it.
00:26:07.488  [2024-12-09 04:16:35.887245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.488  [2024-12-09 04:16:35.887279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.488  qpair failed and we were unable to recover it.
00:26:07.488  [2024-12-09 04:16:35.887425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.488  [2024-12-09 04:16:35.887452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.488  qpair failed and we were unable to recover it.
00:26:07.488  [2024-12-09 04:16:35.887564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.488  [2024-12-09 04:16:35.887590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.488  qpair failed and we were unable to recover it.
00:26:07.488  [2024-12-09 04:16:35.887799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.488  [2024-12-09 04:16:35.887864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.488  qpair failed and we were unable to recover it.
00:26:07.488  [2024-12-09 04:16:35.888039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.488  [2024-12-09 04:16:35.888110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.488  qpair failed and we were unable to recover it.
00:26:07.488  [2024-12-09 04:16:35.888202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.488  [2024-12-09 04:16:35.888231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.488  qpair failed and we were unable to recover it.
00:26:07.488  [2024-12-09 04:16:35.888376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.488  [2024-12-09 04:16:35.888403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.488  qpair failed and we were unable to recover it.
00:26:07.488  [2024-12-09 04:16:35.888525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.488  [2024-12-09 04:16:35.888551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.488  qpair failed and we were unable to recover it.
00:26:07.488  [2024-12-09 04:16:35.888679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.488  [2024-12-09 04:16:35.888732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.488  qpair failed and we were unable to recover it.
00:26:07.488  [2024-12-09 04:16:35.888982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.488  [2024-12-09 04:16:35.889046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.488  qpair failed and we were unable to recover it.
00:26:07.488  [2024-12-09 04:16:35.889232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.488  [2024-12-09 04:16:35.889295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.488  qpair failed and we were unable to recover it.
00:26:07.488  [2024-12-09 04:16:35.889402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.488  [2024-12-09 04:16:35.889429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.488  qpair failed and we were unable to recover it.
00:26:07.488  [2024-12-09 04:16:35.889509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.488  [2024-12-09 04:16:35.889536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.488  qpair failed and we were unable to recover it.
00:26:07.488  [2024-12-09 04:16:35.889679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.488  [2024-12-09 04:16:35.889711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.488  qpair failed and we were unable to recover it.
00:26:07.488  [2024-12-09 04:16:35.889905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.488  [2024-12-09 04:16:35.889934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.488  qpair failed and we were unable to recover it.
00:26:07.488  [2024-12-09 04:16:35.890203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.488  [2024-12-09 04:16:35.890231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.488  qpair failed and we were unable to recover it.
00:26:07.488  [2024-12-09 04:16:35.890382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.488  [2024-12-09 04:16:35.890409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.488  qpair failed and we were unable to recover it.
00:26:07.488  [2024-12-09 04:16:35.890525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.488  [2024-12-09 04:16:35.890571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.488  qpair failed and we were unable to recover it.
00:26:07.488  [2024-12-09 04:16:35.890684 - 04:16:35.908425] posix.c:1054:posix_sock_create / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: previous error sequence repeated 103 times: connect() to 10.0.0.2 port 4420 failed with errno = 111 (ECONNREFUSED), cycling through tqpairs 0x1818fa0, 0x7fe5b4000b90, and 0x7fe5b8000b90; each qpair failed and could not be recovered.
00:26:07.491  [2024-12-09 04:16:35.910361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.491  [2024-12-09 04:16:35.910394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.491  qpair failed and we were unable to recover it.
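Editor's note on the repeated failure above: errno 111 is ECONNREFUSED on Linux, i.e. nothing was accepting TCP connections on 10.0.0.2:4420 (the NVMe/TCP target port) while the initiator kept retrying across its qpairs. A minimal sketch of the same failure mode, assuming only a local port with no listener — the address and port here are stand-ins, not the ones from the log:

```python
# errno 111 (ECONNREFUSED) reproduction: connect() to a port nobody listens on.
import errno
import socket

def try_connect(host: str, port: int) -> int:
    """Return 0 on success, or the errno from a failed connect()."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        return s.connect_ex((host, port))

# Reserve a free local port without listening on it, then release it, so the
# subsequent connect() is refused by the kernel (small race if another process
# grabs the port, negligible in practice).
probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
probe.bind(("127.0.0.1", 0))
free_port = probe.getsockname()[1]
probe.close()

rc = try_connect("127.0.0.1", free_port)
print(rc == errno.ECONNREFUSED)  # same errno the SPDK log reports as 111
```

In the test run above, the fix is on the target side (start the NVMe-oF listener before the initiator connects), not in the retry loop.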
00:26:07.494  [2024-12-09 04:16:35.925348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494  [2024-12-09 04:16:35.925375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494  qpair failed and we were unable to recover it.
00:26:07.494  [2024-12-09 04:16:35.925449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494  [2024-12-09 04:16:35.925475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494  qpair failed and we were unable to recover it.
00:26:07.494  [2024-12-09 04:16:35.925561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494  [2024-12-09 04:16:35.925588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494  qpair failed and we were unable to recover it.
00:26:07.494  [2024-12-09 04:16:35.925673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494  [2024-12-09 04:16:35.925700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494  qpair failed and we were unable to recover it.
00:26:07.494  [2024-12-09 04:16:35.925861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494  [2024-12-09 04:16:35.925890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494  qpair failed and we were unable to recover it.
00:26:07.494  [2024-12-09 04:16:35.926019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494  [2024-12-09 04:16:35.926048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494  qpair failed and we were unable to recover it.
00:26:07.494  [2024-12-09 04:16:35.926158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494  [2024-12-09 04:16:35.926190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.494  qpair failed and we were unable to recover it.
00:26:07.494  [2024-12-09 04:16:35.926321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494  [2024-12-09 04:16:35.926351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.494  qpair failed and we were unable to recover it.
00:26:07.494  [2024-12-09 04:16:35.926477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494  [2024-12-09 04:16:35.926506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.494  qpair failed and we were unable to recover it.
00:26:07.494  [2024-12-09 04:16:35.926693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494  [2024-12-09 04:16:35.926760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.494  qpair failed and we were unable to recover it.
00:26:07.494  [2024-12-09 04:16:35.927003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494  [2024-12-09 04:16:35.927070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.494  qpair failed and we were unable to recover it.
00:26:07.494  [2024-12-09 04:16:35.927350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494  [2024-12-09 04:16:35.927377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.494  qpair failed and we were unable to recover it.
00:26:07.494  [2024-12-09 04:16:35.927467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494  [2024-12-09 04:16:35.927493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.494  qpair failed and we were unable to recover it.
00:26:07.494  [2024-12-09 04:16:35.927629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494  [2024-12-09 04:16:35.927656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.494  qpair failed and we were unable to recover it.
00:26:07.494  [2024-12-09 04:16:35.927762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494  [2024-12-09 04:16:35.927796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.494  qpair failed and we were unable to recover it.
00:26:07.494  [2024-12-09 04:16:35.927952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494  [2024-12-09 04:16:35.928001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494  qpair failed and we were unable to recover it.
00:26:07.494  [2024-12-09 04:16:35.928122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494  [2024-12-09 04:16:35.928152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494  qpair failed and we were unable to recover it.
00:26:07.494  [2024-12-09 04:16:35.928283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494  [2024-12-09 04:16:35.928313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494  qpair failed and we were unable to recover it.
00:26:07.494  [2024-12-09 04:16:35.928407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494  [2024-12-09 04:16:35.928437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494  qpair failed and we were unable to recover it.
00:26:07.494  [2024-12-09 04:16:35.928646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494  [2024-12-09 04:16:35.928693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494  qpair failed and we were unable to recover it.
00:26:07.494  [2024-12-09 04:16:35.928838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494  [2024-12-09 04:16:35.928888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494  qpair failed and we were unable to recover it.
00:26:07.494  [2024-12-09 04:16:35.929033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494  [2024-12-09 04:16:35.929066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.494  qpair failed and we were unable to recover it.
00:26:07.494  [2024-12-09 04:16:35.929229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494  [2024-12-09 04:16:35.929262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.494  qpair failed and we were unable to recover it.
00:26:07.494  [2024-12-09 04:16:35.929380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494  [2024-12-09 04:16:35.929409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.494  qpair failed and we were unable to recover it.
00:26:07.494  [2024-12-09 04:16:35.929587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494  [2024-12-09 04:16:35.929645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494  qpair failed and we were unable to recover it.
00:26:07.494  [2024-12-09 04:16:35.929813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494  [2024-12-09 04:16:35.929869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494  qpair failed and we were unable to recover it.
00:26:07.494  [2024-12-09 04:16:35.930102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494  [2024-12-09 04:16:35.930153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494  qpair failed and we were unable to recover it.
00:26:07.494  [2024-12-09 04:16:35.930305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494  [2024-12-09 04:16:35.930348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494  qpair failed and we were unable to recover it.
00:26:07.494  [2024-12-09 04:16:35.930460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494  [2024-12-09 04:16:35.930487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494  qpair failed and we were unable to recover it.
00:26:07.494  [2024-12-09 04:16:35.930601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494  [2024-12-09 04:16:35.930627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494  qpair failed and we were unable to recover it.
00:26:07.494  [2024-12-09 04:16:35.930733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494  [2024-12-09 04:16:35.930759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494  qpair failed and we were unable to recover it.
00:26:07.494  [2024-12-09 04:16:35.930902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494  [2024-12-09 04:16:35.930928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494  qpair failed and we were unable to recover it.
00:26:07.494  [2024-12-09 04:16:35.931062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494  [2024-12-09 04:16:35.931091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494  qpair failed and we were unable to recover it.
00:26:07.494  [2024-12-09 04:16:35.931185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494  [2024-12-09 04:16:35.931216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.494  qpair failed and we were unable to recover it.
00:26:07.495  [2024-12-09 04:16:35.931353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495  [2024-12-09 04:16:35.931383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495  qpair failed and we were unable to recover it.
00:26:07.495  [2024-12-09 04:16:35.931504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495  [2024-12-09 04:16:35.931533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495  qpair failed and we were unable to recover it.
00:26:07.495  [2024-12-09 04:16:35.931716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495  [2024-12-09 04:16:35.931782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495  qpair failed and we were unable to recover it.
00:26:07.495  [2024-12-09 04:16:35.932076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495  [2024-12-09 04:16:35.932141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495  qpair failed and we were unable to recover it.
00:26:07.495  [2024-12-09 04:16:35.932394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495  [2024-12-09 04:16:35.932424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495  qpair failed and we were unable to recover it.
00:26:07.495  [2024-12-09 04:16:35.932666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495  [2024-12-09 04:16:35.932730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495  qpair failed and we were unable to recover it.
00:26:07.495  [2024-12-09 04:16:35.932927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495  [2024-12-09 04:16:35.932973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495  qpair failed and we were unable to recover it.
00:26:07.495  [2024-12-09 04:16:35.933186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495  [2024-12-09 04:16:35.933220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495  qpair failed and we were unable to recover it.
00:26:07.495  [2024-12-09 04:16:35.933344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495  [2024-12-09 04:16:35.933374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495  qpair failed and we were unable to recover it.
00:26:07.495  [2024-12-09 04:16:35.933495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495  [2024-12-09 04:16:35.933526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495  qpair failed and we were unable to recover it.
00:26:07.495  [2024-12-09 04:16:35.933669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495  [2024-12-09 04:16:35.933702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495  qpair failed and we were unable to recover it.
00:26:07.495  [2024-12-09 04:16:35.933887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495  [2024-12-09 04:16:35.933919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495  qpair failed and we were unable to recover it.
00:26:07.495  [2024-12-09 04:16:35.934156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495  [2024-12-09 04:16:35.934222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495  qpair failed and we were unable to recover it.
00:26:07.495  [2024-12-09 04:16:35.934461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495  [2024-12-09 04:16:35.934488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495  qpair failed and we were unable to recover it.
00:26:07.495  [2024-12-09 04:16:35.934630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495  [2024-12-09 04:16:35.934656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495  qpair failed and we were unable to recover it.
00:26:07.495  [2024-12-09 04:16:35.934762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495  [2024-12-09 04:16:35.934810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495  qpair failed and we were unable to recover it.
00:26:07.495  [2024-12-09 04:16:35.935029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495  [2024-12-09 04:16:35.935094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495  qpair failed and we were unable to recover it.
00:26:07.495  [2024-12-09 04:16:35.935377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495  [2024-12-09 04:16:35.935421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.495  qpair failed and we were unable to recover it.
00:26:07.495  [2024-12-09 04:16:35.935587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495  [2024-12-09 04:16:35.935619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.495  qpair failed and we were unable to recover it.
00:26:07.495  [2024-12-09 04:16:35.935787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495  [2024-12-09 04:16:35.935824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.495  qpair failed and we were unable to recover it.
00:26:07.495  [2024-12-09 04:16:35.935985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495  [2024-12-09 04:16:35.936032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.495  qpair failed and we were unable to recover it.
00:26:07.495  [2024-12-09 04:16:35.936151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495  [2024-12-09 04:16:35.936178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.495  qpair failed and we were unable to recover it.
00:26:07.495  [2024-12-09 04:16:35.936296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495  [2024-12-09 04:16:35.936324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.495  qpair failed and we were unable to recover it.
00:26:07.495  [2024-12-09 04:16:35.936413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495  [2024-12-09 04:16:35.936439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.495  qpair failed and we were unable to recover it.
00:26:07.495  [2024-12-09 04:16:35.936552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495  [2024-12-09 04:16:35.936579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.495  qpair failed and we were unable to recover it.
00:26:07.495  [2024-12-09 04:16:35.936663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495  [2024-12-09 04:16:35.936708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.495  qpair failed and we were unable to recover it.
00:26:07.495  [2024-12-09 04:16:35.936795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495  [2024-12-09 04:16:35.936826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495  qpair failed and we were unable to recover it.
00:26:07.495  [2024-12-09 04:16:35.936936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495  [2024-12-09 04:16:35.936965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495  qpair failed and we were unable to recover it.
00:26:07.495  [2024-12-09 04:16:35.937047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495  [2024-12-09 04:16:35.937076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495  qpair failed and we were unable to recover it.
00:26:07.495  [2024-12-09 04:16:35.937175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495  [2024-12-09 04:16:35.937204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495  qpair failed and we were unable to recover it.
00:26:07.495  [2024-12-09 04:16:35.937356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495  [2024-12-09 04:16:35.937390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495  qpair failed and we were unable to recover it.
00:26:07.495  [2024-12-09 04:16:35.937543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495  [2024-12-09 04:16:35.937621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495  qpair failed and we were unable to recover it.
00:26:07.495  [2024-12-09 04:16:35.937896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495  [2024-12-09 04:16:35.937928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495  qpair failed and we were unable to recover it.
00:26:07.495  [2024-12-09 04:16:35.938062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495  [2024-12-09 04:16:35.938094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495  qpair failed and we were unable to recover it.
00:26:07.495  [2024-12-09 04:16:35.938196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495  [2024-12-09 04:16:35.938229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495  qpair failed and we were unable to recover it.
00:26:07.495  [2024-12-09 04:16:35.938368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495  [2024-12-09 04:16:35.938398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495  qpair failed and we were unable to recover it.
00:26:07.495  [2024-12-09 04:16:35.938512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.496  [2024-12-09 04:16:35.938540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.496  qpair failed and we were unable to recover it.
00:26:07.496  [2024-12-09 04:16:35.938831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.496  [2024-12-09 04:16:35.938857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.496  qpair failed and we were unable to recover it.
00:26:07.496  [2024-12-09 04:16:35.938994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.496  [2024-12-09 04:16:35.939020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.496  qpair failed and we were unable to recover it.
00:26:07.496  [2024-12-09 04:16:35.939131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.496  [2024-12-09 04:16:35.939158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.496  qpair failed and we were unable to recover it.
00:26:07.496  [2024-12-09 04:16:35.939311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.496  [2024-12-09 04:16:35.939340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.496  qpair failed and we were unable to recover it.
00:26:07.496  [2024-12-09 04:16:35.939464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.496  [2024-12-09 04:16:35.939502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.496  qpair failed and we were unable to recover it.
00:26:07.496  [2024-12-09 04:16:35.939608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.496  [2024-12-09 04:16:35.939658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.496  qpair failed and we were unable to recover it.
00:26:07.496  [2024-12-09 04:16:35.939837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.496  [2024-12-09 04:16:35.939868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.496  qpair failed and we were unable to recover it.
00:26:07.496  [2024-12-09 04:16:35.939988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.496  [2024-12-09 04:16:35.940020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.496  qpair failed and we were unable to recover it.
00:26:07.496  [2024-12-09 04:16:35.940116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.496  [2024-12-09 04:16:35.940147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.496  qpair failed and we were unable to recover it.
00:26:07.496  [2024-12-09 04:16:35.940301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.496  [2024-12-09 04:16:35.940333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.496  qpair failed and we were unable to recover it.
00:26:07.496  [2024-12-09 04:16:35.940490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.496  [2024-12-09 04:16:35.940520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.496  qpair failed and we were unable to recover it.
00:26:07.496  [2024-12-09 04:16:35.940616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.496  [2024-12-09 04:16:35.940645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.496  qpair failed and we were unable to recover it.
00:26:07.497  [2024-12-09 04:16:35.945132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.497  [2024-12-09 04:16:35.945191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.497  qpair failed and we were unable to recover it.
00:26:07.497  [2024-12-09 04:16:35.947136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.497  [2024-12-09 04:16:35.947167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.497  qpair failed and we were unable to recover it.
00:26:07.497  [2024-12-09 04:16:35.947308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.497  [2024-12-09 04:16:35.947338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.497  qpair failed and we were unable to recover it.
00:26:07.497  [2024-12-09 04:16:35.947443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.497  [2024-12-09 04:16:35.947472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.497  qpair failed and we were unable to recover it.
00:26:07.497  [2024-12-09 04:16:35.947603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.497  [2024-12-09 04:16:35.947631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.497  qpair failed and we were unable to recover it.
00:26:07.497  [2024-12-09 04:16:35.947767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.497  [2024-12-09 04:16:35.947795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.497  qpair failed and we were unable to recover it.
00:26:07.497  [2024-12-09 04:16:35.947924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.497  [2024-12-09 04:16:35.947951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.497  qpair failed and we were unable to recover it.
00:26:07.497  [2024-12-09 04:16:35.948085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.497  [2024-12-09 04:16:35.948111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.497  qpair failed and we were unable to recover it.
00:26:07.497  [2024-12-09 04:16:35.948187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.497  [2024-12-09 04:16:35.948214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.497  qpair failed and we were unable to recover it.
00:26:07.497  [2024-12-09 04:16:35.948360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.497  [2024-12-09 04:16:35.948390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.497  qpair failed and we were unable to recover it.
00:26:07.497  [2024-12-09 04:16:35.948589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.497  [2024-12-09 04:16:35.948653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.497  qpair failed and we were unable to recover it.
00:26:07.497  [2024-12-09 04:16:35.948876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.497  [2024-12-09 04:16:35.948941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.497  qpair failed and we were unable to recover it.
00:26:07.497  [2024-12-09 04:16:35.949188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.497  [2024-12-09 04:16:35.949253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.497  qpair failed and we were unable to recover it.
00:26:07.497  [2024-12-09 04:16:35.949477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.497  [2024-12-09 04:16:35.949519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.497  qpair failed and we were unable to recover it.
00:26:07.497  [2024-12-09 04:16:35.949633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.497  [2024-12-09 04:16:35.949661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.497  qpair failed and we were unable to recover it.
00:26:07.497  [2024-12-09 04:16:35.949865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.497  [2024-12-09 04:16:35.949897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.497  qpair failed and we were unable to recover it.
00:26:07.497  [2024-12-09 04:16:35.950029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.497  [2024-12-09 04:16:35.950062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.497  qpair failed and we were unable to recover it.
00:26:07.497  [2024-12-09 04:16:35.950153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.497  [2024-12-09 04:16:35.950186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.497  qpair failed and we were unable to recover it.
00:26:07.497  [2024-12-09 04:16:35.950337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.497  [2024-12-09 04:16:35.950364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.497  qpair failed and we were unable to recover it.
00:26:07.497  [2024-12-09 04:16:35.950504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.497  [2024-12-09 04:16:35.950530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.497  qpair failed and we were unable to recover it.
00:26:07.497  [2024-12-09 04:16:35.950664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.497  [2024-12-09 04:16:35.950708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.497  qpair failed and we were unable to recover it.
00:26:07.497  [2024-12-09 04:16:35.950821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.497  [2024-12-09 04:16:35.950847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.497  qpair failed and we were unable to recover it.
00:26:07.497  [2024-12-09 04:16:35.950933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.498  [2024-12-09 04:16:35.950987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.498  qpair failed and we were unable to recover it.
00:26:07.498  [2024-12-09 04:16:35.951229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.498  [2024-12-09 04:16:35.951255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.498  qpair failed and we were unable to recover it.
00:26:07.498  [2024-12-09 04:16:35.951376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.498  [2024-12-09 04:16:35.951403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.498  qpair failed and we were unable to recover it.
00:26:07.498  [2024-12-09 04:16:35.951560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.498  [2024-12-09 04:16:35.951589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.498  qpair failed and we were unable to recover it.
00:26:07.498  [2024-12-09 04:16:35.951676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.498  [2024-12-09 04:16:35.951705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.498  qpair failed and we were unable to recover it.
00:26:07.498  [2024-12-09 04:16:35.951851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.498  [2024-12-09 04:16:35.951884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.498  qpair failed and we were unable to recover it.
00:26:07.498  [2024-12-09 04:16:35.952026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.498  [2024-12-09 04:16:35.952053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.498  qpair failed and we were unable to recover it.
00:26:07.498  [2024-12-09 04:16:35.952168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.498  [2024-12-09 04:16:35.952193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.498  qpair failed and we were unable to recover it.
00:26:07.498  [2024-12-09 04:16:35.952333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.498  [2024-12-09 04:16:35.952363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.498  qpair failed and we were unable to recover it.
00:26:07.498  [2024-12-09 04:16:35.952519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.498  [2024-12-09 04:16:35.952546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.498  qpair failed and we were unable to recover it.
00:26:07.498  [2024-12-09 04:16:35.952655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.498  [2024-12-09 04:16:35.952682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.498  qpair failed and we were unable to recover it.
00:26:07.498  [2024-12-09 04:16:35.952815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.498  [2024-12-09 04:16:35.952879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.498  qpair failed and we were unable to recover it.
00:26:07.498  [2024-12-09 04:16:35.953072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.498  [2024-12-09 04:16:35.953099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.498  qpair failed and we were unable to recover it.
00:26:07.498  [2024-12-09 04:16:35.953182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.498  [2024-12-09 04:16:35.953208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.498  qpair failed and we were unable to recover it.
00:26:07.498  [2024-12-09 04:16:35.953328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.498  [2024-12-09 04:16:35.953355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.498  qpair failed and we were unable to recover it.
00:26:07.498  [2024-12-09 04:16:35.953451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.498  [2024-12-09 04:16:35.953477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.498  qpair failed and we were unable to recover it.
00:26:07.498  [2024-12-09 04:16:35.953600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.498  [2024-12-09 04:16:35.953634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.498  qpair failed and we were unable to recover it.
00:26:07.498  [2024-12-09 04:16:35.953741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.498  [2024-12-09 04:16:35.953777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.498  qpair failed and we were unable to recover it.
00:26:07.498  [2024-12-09 04:16:35.954012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.498  [2024-12-09 04:16:35.954076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.498  qpair failed and we were unable to recover it.
00:26:07.498  [2024-12-09 04:16:35.954282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.498  [2024-12-09 04:16:35.954311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.498  qpair failed and we were unable to recover it.
00:26:07.498  [2024-12-09 04:16:35.954435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.498  [2024-12-09 04:16:35.954465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.498  qpair failed and we were unable to recover it.
00:26:07.498  [2024-12-09 04:16:35.954568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.498  [2024-12-09 04:16:35.954607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.498  qpair failed and we were unable to recover it.
00:26:07.498  [2024-12-09 04:16:35.954751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.498  [2024-12-09 04:16:35.954783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.498  qpair failed and we were unable to recover it.
00:26:07.498  [2024-12-09 04:16:35.954936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.498  [2024-12-09 04:16:35.954966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.498  qpair failed and we were unable to recover it.
00:26:07.498  [2024-12-09 04:16:35.955191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.498  [2024-12-09 04:16:35.955217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.498  qpair failed and we were unable to recover it.
00:26:07.498  [2024-12-09 04:16:35.955301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.498  [2024-12-09 04:16:35.955327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.498  qpair failed and we were unable to recover it.
00:26:07.498  [2024-12-09 04:16:35.955425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.498  [2024-12-09 04:16:35.955455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.498  qpair failed and we were unable to recover it.
00:26:07.498  [2024-12-09 04:16:35.955640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.498  [2024-12-09 04:16:35.955705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.498  qpair failed and we were unable to recover it.
00:26:07.498  [2024-12-09 04:16:35.956009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.498  [2024-12-09 04:16:35.956041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.498  qpair failed and we were unable to recover it.
00:26:07.498  [2024-12-09 04:16:35.956183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.498  [2024-12-09 04:16:35.956227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.498  qpair failed and we were unable to recover it.
00:26:07.498  [2024-12-09 04:16:35.956315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.498  [2024-12-09 04:16:35.956343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.498  qpair failed and we were unable to recover it.
00:26:07.498  [2024-12-09 04:16:35.956462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.498  [2024-12-09 04:16:35.956488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.498  qpair failed and we were unable to recover it.
00:26:07.498  [2024-12-09 04:16:35.956652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.498  [2024-12-09 04:16:35.956716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.498  qpair failed and we were unable to recover it.
00:26:07.498  [2024-12-09 04:16:35.957004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.498  [2024-12-09 04:16:35.957070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.498  qpair failed and we were unable to recover it.
00:26:07.498  [2024-12-09 04:16:35.957379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.498  [2024-12-09 04:16:35.957408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.498  qpair failed and we were unable to recover it.
00:26:07.498  [2024-12-09 04:16:35.957519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.499  [2024-12-09 04:16:35.957548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.499  qpair failed and we were unable to recover it.
00:26:07.499  [2024-12-09 04:16:35.957841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.499  [2024-12-09 04:16:35.957905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.499  qpair failed and we were unable to recover it.
00:26:07.499  [2024-12-09 04:16:35.958120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.499  [2024-12-09 04:16:35.958186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.499  qpair failed and we were unable to recover it.
00:26:07.500  [2024-12-09 04:16:35.968153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.500  [2024-12-09 04:16:35.968248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.500  qpair failed and we were unable to recover it.
00:26:07.501  [2024-12-09 04:16:35.981571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.501  [2024-12-09 04:16:35.981616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.501  qpair failed and we were unable to recover it.
00:26:07.501  [2024-12-09 04:16:35.981708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.502  [2024-12-09 04:16:35.981740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.502  qpair failed and we were unable to recover it.
00:26:07.502  [2024-12-09 04:16:35.981858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.502  [2024-12-09 04:16:35.981885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.502  qpair failed and we were unable to recover it.
00:26:07.502  [2024-12-09 04:16:35.982031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.502  [2024-12-09 04:16:35.982057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.502  qpair failed and we were unable to recover it.
00:26:07.502  [2024-12-09 04:16:35.982160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.502  [2024-12-09 04:16:35.982186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.502  qpair failed and we were unable to recover it.
00:26:07.502  [2024-12-09 04:16:35.982323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.502  [2024-12-09 04:16:35.982391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.502  qpair failed and we were unable to recover it.
00:26:07.502  [2024-12-09 04:16:35.982696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.502  [2024-12-09 04:16:35.982762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.502  qpair failed and we were unable to recover it.
00:26:07.502  [2024-12-09 04:16:35.983061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.502  [2024-12-09 04:16:35.983126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.502  qpair failed and we were unable to recover it.
00:26:07.502  [2024-12-09 04:16:35.983349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.502  [2024-12-09 04:16:35.983397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.502  qpair failed and we were unable to recover it.
00:26:07.502  [2024-12-09 04:16:35.983540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.502  [2024-12-09 04:16:35.983567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.502  qpair failed and we were unable to recover it.
00:26:07.502  [2024-12-09 04:16:35.983798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.502  [2024-12-09 04:16:35.983864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.502  qpair failed and we were unable to recover it.
00:26:07.502  [2024-12-09 04:16:35.984110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.502  [2024-12-09 04:16:35.984175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.502  qpair failed and we were unable to recover it.
00:26:07.502  [2024-12-09 04:16:35.984435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.502  [2024-12-09 04:16:35.984464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.502  qpair failed and we were unable to recover it.
00:26:07.502  [2024-12-09 04:16:35.984549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.502  [2024-12-09 04:16:35.984575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.502  qpair failed and we were unable to recover it.
00:26:07.502  [2024-12-09 04:16:35.984757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.502  [2024-12-09 04:16:35.984784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.502  qpair failed and we were unable to recover it.
00:26:07.502  [2024-12-09 04:16:35.985037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.502  [2024-12-09 04:16:35.985104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.502  qpair failed and we were unable to recover it.
00:26:07.502  [2024-12-09 04:16:35.985323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.502  [2024-12-09 04:16:35.985389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.502  qpair failed and we were unable to recover it.
00:26:07.502  [2024-12-09 04:16:35.985695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.502  [2024-12-09 04:16:35.985722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.502  qpair failed and we were unable to recover it.
00:26:07.783  [2024-12-09 04:16:35.985807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.783  [2024-12-09 04:16:35.985832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.783  qpair failed and we were unable to recover it.
00:26:07.783  [2024-12-09 04:16:35.986016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.783  [2024-12-09 04:16:35.986082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.783  qpair failed and we were unable to recover it.
00:26:07.783  [2024-12-09 04:16:35.986352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.783  [2024-12-09 04:16:35.986419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.783  qpair failed and we were unable to recover it.
00:26:07.783  [2024-12-09 04:16:35.986715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.783  [2024-12-09 04:16:35.986780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.783  qpair failed and we were unable to recover it.
00:26:07.783  [2024-12-09 04:16:35.987031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.783  [2024-12-09 04:16:35.987097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.783  qpair failed and we were unable to recover it.
00:26:07.783  [2024-12-09 04:16:35.987392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.783  [2024-12-09 04:16:35.987460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.783  qpair failed and we were unable to recover it.
00:26:07.783  [2024-12-09 04:16:35.987721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.783  [2024-12-09 04:16:35.987754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.783  qpair failed and we were unable to recover it.
00:26:07.783  [2024-12-09 04:16:35.987848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.783  [2024-12-09 04:16:35.987882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.783  qpair failed and we were unable to recover it.
00:26:07.783  [2024-12-09 04:16:35.988029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.783  [2024-12-09 04:16:35.988056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.783  qpair failed and we were unable to recover it.
00:26:07.783  [2024-12-09 04:16:35.988155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.783  [2024-12-09 04:16:35.988182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.783  qpair failed and we were unable to recover it.
00:26:07.783  [2024-12-09 04:16:35.988316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.783  [2024-12-09 04:16:35.988414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.783  qpair failed and we were unable to recover it.
00:26:07.783  [2024-12-09 04:16:35.988691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.783  [2024-12-09 04:16:35.988726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.783  qpair failed and we were unable to recover it.
00:26:07.783  [2024-12-09 04:16:35.988838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.783  [2024-12-09 04:16:35.988873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.783  qpair failed and we were unable to recover it.
00:26:07.783  [2024-12-09 04:16:35.989061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.783  [2024-12-09 04:16:35.989086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.783  qpair failed and we were unable to recover it.
00:26:07.783  [2024-12-09 04:16:35.989209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.783  [2024-12-09 04:16:35.989235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.783  qpair failed and we were unable to recover it.
00:26:07.783  [2024-12-09 04:16:35.989462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.783  [2024-12-09 04:16:35.989529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.783  qpair failed and we were unable to recover it.
00:26:07.783  [2024-12-09 04:16:35.989754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.783  [2024-12-09 04:16:35.989818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.783  qpair failed and we were unable to recover it.
00:26:07.783  [2024-12-09 04:16:35.990063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.783  [2024-12-09 04:16:35.990127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.783  qpair failed and we were unable to recover it.
00:26:07.783  [2024-12-09 04:16:35.990416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.783  [2024-12-09 04:16:35.990482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.783  qpair failed and we were unable to recover it.
00:26:07.783  [2024-12-09 04:16:35.990667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.783  [2024-12-09 04:16:35.990732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.783  qpair failed and we were unable to recover it.
00:26:07.783  [2024-12-09 04:16:35.990959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.783  [2024-12-09 04:16:35.990992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.783  qpair failed and we were unable to recover it.
00:26:07.783  [2024-12-09 04:16:35.991094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.783  [2024-12-09 04:16:35.991127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.783  qpair failed and we were unable to recover it.
00:26:07.783  [2024-12-09 04:16:35.991386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.783  [2024-12-09 04:16:35.991417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.783  qpair failed and we were unable to recover it.
00:26:07.783  [2024-12-09 04:16:35.991540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:35.991571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.784  [2024-12-09 04:16:35.991714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:35.991747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.784  [2024-12-09 04:16:35.991855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:35.991889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.784  [2024-12-09 04:16:35.991996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:35.992023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.784  [2024-12-09 04:16:35.992115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:35.992173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.784  [2024-12-09 04:16:35.992425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:35.992491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.784  [2024-12-09 04:16:35.992750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:35.992815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.784  [2024-12-09 04:16:35.993080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:35.993107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.784  [2024-12-09 04:16:35.993222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:35.993249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.784  [2024-12-09 04:16:35.993404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:35.993471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.784  [2024-12-09 04:16:35.993761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:35.993827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.784  [2024-12-09 04:16:35.994089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:35.994154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.784  [2024-12-09 04:16:35.994362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:35.994429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.784  [2024-12-09 04:16:35.994674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:35.994709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.784  [2024-12-09 04:16:35.994897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:35.994924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.784  [2024-12-09 04:16:35.995009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:35.995036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.784  [2024-12-09 04:16:35.995112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:35.995139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.784  [2024-12-09 04:16:35.995239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:35.995285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.784  [2024-12-09 04:16:35.995470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:35.995540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.784  [2024-12-09 04:16:35.995744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:35.995810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.784  [2024-12-09 04:16:35.996106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:35.996170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.784  [2024-12-09 04:16:35.996445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:35.996512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.784  [2024-12-09 04:16:35.996767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:35.996793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.784  [2024-12-09 04:16:35.996904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:35.996930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.784  [2024-12-09 04:16:35.997103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:35.997129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.784  [2024-12-09 04:16:35.997280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:35.997308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.784  [2024-12-09 04:16:35.997516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:35.997581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.784  [2024-12-09 04:16:35.997878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:35.997942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.784  [2024-12-09 04:16:35.998194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:35.998262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.784  [2024-12-09 04:16:35.998586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:35.998651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.784  [2024-12-09 04:16:35.998954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:35.999018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.784  [2024-12-09 04:16:35.999318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:35.999385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.784  [2024-12-09 04:16:35.999633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:35.999660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.784  [2024-12-09 04:16:35.999748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:35.999775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.784  [2024-12-09 04:16:35.999936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:36.000001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.784  [2024-12-09 04:16:36.000286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:36.000313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.784  [2024-12-09 04:16:36.000421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:36.000445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.784  [2024-12-09 04:16:36.000529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:36.000556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.784  [2024-12-09 04:16:36.000675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:36.000701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.784  [2024-12-09 04:16:36.000833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:36.000898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.784  [2024-12-09 04:16:36.001182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.784  [2024-12-09 04:16:36.001260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.784  qpair failed and we were unable to recover it.
00:26:07.787  [preceding two messages repeated; connect() failed (errno = 111) retries continue unchanged through 2024-12-09 04:16:36.025]
00:26:07.787  [2024-12-09 04:16:36.025293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.787  qpair failed and we were unable to recover it.
00:26:07.787  [2024-12-09 04:16:36.025486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.787  [2024-12-09 04:16:36.025519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.787  qpair failed and we were unable to recover it.
00:26:07.787  [2024-12-09 04:16:36.025654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.787  [2024-12-09 04:16:36.025688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.787  qpair failed and we were unable to recover it.
00:26:07.787  [2024-12-09 04:16:36.025901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.787  [2024-12-09 04:16:36.025967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.787  qpair failed and we were unable to recover it.
00:26:07.787  [2024-12-09 04:16:36.026205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.787  [2024-12-09 04:16:36.026297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.787  qpair failed and we were unable to recover it.
00:26:07.787  [2024-12-09 04:16:36.026457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.787  [2024-12-09 04:16:36.026490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.787  qpair failed and we were unable to recover it.
00:26:07.787  [2024-12-09 04:16:36.026638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.787  [2024-12-09 04:16:36.026672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.787  qpair failed and we were unable to recover it.
00:26:07.787  [2024-12-09 04:16:36.026837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.787  [2024-12-09 04:16:36.026901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.787  qpair failed and we were unable to recover it.
00:26:07.787  [2024-12-09 04:16:36.027190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.787  [2024-12-09 04:16:36.027253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.787  qpair failed and we were unable to recover it.
00:26:07.787  [2024-12-09 04:16:36.027480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.787  [2024-12-09 04:16:36.027515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.787  qpair failed and we were unable to recover it.
00:26:07.787  [2024-12-09 04:16:36.027623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.787  [2024-12-09 04:16:36.027671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.787  qpair failed and we were unable to recover it.
00:26:07.787  [2024-12-09 04:16:36.027807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.787  [2024-12-09 04:16:36.027840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.787  qpair failed and we were unable to recover it.
00:26:07.787  [2024-12-09 04:16:36.028077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.787  [2024-12-09 04:16:36.028141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.787  qpair failed and we were unable to recover it.
00:26:07.787  [2024-12-09 04:16:36.028387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.787  [2024-12-09 04:16:36.028421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.787  qpair failed and we were unable to recover it.
00:26:07.787  [2024-12-09 04:16:36.028572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.787  [2024-12-09 04:16:36.028642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.787  qpair failed and we were unable to recover it.
00:26:07.787  [2024-12-09 04:16:36.028907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.787  [2024-12-09 04:16:36.028940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.787  qpair failed and we were unable to recover it.
00:26:07.787  [2024-12-09 04:16:36.029051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.787  [2024-12-09 04:16:36.029084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.787  qpair failed and we were unable to recover it.
00:26:07.787  [2024-12-09 04:16:36.029180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.787  [2024-12-09 04:16:36.029213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.787  qpair failed and we were unable to recover it.
00:26:07.787  [2024-12-09 04:16:36.029356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.787  [2024-12-09 04:16:36.029390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.787  qpair failed and we were unable to recover it.
00:26:07.787  [2024-12-09 04:16:36.029547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.787  [2024-12-09 04:16:36.029581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.787  qpair failed and we were unable to recover it.
00:26:07.787  [2024-12-09 04:16:36.029722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.787  [2024-12-09 04:16:36.029771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.787  qpair failed and we were unable to recover it.
00:26:07.787  [2024-12-09 04:16:36.029908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.787  [2024-12-09 04:16:36.029941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.787  qpair failed and we were unable to recover it.
00:26:07.787  [2024-12-09 04:16:36.030099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.787  [2024-12-09 04:16:36.030132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.787  qpair failed and we were unable to recover it.
00:26:07.787  [2024-12-09 04:16:36.030346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.787  [2024-12-09 04:16:36.030381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.787  qpair failed and we were unable to recover it.
00:26:07.787  [2024-12-09 04:16:36.030491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.787  [2024-12-09 04:16:36.030525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.787  qpair failed and we were unable to recover it.
00:26:07.787  [2024-12-09 04:16:36.030655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.030689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.030933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.030997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.031251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.031339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.031484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.031518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.031725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.031789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.032086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.032150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.032416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.032451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.032597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.032631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.032860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.032923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.033168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.033235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.033405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.033440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.033563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.033597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.033736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.033797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.033998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.034064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.034349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.034383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.034500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.034534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.034663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.034698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.034903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.034937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.035132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.035198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.035387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.035428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.035543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.035577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.035710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.035779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.036020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.036085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.036355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.036389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.036534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.036569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.036792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.036856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.037137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.037202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.037474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.037509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.037717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.037781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.038067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.038129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.038425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.038492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.038781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.038815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.038955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.038990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.039206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.039286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.039523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.039588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.039834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.039898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.040097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.040161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.040411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.040476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.040718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.040783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.041061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.041127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.041312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.041378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.041666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.041730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.788  [2024-12-09 04:16:36.041917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.788  [2024-12-09 04:16:36.041981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.788  qpair failed and we were unable to recover it.
00:26:07.789  [2024-12-09 04:16:36.042269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.789  [2024-12-09 04:16:36.042352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.789  qpair failed and we were unable to recover it.
00:26:07.789  [2024-12-09 04:16:36.042689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.789  [2024-12-09 04:16:36.042753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.789  qpair failed and we were unable to recover it.
00:26:07.789  [2024-12-09 04:16:36.043015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.789  [2024-12-09 04:16:36.043078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.789  qpair failed and we were unable to recover it.
00:26:07.789  [2024-12-09 04:16:36.043379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.789  [2024-12-09 04:16:36.043433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.789  qpair failed and we were unable to recover it.
00:26:07.789  [2024-12-09 04:16:36.043738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.789  [2024-12-09 04:16:36.043803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.789  qpair failed and we were unable to recover it.
00:26:07.789  [2024-12-09 04:16:36.044095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.789  [2024-12-09 04:16:36.044159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.789  qpair failed and we were unable to recover it.
00:26:07.789  [2024-12-09 04:16:36.044412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.789  [2024-12-09 04:16:36.044477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.789  qpair failed and we were unable to recover it.
00:26:07.789  [2024-12-09 04:16:36.044694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.789  [2024-12-09 04:16:36.044759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.789  qpair failed and we were unable to recover it.
00:26:07.789  [2024-12-09 04:16:36.044972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.789  [2024-12-09 04:16:36.045035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.789  qpair failed and we were unable to recover it.
00:26:07.789  [2024-12-09 04:16:36.045316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.789  [2024-12-09 04:16:36.045382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.789  qpair failed and we were unable to recover it.
00:26:07.789  [2024-12-09 04:16:36.045696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.789  [2024-12-09 04:16:36.045761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.789  qpair failed and we were unable to recover it.
00:26:07.789  [2024-12-09 04:16:36.046006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.789  [2024-12-09 04:16:36.046073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.789  qpair failed and we were unable to recover it.
00:26:07.789  [2024-12-09 04:16:36.046367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.789  [2024-12-09 04:16:36.046433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.789  qpair failed and we were unable to recover it.
00:26:07.789  [2024-12-09 04:16:36.046724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.789  [2024-12-09 04:16:36.046788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.789  qpair failed and we were unable to recover it.
00:26:07.791  [... the same posix_sock_create connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error / "qpair failed and we were unable to recover it." sequence for tqpair=0x7fe5c0000b90 (addr=10.0.0.2, port=4420) repeats continuously through timestamp 2024-12-09 04:16:36.079968 ...]
00:26:07.791  [2024-12-09 04:16:36.080031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.791  qpair failed and we were unable to recover it.
00:26:07.791  [2024-12-09 04:16:36.080250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.791  [2024-12-09 04:16:36.080331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.791  qpair failed and we were unable to recover it.
00:26:07.791  [2024-12-09 04:16:36.080573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.791  [2024-12-09 04:16:36.080639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.791  qpair failed and we were unable to recover it.
00:26:07.791  [2024-12-09 04:16:36.080886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.791  [2024-12-09 04:16:36.080950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.791  qpair failed and we were unable to recover it.
00:26:07.791  [2024-12-09 04:16:36.081164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.791  [2024-12-09 04:16:36.081227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.791  qpair failed and we were unable to recover it.
00:26:07.791  [2024-12-09 04:16:36.081460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.791  [2024-12-09 04:16:36.081527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.791  qpair failed and we were unable to recover it.
00:26:07.791  [2024-12-09 04:16:36.081813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.791  [2024-12-09 04:16:36.081876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.791  qpair failed and we were unable to recover it.
00:26:07.791  [2024-12-09 04:16:36.082140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.791  [2024-12-09 04:16:36.082206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.791  qpair failed and we were unable to recover it.
00:26:07.791  [2024-12-09 04:16:36.082473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.791  [2024-12-09 04:16:36.082540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.791  qpair failed and we were unable to recover it.
00:26:07.791  [2024-12-09 04:16:36.082829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.791  [2024-12-09 04:16:36.082894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.791  qpair failed and we were unable to recover it.
00:26:07.791  [2024-12-09 04:16:36.083189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.791  [2024-12-09 04:16:36.083253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.791  qpair failed and we were unable to recover it.
00:26:07.791  [2024-12-09 04:16:36.083518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.083583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.083869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.083933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.084173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.084238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.084497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.084562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.084801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.084865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.085128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.085192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.085508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.085574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.085770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.085835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.086078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.086143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.086446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.086523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.086824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.086889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.087173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.087237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.087502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.087567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.087860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.087923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.088158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.088223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.088449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.088514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.088765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.088829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.089070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.089133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.089385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.089450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.089738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.089801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.090010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.090073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.090303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.090370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.090660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.090723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.091023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.091088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.091338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.091412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.091700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.091764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.091982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.092046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.092336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.092402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.092695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.092760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.093049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.093112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.093357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.093421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.093717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.093782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.094035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.094102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.094400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.094466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.094757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.094821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.095114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.095177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.095508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.095576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.095870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.095934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.096185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.096249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.096556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.096621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.096808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.096872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.097119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.097183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.097492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.097557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.792  [2024-12-09 04:16:36.097808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.792  [2024-12-09 04:16:36.097876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.792  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.098128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.098194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.098483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.098550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.098844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.098908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.099205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.099270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.099544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.099608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.099858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.099935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.100232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.100314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.100604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.100669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.100930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.100994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.101303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.101368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.101665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.101729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.102008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.102074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.102296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.102363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.102650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.102715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.103003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.103067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.103317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.103382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.103673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.103738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.104033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.104097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.104296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.104363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.104634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.104699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.104989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.105054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.105315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.105379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.105632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.105695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.105944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.106007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.106321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.106386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.106674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.106738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.107033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.107097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.107392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.107457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.107702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.107766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.108049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.108114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.108393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.108459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.108651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.108716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.108943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.109009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.109300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.109365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.109589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.109653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.109895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.109957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.110172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.110236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.110549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.110613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.110912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.110976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.111179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.111245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.111539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.111603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.111852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.111917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.793  [2024-12-09 04:16:36.112106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.793  [2024-12-09 04:16:36.112170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.793  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.112470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.112535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.112750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.112817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.113048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.113123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.113421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.113487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.113681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.113745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.113932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.113998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.114247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.114330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.114619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.114683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.114929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.114993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.115270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.115346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.115587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.115654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.115945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.116009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.116249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.116327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.116571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.116637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.116895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.116960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.117244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.117327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.117639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.117704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.117990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.118053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.118305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.118373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.118558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.118622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.118914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.118978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.119268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.119352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.119556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.119619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.119869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.119932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.120131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.120199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.120506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.120570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.120815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.120882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.121177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.121242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.121522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.121586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.121833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.121899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.122147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.122212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.122405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.122470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.122710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.122774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.123037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.123102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.123408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.123474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.123718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.123784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.124073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.124136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.124388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.124453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.124743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.124807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.125102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.125165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.125469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.125535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.125830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.794  [2024-12-09 04:16:36.125893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.794  qpair failed and we were unable to recover it.
00:26:07.794  [2024-12-09 04:16:36.126183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.795  [2024-12-09 04:16:36.126258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.795  qpair failed and we were unable to recover it.
00:26:07.795  [2024-12-09 04:16:36.126546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.795  [2024-12-09 04:16:36.126611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.795  qpair failed and we were unable to recover it.
00:26:07.795  [2024-12-09 04:16:36.126904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.795  [2024-12-09 04:16:36.126967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.795  qpair failed and we were unable to recover it.
00:26:07.795  [2024-12-09 04:16:36.127287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.795  [2024-12-09 04:16:36.127353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.795  qpair failed and we were unable to recover it.
00:26:07.795  [2024-12-09 04:16:36.127642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.795  [2024-12-09 04:16:36.127706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.795  qpair failed and we were unable to recover it.
00:26:07.795  [2024-12-09 04:16:36.127949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.795  [2024-12-09 04:16:36.128013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.795  qpair failed and we were unable to recover it.
00:26:07.795  [2024-12-09 04:16:36.128302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.795  [2024-12-09 04:16:36.128367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.795  qpair failed and we were unable to recover it.
00:26:07.795  [2024-12-09 04:16:36.128615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.795  [2024-12-09 04:16:36.128680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.795  qpair failed and we were unable to recover it.
00:26:07.795  [2024-12-09 04:16:36.128875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.795  [2024-12-09 04:16:36.128942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.795  qpair failed and we were unable to recover it.
00:26:07.795  [2024-12-09 04:16:36.129239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.795  [2024-12-09 04:16:36.129338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.795  qpair failed and we were unable to recover it.
00:26:07.795  [2024-12-09 04:16:36.129629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.795  [2024-12-09 04:16:36.129694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.795  qpair failed and we were unable to recover it.
00:26:07.795  [2024-12-09 04:16:36.129940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.795  [2024-12-09 04:16:36.130004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.795  qpair failed and we were unable to recover it.
00:26:07.795  [2024-12-09 04:16:36.130256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.795  [2024-12-09 04:16:36.130339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.795  qpair failed and we were unable to recover it.
00:26:07.795  [2024-12-09 04:16:36.130628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.795  [2024-12-09 04:16:36.130692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.795  qpair failed and we were unable to recover it.
00:26:07.795  [2024-12-09 04:16:36.130992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.795  [2024-12-09 04:16:36.131056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.795  qpair failed and we were unable to recover it.
00:26:07.795  [2024-12-09 04:16:36.131352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.795  [2024-12-09 04:16:36.131418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.795  qpair failed and we were unable to recover it.
00:26:07.795  [2024-12-09 04:16:36.131710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.795  [2024-12-09 04:16:36.131774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.795  qpair failed and we were unable to recover it.
00:26:07.795  [2024-12-09 04:16:36.132060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.795  [2024-12-09 04:16:36.132124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.795  qpair failed and we were unable to recover it.
00:26:07.795  [2024-12-09 04:16:36.132360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.795  [2024-12-09 04:16:36.132426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.795  qpair failed and we were unable to recover it.
00:26:07.795  [2024-12-09 04:16:36.132675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.795  [2024-12-09 04:16:36.132739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.795  qpair failed and we were unable to recover it.
00:26:07.795  [2024-12-09 04:16:36.132995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.795  [2024-12-09 04:16:36.133058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.795  qpair failed and we were unable to recover it.
00:26:07.795  [2024-12-09 04:16:36.133244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.795  [2024-12-09 04:16:36.133325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.795  qpair failed and we were unable to recover it.
00:26:07.795  [2024-12-09 04:16:36.133611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.795  [2024-12-09 04:16:36.133676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.795  qpair failed and we were unable to recover it.
00:26:07.795  [2024-12-09 04:16:36.133924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.795  [2024-12-09 04:16:36.133988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.795  qpair failed and we were unable to recover it.
00:26:07.795  [2024-12-09 04:16:36.134294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.795  [2024-12-09 04:16:36.134359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.795  qpair failed and we were unable to recover it.
00:26:07.795  [2024-12-09 04:16:36.134655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.795  [2024-12-09 04:16:36.134720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.795  qpair failed and we were unable to recover it.
00:26:07.795  [2024-12-09 04:16:36.134961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.795  [2024-12-09 04:16:36.135025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.795  qpair failed and we were unable to recover it.
00:26:07.795  [2024-12-09 04:16:36.135338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.795  [2024-12-09 04:16:36.135404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.795  qpair failed and we were unable to recover it.
00:26:07.795  [2024-12-09 04:16:36.135656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.795  [2024-12-09 04:16:36.135721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.795  qpair failed and we were unable to recover it.
00:26:07.795  [2024-12-09 04:16:36.135969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.795  [2024-12-09 04:16:36.136034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.795  qpair failed and we were unable to recover it.
00:26:07.795  [2024-12-09 04:16:36.136306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.795  [2024-12-09 04:16:36.136372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.795  qpair failed and we were unable to recover it.
00:26:07.795  [2024-12-09 04:16:36.136575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.795  [2024-12-09 04:16:36.136639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.795  qpair failed and we were unable to recover it.
00:26:07.795  [2024-12-09 04:16:36.136927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.795  [2024-12-09 04:16:36.136990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.795  qpair failed and we were unable to recover it.
00:26:07.795  [2024-12-09 04:16:36.137320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.795  [2024-12-09 04:16:36.137386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.795  qpair failed and we were unable to recover it.
00:26:07.798  [2024-12-09 04:16:36.170139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.798  [2024-12-09 04:16:36.170204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.798  qpair failed and we were unable to recover it.
00:26:07.798  [2024-12-09 04:16:36.170405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.798  [2024-12-09 04:16:36.170440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.798  qpair failed and we were unable to recover it.
00:26:07.798  [2024-12-09 04:16:36.170539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.798  [2024-12-09 04:16:36.170573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.798  qpair failed and we were unable to recover it.
00:26:07.798  [2024-12-09 04:16:36.170740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.798  [2024-12-09 04:16:36.170774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.798  qpair failed and we were unable to recover it.
00:26:07.798  [2024-12-09 04:16:36.170894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.798  [2024-12-09 04:16:36.170928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.798  qpair failed and we were unable to recover it.
00:26:07.798  [2024-12-09 04:16:36.171078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.798  [2024-12-09 04:16:36.171114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.798  qpair failed and we were unable to recover it.
00:26:07.798  [2024-12-09 04:16:36.171287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.798  [2024-12-09 04:16:36.171324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.798  qpair failed and we were unable to recover it.
00:26:07.798  [2024-12-09 04:16:36.171439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.798  [2024-12-09 04:16:36.171474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.798  qpair failed and we were unable to recover it.
00:26:07.798  [2024-12-09 04:16:36.171677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.798  [2024-12-09 04:16:36.171742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.798  qpair failed and we were unable to recover it.
00:26:07.798  [2024-12-09 04:16:36.172042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.798  [2024-12-09 04:16:36.172105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.798  qpair failed and we were unable to recover it.
00:26:07.798  [2024-12-09 04:16:36.172370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.798  [2024-12-09 04:16:36.172405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.798  qpair failed and we were unable to recover it.
00:26:07.798  [2024-12-09 04:16:36.172531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.798  [2024-12-09 04:16:36.172571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.798  qpair failed and we were unable to recover it.
00:26:07.798  [2024-12-09 04:16:36.172849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.798  [2024-12-09 04:16:36.172915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.798  qpair failed and we were unable to recover it.
00:26:07.798  [2024-12-09 04:16:36.173212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.798  [2024-12-09 04:16:36.173278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.798  qpair failed and we were unable to recover it.
00:26:07.798  [2024-12-09 04:16:36.173441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.798  [2024-12-09 04:16:36.173475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.798  qpair failed and we were unable to recover it.
00:26:07.798  [2024-12-09 04:16:36.173639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.798  [2024-12-09 04:16:36.173674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.798  qpair failed and we were unable to recover it.
00:26:07.798  [2024-12-09 04:16:36.173837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.798  [2024-12-09 04:16:36.173901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.798  qpair failed and we were unable to recover it.
00:26:07.798  [2024-12-09 04:16:36.174190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.798  [2024-12-09 04:16:36.174256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.798  qpair failed and we were unable to recover it.
00:26:07.798  [2024-12-09 04:16:36.174479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.798  [2024-12-09 04:16:36.174514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.798  qpair failed and we were unable to recover it.
00:26:07.798  [2024-12-09 04:16:36.174674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.798  [2024-12-09 04:16:36.174740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.798  qpair failed and we were unable to recover it.
00:26:07.798  [2024-12-09 04:16:36.175028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.798  [2024-12-09 04:16:36.175093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.798  qpair failed and we were unable to recover it.
00:26:07.798  [2024-12-09 04:16:36.175327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.798  [2024-12-09 04:16:36.175361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.798  qpair failed and we were unable to recover it.
00:26:07.798  [2024-12-09 04:16:36.175504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.798  [2024-12-09 04:16:36.175537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.798  qpair failed and we were unable to recover it.
00:26:07.798  [2024-12-09 04:16:36.175696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.798  [2024-12-09 04:16:36.175731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.798  qpair failed and we were unable to recover it.
00:26:07.798  [2024-12-09 04:16:36.175897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.798  [2024-12-09 04:16:36.175931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.798  qpair failed and we were unable to recover it.
00:26:07.798  [2024-12-09 04:16:36.176095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.798  [2024-12-09 04:16:36.176129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.798  qpair failed and we were unable to recover it.
00:26:07.798  [2024-12-09 04:16:36.176282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.798  [2024-12-09 04:16:36.176317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.798  qpair failed and we were unable to recover it.
00:26:07.798  [2024-12-09 04:16:36.176494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.798  [2024-12-09 04:16:36.176528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.798  qpair failed and we were unable to recover it.
00:26:07.798  [2024-12-09 04:16:36.176750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.798  [2024-12-09 04:16:36.176813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.798  qpair failed and we were unable to recover it.
00:26:07.798  [2024-12-09 04:16:36.177063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.798  [2024-12-09 04:16:36.177128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.798  qpair failed and we were unable to recover it.
00:26:07.798  [2024-12-09 04:16:36.177332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.798  [2024-12-09 04:16:36.177366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.798  qpair failed and we were unable to recover it.
00:26:07.798  [2024-12-09 04:16:36.177482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.798  [2024-12-09 04:16:36.177516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.798  qpair failed and we were unable to recover it.
00:26:07.798  [2024-12-09 04:16:36.177684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.798  [2024-12-09 04:16:36.177718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.798  qpair failed and we were unable to recover it.
00:26:07.798  [2024-12-09 04:16:36.177866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.798  [2024-12-09 04:16:36.177903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.798  qpair failed and we were unable to recover it.
00:26:07.798  [2024-12-09 04:16:36.178012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.799  [2024-12-09 04:16:36.178046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.799  qpair failed and we were unable to recover it.
00:26:07.799  [2024-12-09 04:16:36.178188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.799  [2024-12-09 04:16:36.178222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.799  qpair failed and we were unable to recover it.
00:26:07.799  [2024-12-09 04:16:36.178431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.799  [2024-12-09 04:16:36.178465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.799  qpair failed and we were unable to recover it.
00:26:07.799  [2024-12-09 04:16:36.178593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.799  [2024-12-09 04:16:36.178626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.799  qpair failed and we were unable to recover it.
00:26:07.799  [2024-12-09 04:16:36.178743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.799  [2024-12-09 04:16:36.178801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.799  qpair failed and we were unable to recover it.
00:26:07.799  [2024-12-09 04:16:36.178989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.799  [2024-12-09 04:16:36.179057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.799  qpair failed and we were unable to recover it.
00:26:07.799  [2024-12-09 04:16:36.179284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.799  [2024-12-09 04:16:36.179324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.799  qpair failed and we were unable to recover it.
00:26:07.799  [2024-12-09 04:16:36.179430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.799  [2024-12-09 04:16:36.179466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.799  qpair failed and we were unable to recover it.
00:26:07.799  [2024-12-09 04:16:36.179569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.799  [2024-12-09 04:16:36.179603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.799  qpair failed and we were unable to recover it.
00:26:07.799  [2024-12-09 04:16:36.179757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.799  [2024-12-09 04:16:36.179792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.799  qpair failed and we were unable to recover it.
00:26:07.799  [2024-12-09 04:16:36.179934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.799  [2024-12-09 04:16:36.179968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.799  qpair failed and we were unable to recover it.
00:26:07.799  [2024-12-09 04:16:36.180070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.799  [2024-12-09 04:16:36.180105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.799  qpair failed and we were unable to recover it.
00:26:07.799  [2024-12-09 04:16:36.180215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.799  [2024-12-09 04:16:36.180250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.799  qpair failed and we were unable to recover it.
00:26:07.799  [2024-12-09 04:16:36.180406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.799  [2024-12-09 04:16:36.180438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.799  qpair failed and we were unable to recover it.
00:26:07.799  [2024-12-09 04:16:36.180550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.799  [2024-12-09 04:16:36.180594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.799  qpair failed and we were unable to recover it.
00:26:07.799  [2024-12-09 04:16:36.180726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.799  [2024-12-09 04:16:36.180759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.799  qpair failed and we were unable to recover it.
00:26:07.799  [2024-12-09 04:16:36.180886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.799  [2024-12-09 04:16:36.180918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.799  qpair failed and we were unable to recover it.
00:26:07.799  [2024-12-09 04:16:36.181099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.799  [2024-12-09 04:16:36.181164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.799  qpair failed and we were unable to recover it.
00:26:07.799  [2024-12-09 04:16:36.181409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.799  [2024-12-09 04:16:36.181443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.799  qpair failed and we were unable to recover it.
00:26:07.799  [2024-12-09 04:16:36.181545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.799  [2024-12-09 04:16:36.181583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.799  qpair failed and we were unable to recover it.
00:26:07.799  [2024-12-09 04:16:36.181703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.799  [2024-12-09 04:16:36.181736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.799  qpair failed and we were unable to recover it.
00:26:07.799  [2024-12-09 04:16:36.181887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.799  [2024-12-09 04:16:36.181919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.799  qpair failed and we were unable to recover it.
00:26:07.799  [2024-12-09 04:16:36.182103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.799  [2024-12-09 04:16:36.182168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.799  qpair failed and we were unable to recover it.
00:26:07.799  [2024-12-09 04:16:36.182392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.799  [2024-12-09 04:16:36.182427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.799  qpair failed and we were unable to recover it.
00:26:07.799  [2024-12-09 04:16:36.182572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.799  [2024-12-09 04:16:36.182606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.799  qpair failed and we were unable to recover it.
00:26:07.799  [2024-12-09 04:16:36.182803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.799  [2024-12-09 04:16:36.182867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.799  qpair failed and we were unable to recover it.
00:26:07.799  [2024-12-09 04:16:36.183053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.799  [2024-12-09 04:16:36.183116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.799  qpair failed and we were unable to recover it.
00:26:07.799  [2024-12-09 04:16:36.183357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.799  [2024-12-09 04:16:36.183392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.799  qpair failed and we were unable to recover it.
00:26:07.799  [2024-12-09 04:16:36.183577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.799  [2024-12-09 04:16:36.183653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.799  qpair failed and we were unable to recover it.
00:26:07.799  [2024-12-09 04:16:36.183950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.799  [2024-12-09 04:16:36.184013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.799  qpair failed and we were unable to recover it.
00:26:07.799  [2024-12-09 04:16:36.184250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.799  [2024-12-09 04:16:36.184302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.799  qpair failed and we were unable to recover it.
00:26:07.799  [2024-12-09 04:16:36.184472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.799  [2024-12-09 04:16:36.184506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.799  qpair failed and we were unable to recover it.
00:26:07.799  [2024-12-09 04:16:36.184712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.799  [2024-12-09 04:16:36.184775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.799  qpair failed and we were unable to recover it.
00:26:07.799  [2024-12-09 04:16:36.185049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.799  [2024-12-09 04:16:36.185113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.799  qpair failed and we were unable to recover it.
00:26:07.799  [2024-12-09 04:16:36.185362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.799  [2024-12-09 04:16:36.185396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.799  qpair failed and we were unable to recover it.
00:26:07.799  [2024-12-09 04:16:36.185540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.799  [2024-12-09 04:16:36.185611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.799  qpair failed and we were unable to recover it.
00:26:07.799  [2024-12-09 04:16:36.185856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.799  [2024-12-09 04:16:36.185889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.799  qpair failed and we were unable to recover it.
00:26:07.799  [2024-12-09 04:16:36.185995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.799  [2024-12-09 04:16:36.186053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.799  qpair failed and we were unable to recover it.
00:26:07.799  [2024-12-09 04:16:36.186290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.799  [2024-12-09 04:16:36.186325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.800  qpair failed and we were unable to recover it.
00:26:07.800  [2024-12-09 04:16:36.186469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.800  [2024-12-09 04:16:36.186508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.800  qpair failed and we were unable to recover it.
00:26:07.800  [2024-12-09 04:16:36.186686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.800  [2024-12-09 04:16:36.186751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.800  qpair failed and we were unable to recover it.
00:26:07.800  [2024-12-09 04:16:36.187008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.800  [2024-12-09 04:16:36.187072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.800  qpair failed and we were unable to recover it.
00:26:07.800  [2024-12-09 04:16:36.187292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.800  [2024-12-09 04:16:36.187327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.800  qpair failed and we were unable to recover it.
00:26:07.800  [2024-12-09 04:16:36.187497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.800  [2024-12-09 04:16:36.187530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.800  qpair failed and we were unable to recover it.
00:26:07.800  [2024-12-09 04:16:36.187666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.800  [2024-12-09 04:16:36.187701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.800  qpair failed and we were unable to recover it.
00:26:07.800  [2024-12-09 04:16:36.187835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.800  [2024-12-09 04:16:36.187870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.800  qpair failed and we were unable to recover it.
00:26:07.800  [2024-12-09 04:16:36.188041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.800  [2024-12-09 04:16:36.188076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.800  qpair failed and we were unable to recover it.
00:26:07.800  [2024-12-09 04:16:36.188239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.800  [2024-12-09 04:16:36.188284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.800  qpair failed and we were unable to recover it.
00:26:07.800  [2024-12-09 04:16:36.188416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.800  [2024-12-09 04:16:36.188467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.800  qpair failed and we were unable to recover it.
00:26:07.800  [... identical connect()/qpair error sequence repeated 11 more times for tqpair=0x1818fa0 (04:16:36.188 to 04:16:36.191) ...]
00:26:07.800  [2024-12-09 04:16:36.191181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.800  [2024-12-09 04:16:36.191214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.800  qpair failed and we were unable to recover it.
00:26:07.800  [2024-12-09 04:16:36.191418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.800  [2024-12-09 04:16:36.191475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.800  qpair failed and we were unable to recover it.
00:26:07.800  [2024-12-09 04:16:36.191602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.800  [2024-12-09 04:16:36.191679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.800  qpair failed and we were unable to recover it.
00:26:07.800  [2024-12-09 04:16:36.191854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.800  [2024-12-09 04:16:36.191908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.800  qpair failed and we were unable to recover it.
00:26:07.800  [2024-12-09 04:16:36.192045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.800  [2024-12-09 04:16:36.192078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.800  qpair failed and we were unable to recover it.
00:26:07.800  [2024-12-09 04:16:36.192245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.800  [2024-12-09 04:16:36.192291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.800  qpair failed and we were unable to recover it.
00:26:07.800  [2024-12-09 04:16:36.192451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.800  [2024-12-09 04:16:36.192505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.800  qpair failed and we were unable to recover it.
00:26:07.800  [2024-12-09 04:16:36.192678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.800  [2024-12-09 04:16:36.192736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.800  qpair failed and we were unable to recover it.
00:26:07.800  [2024-12-09 04:16:36.192927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.800  [2024-12-09 04:16:36.192980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.800  qpair failed and we were unable to recover it.
00:26:07.800  [2024-12-09 04:16:36.193124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.800  [2024-12-09 04:16:36.193159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.800  qpair failed and we were unable to recover it.
00:26:07.800  [2024-12-09 04:16:36.193267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.800  [2024-12-09 04:16:36.193311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.800  qpair failed and we were unable to recover it.
00:26:07.800  [2024-12-09 04:16:36.193428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.800  [2024-12-09 04:16:36.193463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.800  qpair failed and we were unable to recover it.
00:26:07.800  [2024-12-09 04:16:36.193582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.800  [2024-12-09 04:16:36.193616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.800  qpair failed and we were unable to recover it.
00:26:07.800  [2024-12-09 04:16:36.193751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.800  [2024-12-09 04:16:36.193785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.800  qpair failed and we were unable to recover it.
00:26:07.800  [2024-12-09 04:16:36.193929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.800  [2024-12-09 04:16:36.193964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.800  qpair failed and we were unable to recover it.
00:26:07.800  [2024-12-09 04:16:36.194094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.800  [2024-12-09 04:16:36.194127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.800  qpair failed and we were unable to recover it.
00:26:07.800  [2024-12-09 04:16:36.194246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.800  [2024-12-09 04:16:36.194293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.800  qpair failed and we were unable to recover it.
00:26:07.800  [2024-12-09 04:16:36.194403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.800  [2024-12-09 04:16:36.194442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.800  qpair failed and we were unable to recover it.
00:26:07.800  [2024-12-09 04:16:36.194583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.800  [2024-12-09 04:16:36.194617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.800  qpair failed and we were unable to recover it.
00:26:07.800  [2024-12-09 04:16:36.194748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.800  [2024-12-09 04:16:36.194782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.800  qpair failed and we were unable to recover it.
00:26:07.800  [2024-12-09 04:16:36.194915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.800  [2024-12-09 04:16:36.194948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.800  qpair failed and we were unable to recover it.
00:26:07.800  [2024-12-09 04:16:36.195061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.800  [2024-12-09 04:16:36.195094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.800  qpair failed and we were unable to recover it.
00:26:07.800  [2024-12-09 04:16:36.195205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.195239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.195430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.195465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.195613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.195646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.195774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.195807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.195913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.195947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.196046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.196081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.196246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.196292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.196409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.196443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.196594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.196627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.196821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.196855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.197032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.197072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.197213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.197246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.197388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.197422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.197586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.197638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.197804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.197838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.197983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.198016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.198155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.198188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.198320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.198355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.198476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.198510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.198657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.198690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.198858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.198892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.199028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.199061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.199201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.199235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.199404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.199440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.199542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.199579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.199707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.199740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.199873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.199907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.200052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.200086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.200194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.200227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.200348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.200382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.200508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.200543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.200697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.200730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.200838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.200872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.201010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.201046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.201187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.201221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.201343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.201377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.201479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.201513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.201692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.201729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.201826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.201860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.202002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.202036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.202194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.202228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.202357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.202391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.202505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.202540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.202689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.202723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.801  qpair failed and we were unable to recover it.
00:26:07.801  [2024-12-09 04:16:36.202820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.801  [2024-12-09 04:16:36.202854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.802  qpair failed and we were unable to recover it.
00:26:07.802  [2024-12-09 04:16:36.203044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.802  [2024-12-09 04:16:36.203114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.802  qpair failed and we were unable to recover it.
00:26:07.802  [2024-12-09 04:16:36.203258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.802  [2024-12-09 04:16:36.203326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.802  qpair failed and we were unable to recover it.
00:26:07.802  [ ... preceding three-line error pattern (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fe5c0000b90, addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeated 9 more times, 2024-12-09 04:16:36.203478 through 04:16:36.204802 ... ]
00:26:07.802  [2024-12-09 04:16:36.204946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.802  [2024-12-09 04:16:36.204981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.802  qpair failed and we were unable to recover it.
00:26:07.802  [2024-12-09 04:16:36.205099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.802  [2024-12-09 04:16:36.205136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.802  qpair failed and we were unable to recover it.
00:26:07.802  [ ... preceding three-line error pattern (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1818fa0, addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeated 11 more times, 2024-12-09 04:16:36.205279 through 04:16:36.206923 ... ]
00:26:07.802  [2024-12-09 04:16:36.207061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.802  [2024-12-09 04:16:36.207096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.802  qpair failed and we were unable to recover it.
00:26:07.802  [2024-12-09 04:16:36.207252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.802  [2024-12-09 04:16:36.207317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.802  qpair failed and we were unable to recover it.
00:26:07.802  [2024-12-09 04:16:36.207447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.802  [2024-12-09 04:16:36.207491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.802  qpair failed and we were unable to recover it.
00:26:07.802  [2024-12-09 04:16:36.207630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.802  [2024-12-09 04:16:36.207682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.802  qpair failed and we were unable to recover it.
00:26:07.802  [2024-12-09 04:16:36.207835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.802  [2024-12-09 04:16:36.207871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.802  qpair failed and we were unable to recover it.
00:26:07.802  [2024-12-09 04:16:36.207984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.802  [2024-12-09 04:16:36.208018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.802  qpair failed and we were unable to recover it.
00:26:07.802  [2024-12-09 04:16:36.208125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.802  [2024-12-09 04:16:36.208160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.802  qpair failed and we were unable to recover it.
00:26:07.802  [2024-12-09 04:16:36.208280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.802  [2024-12-09 04:16:36.208317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.802  qpair failed and we were unable to recover it.
00:26:07.802  [2024-12-09 04:16:36.208428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.802  [2024-12-09 04:16:36.208462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.802  qpair failed and we were unable to recover it.
00:26:07.802  [2024-12-09 04:16:36.208569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.802  [2024-12-09 04:16:36.208608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.802  qpair failed and we were unable to recover it.
00:26:07.802  [2024-12-09 04:16:36.208736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.802  [2024-12-09 04:16:36.208769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.802  qpair failed and we were unable to recover it.
00:26:07.802  [2024-12-09 04:16:36.208915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.802  [2024-12-09 04:16:36.208948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.802  qpair failed and we were unable to recover it.
00:26:07.802  [2024-12-09 04:16:36.209062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.802  [2024-12-09 04:16:36.209095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.802  qpair failed and we were unable to recover it.
00:26:07.802  [2024-12-09 04:16:36.209225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.802  [2024-12-09 04:16:36.209259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.802  qpair failed and we were unable to recover it.
00:26:07.802  [2024-12-09 04:16:36.209389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.802  [2024-12-09 04:16:36.209423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.802  qpair failed and we were unable to recover it.
00:26:07.802  [2024-12-09 04:16:36.209534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.802  [2024-12-09 04:16:36.209568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.802  qpair failed and we were unable to recover it.
00:26:07.802  [2024-12-09 04:16:36.209701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.802  [2024-12-09 04:16:36.209734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.802  qpair failed and we were unable to recover it.
00:26:07.802  [2024-12-09 04:16:36.209861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.802  [2024-12-09 04:16:36.209894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.802  qpair failed and we were unable to recover it.
00:26:07.802  [2024-12-09 04:16:36.209999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.802  [2024-12-09 04:16:36.210034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.802  qpair failed and we were unable to recover it.
00:26:07.802  [2024-12-09 04:16:36.210136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.802  [2024-12-09 04:16:36.210173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.802  qpair failed and we were unable to recover it.
00:26:07.802  [2024-12-09 04:16:36.210317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.802  [2024-12-09 04:16:36.210352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.802  qpair failed and we were unable to recover it.
00:26:07.802  [2024-12-09 04:16:36.210468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.802  [2024-12-09 04:16:36.210511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.802  qpair failed and we were unable to recover it.
00:26:07.802  [2024-12-09 04:16:36.210635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.803  [2024-12-09 04:16:36.210673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.803  qpair failed and we were unable to recover it.
00:26:07.803  [2024-12-09 04:16:36.210827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.803  [2024-12-09 04:16:36.210862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.803  qpair failed and we were unable to recover it.
00:26:07.803  [2024-12-09 04:16:36.211012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.803  [2024-12-09 04:16:36.211047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.803  qpair failed and we were unable to recover it.
00:26:07.803  [2024-12-09 04:16:36.211197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.803  [2024-12-09 04:16:36.211232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.803  qpair failed and we were unable to recover it.
00:26:07.803  [2024-12-09 04:16:36.211357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.803  [2024-12-09 04:16:36.211398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.803  qpair failed and we were unable to recover it.
00:26:07.803  [2024-12-09 04:16:36.211503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.803  [2024-12-09 04:16:36.211537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.803  qpair failed and we were unable to recover it.
00:26:07.803  [2024-12-09 04:16:36.211714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.803  [2024-12-09 04:16:36.211756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.803  qpair failed and we were unable to recover it.
00:26:07.803  [2024-12-09 04:16:36.211876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.803  [2024-12-09 04:16:36.211911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.803  qpair failed and we were unable to recover it.
00:26:07.803  [2024-12-09 04:16:36.212043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.803  [2024-12-09 04:16:36.212107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.803  qpair failed and we were unable to recover it.
00:26:07.803  [2024-12-09 04:16:36.212270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.803  [2024-12-09 04:16:36.212330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.803  qpair failed and we were unable to recover it.
00:26:07.803  [2024-12-09 04:16:36.212465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.803  [2024-12-09 04:16:36.212501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.803  qpair failed and we were unable to recover it.
00:26:07.803  [2024-12-09 04:16:36.212620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.803  [2024-12-09 04:16:36.212667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.803  qpair failed and we were unable to recover it.
00:26:07.803  [2024-12-09 04:16:36.212799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.803  [2024-12-09 04:16:36.212843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.803  qpair failed and we were unable to recover it.
00:26:07.803  [2024-12-09 04:16:36.212957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.803  [2024-12-09 04:16:36.212992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.803  qpair failed and we were unable to recover it.
00:26:07.803  [2024-12-09 04:16:36.213136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.803  [2024-12-09 04:16:36.213179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.803  qpair failed and we were unable to recover it.
00:26:07.803  [2024-12-09 04:16:36.213292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.803  [2024-12-09 04:16:36.213328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.803  qpair failed and we were unable to recover it.
00:26:07.803  [2024-12-09 04:16:36.213442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.803  [2024-12-09 04:16:36.213475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.803  qpair failed and we were unable to recover it.
00:26:07.803  [2024-12-09 04:16:36.213625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.803  [2024-12-09 04:16:36.213658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.803  qpair failed and we were unable to recover it.
00:26:07.803  [2024-12-09 04:16:36.213804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.803  [2024-12-09 04:16:36.213838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.803  qpair failed and we were unable to recover it.
00:26:07.803  [2024-12-09 04:16:36.213938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.803  [2024-12-09 04:16:36.213973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.803  qpair failed and we were unable to recover it.
00:26:07.803  [2024-12-09 04:16:36.214105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.803  [2024-12-09 04:16:36.214139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.803  qpair failed and we were unable to recover it.
00:26:07.803  [2024-12-09 04:16:36.214303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.803  [2024-12-09 04:16:36.214338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.803  qpair failed and we were unable to recover it.
00:26:07.803  [2024-12-09 04:16:36.214465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.803  [2024-12-09 04:16:36.214498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.803  qpair failed and we were unable to recover it.
00:26:07.803  [2024-12-09 04:16:36.214637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.803  [2024-12-09 04:16:36.214671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.803  qpair failed and we were unable to recover it.
00:26:07.803  [2024-12-09 04:16:36.214813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.803  [2024-12-09 04:16:36.214847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.803  qpair failed and we were unable to recover it.
00:26:07.803  [2024-12-09 04:16:36.214961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.803  [2024-12-09 04:16:36.214994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.803  qpair failed and we were unable to recover it.
00:26:07.803  [2024-12-09 04:16:36.215128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.803  [2024-12-09 04:16:36.215162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.803  qpair failed and we were unable to recover it.
00:26:07.803  [2024-12-09 04:16:36.215340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.803  [2024-12-09 04:16:36.215375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.803  qpair failed and we were unable to recover it.
00:26:07.803  [2024-12-09 04:16:36.215482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.803  [2024-12-09 04:16:36.215516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.803  qpair failed and we were unable to recover it.
00:26:07.803  [2024-12-09 04:16:36.215630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.803  [2024-12-09 04:16:36.215665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.803  qpair failed and we were unable to recover it.
00:26:07.803  [2024-12-09 04:16:36.215815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.803  [2024-12-09 04:16:36.215848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.803  qpair failed and we were unable to recover it.
00:26:07.803  [2024-12-09 04:16:36.215963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.803  [2024-12-09 04:16:36.215996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.803  qpair failed and we were unable to recover it.
00:26:07.803  [2024-12-09 04:16:36.216118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.803  [2024-12-09 04:16:36.216151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.803  qpair failed and we were unable to recover it.
00:26:07.803  [2024-12-09 04:16:36.216257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.803  [2024-12-09 04:16:36.216309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.803  qpair failed and we were unable to recover it.
00:26:07.803  [2024-12-09 04:16:36.216427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.803  [2024-12-09 04:16:36.216461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.803  qpair failed and we were unable to recover it.
00:26:07.803  [2024-12-09 04:16:36.216600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.803  [2024-12-09 04:16:36.216638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.803  qpair failed and we were unable to recover it.
00:26:07.803  [2024-12-09 04:16:36.216781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.803  [2024-12-09 04:16:36.216817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.803  qpair failed and we were unable to recover it.
00:26:07.803  [2024-12-09 04:16:36.216963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.803  [2024-12-09 04:16:36.216998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.803  qpair failed and we were unable to recover it.
00:26:07.803  [2024-12-09 04:16:36.217124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.803  [2024-12-09 04:16:36.217159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.803  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.217296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.217333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.217454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.217488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.217626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.217668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.217810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.217844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.217972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.218006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.218133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.218171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.218314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.218365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.218490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.218527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.218639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.218674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.218772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.218805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.218950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.218985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.219099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.219135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.219250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.219302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.219420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.219454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.219570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.219604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.219739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.219783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.219883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.219918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.220051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.220085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.220193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.220227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.220343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.220378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.220486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.220520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.220655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.220688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.220815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.220850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.221014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.221057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.221200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.221236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.221372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.221407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.221526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.221560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.221698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.221732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.221845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.221884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.221996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.222033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.222175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.222212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.222328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.222363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.222476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.222511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.222639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.222673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.222815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.222849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.223019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.223053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.223194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.223240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.223374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.223409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.223528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.223563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.223672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.223707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.223884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.223919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.224033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.804  [2024-12-09 04:16:36.224068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.804  qpair failed and we were unable to recover it.
00:26:07.804  [2024-12-09 04:16:36.224209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.224244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.224377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.224411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.224521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.224555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.224724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.224758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.224863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.224897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.225025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.225059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.225185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.225219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.225352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.225388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.225512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.225546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.225738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.225773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.225906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.225941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.226078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.226113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.226288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.226339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.226449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.226482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.226579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.226613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.226745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.226778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.226911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.226944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.227047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.227096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.227191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.227222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.227336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.227369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.227492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.227527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.227658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.227697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.227866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.227900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.228039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.228074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.228179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.228212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.228326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.228359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.228474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.228507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.228642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.228676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.228853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.228895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.229022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.229054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.229183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.229216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.229353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.229387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.229493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.229526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.229691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.229733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.229875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.229909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.230032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.230067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.230946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.231004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.231156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.231185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.231315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.231343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.231445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.231474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.231625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.231654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.231735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.231764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.231883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.231912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.231991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805  [2024-12-09 04:16:36.232019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805  qpair failed and we were unable to recover it.
00:26:07.805  [2024-12-09 04:16:36.232129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.232157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.232294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.232323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.232422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.232450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.232596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.232624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.232753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.232784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.232909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.232937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.233031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.233058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.233201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.233242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.233421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.233469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.233643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.233704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.233855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.233905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.234071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.234105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.234234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.234300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.234435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.234469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.234606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.234640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.234793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.234827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.234998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.235047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.235166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.235194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.235312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.235340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.235460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.235509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.235649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.235698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.235836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.235882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.236002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.236042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.236128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.236156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.236301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.236330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.236432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.236466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.236568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.236598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.236720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.236748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.236898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.236930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.237075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.237109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.237190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.237217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.237320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.237353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.237447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.237477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.237599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.237628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.237741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.237798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.237924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.237952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.238111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.238139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.238256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.238300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.238389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.238416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.238505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.238533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.238652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.238680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.238784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.238812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.238896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.238924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.239004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.239033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.806  [2024-12-09 04:16:36.239161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806  [2024-12-09 04:16:36.239189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806  qpair failed and we were unable to recover it.
00:26:07.807  [2024-12-09 04:16:36.239296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807  [2024-12-09 04:16:36.239325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.807  qpair failed and we were unable to recover it.
00:26:07.807  [2024-12-09 04:16:36.239422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807  [2024-12-09 04:16:36.239459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.807  qpair failed and we were unable to recover it.
00:26:07.807  [2024-12-09 04:16:36.239560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807  [2024-12-09 04:16:36.239591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.807  qpair failed and we were unable to recover it.
00:26:07.807  [2024-12-09 04:16:36.239735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807  [2024-12-09 04:16:36.239764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.807  qpair failed and we were unable to recover it.
00:26:07.807  [2024-12-09 04:16:36.239864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807  [2024-12-09 04:16:36.239893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.807  qpair failed and we were unable to recover it.
00:26:07.807  [2024-12-09 04:16:36.239984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807  [2024-12-09 04:16:36.240028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.807  qpair failed and we were unable to recover it.
00:26:07.807  [2024-12-09 04:16:36.240126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807  [2024-12-09 04:16:36.240161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.807  qpair failed and we were unable to recover it.
00:26:07.807  [2024-12-09 04:16:36.240256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807  [2024-12-09 04:16:36.240291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.807  qpair failed and we were unable to recover it.
00:26:07.807  [2024-12-09 04:16:36.240445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807  [2024-12-09 04:16:36.240480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.807  qpair failed and we were unable to recover it.
00:26:07.807  [2024-12-09 04:16:36.240573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807  [2024-12-09 04:16:36.240607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.807  qpair failed and we were unable to recover it.
00:26:07.807  [2024-12-09 04:16:36.240699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807  [2024-12-09 04:16:36.240740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.807  qpair failed and we were unable to recover it.
00:26:07.807  [2024-12-09 04:16:36.240884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807  [2024-12-09 04:16:36.240919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.807  qpair failed and we were unable to recover it.
00:26:07.807  [2024-12-09 04:16:36.241045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807  [2024-12-09 04:16:36.241086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.807  qpair failed and we were unable to recover it.
00:26:07.807  [2024-12-09 04:16:36.241233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807  [2024-12-09 04:16:36.241278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.807  qpair failed and we were unable to recover it.
00:26:07.807  [2024-12-09 04:16:36.241373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807  [2024-12-09 04:16:36.241402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.807  qpair failed and we were unable to recover it.
00:26:07.807  [2024-12-09 04:16:36.241492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807  [2024-12-09 04:16:36.241520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.807  qpair failed and we were unable to recover it.
00:26:07.807  [2024-12-09 04:16:36.241687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807  [2024-12-09 04:16:36.241721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.807  qpair failed and we were unable to recover it.
00:26:07.807  [2024-12-09 04:16:36.241859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807  [2024-12-09 04:16:36.241899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.807  qpair failed and we were unable to recover it.
00:26:07.807  [2024-12-09 04:16:36.242035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807  [2024-12-09 04:16:36.242073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.807  qpair failed and we were unable to recover it.
00:26:07.807  [2024-12-09 04:16:36.242194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807  [2024-12-09 04:16:36.242225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.807  qpair failed and we were unable to recover it.
00:26:07.807  [2024-12-09 04:16:36.242331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807  [2024-12-09 04:16:36.242360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.807  qpair failed and we were unable to recover it.
00:26:07.807  [2024-12-09 04:16:36.242449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807  [2024-12-09 04:16:36.242477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.807  qpair failed and we were unable to recover it.
00:26:07.807  [2024-12-09 04:16:36.242612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807  [2024-12-09 04:16:36.242660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.807  qpair failed and we were unable to recover it.
00:26:07.807  [2024-12-09 04:16:36.242804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807  [2024-12-09 04:16:36.242851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.807  qpair failed and we were unable to recover it.
00:26:07.807  [2024-12-09 04:16:36.242995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807  [2024-12-09 04:16:36.243045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.807  qpair failed and we were unable to recover it.
00:26:07.807  [2024-12-09 04:16:36.243129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807  [2024-12-09 04:16:36.243157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.807  qpair failed and we were unable to recover it.
00:26:07.807  [2024-12-09 04:16:36.243283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807  [2024-12-09 04:16:36.243312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.807  qpair failed and we were unable to recover it.
00:26:07.807  [2024-12-09 04:16:36.243439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807  [2024-12-09 04:16:36.243468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.807  qpair failed and we were unable to recover it.
00:26:07.807  [2024-12-09 04:16:36.243549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807  [2024-12-09 04:16:36.243588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.807  qpair failed and we were unable to recover it.
00:26:07.807  [2024-12-09 04:16:36.243691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807  [2024-12-09 04:16:36.243725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.807  qpair failed and we were unable to recover it.
00:26:07.807  [2024-12-09 04:16:36.243838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807  [2024-12-09 04:16:36.243866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.807  qpair failed and we were unable to recover it.
00:26:07.807  [2024-12-09 04:16:36.244013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807  [2024-12-09 04:16:36.244041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.807  qpair failed and we were unable to recover it.
00:26:07.807  [2024-12-09 04:16:36.244125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807  [2024-12-09 04:16:36.244154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.807  qpair failed and we were unable to recover it.
00:26:07.807  [2024-12-09 04:16:36.244255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807  [2024-12-09 04:16:36.244295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.807  qpair failed and we were unable to recover it.
00:26:07.807  [2024-12-09 04:16:36.244416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807  [2024-12-09 04:16:36.244444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.807  qpair failed and we were unable to recover it.
00:26:07.807  [2024-12-09 04:16:36.244587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807  [2024-12-09 04:16:36.244615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.807  qpair failed and we were unable to recover it.
00:26:07.807  [2024-12-09 04:16:36.244711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807  [2024-12-09 04:16:36.244739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.807  qpair failed and we were unable to recover it.
00:26:07.807  [2024-12-09 04:16:36.244858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807  [2024-12-09 04:16:36.244885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.807  qpair failed and we were unable to recover it.
00:26:07.807  [2024-12-09 04:16:36.245008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807  [2024-12-09 04:16:36.245036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.807  qpair failed and we were unable to recover it.
00:26:07.807  [2024-12-09 04:16:36.245151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808  [2024-12-09 04:16:36.245180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.808  qpair failed and we were unable to recover it.
00:26:07.808  [2024-12-09 04:16:36.245317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808  [2024-12-09 04:16:36.245349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.808  qpair failed and we were unable to recover it.
00:26:07.808  [2024-12-09 04:16:36.245467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808  [2024-12-09 04:16:36.245495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.808  qpair failed and we were unable to recover it.
00:26:07.808  [2024-12-09 04:16:36.245622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808  [2024-12-09 04:16:36.245650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.808  qpair failed and we were unable to recover it.
00:26:07.808  [2024-12-09 04:16:36.245739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808  [2024-12-09 04:16:36.245775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.808  qpair failed and we were unable to recover it.
00:26:07.808  [2024-12-09 04:16:36.245856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808  [2024-12-09 04:16:36.245885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.808  qpair failed and we were unable to recover it.
00:26:07.808  [2024-12-09 04:16:36.246021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808  [2024-12-09 04:16:36.246050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.808  qpair failed and we were unable to recover it.
00:26:07.808  [2024-12-09 04:16:36.246142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808  [2024-12-09 04:16:36.246175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.808  qpair failed and we were unable to recover it.
00:26:07.808  [2024-12-09 04:16:36.246307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808  [2024-12-09 04:16:36.246336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.808  qpair failed and we were unable to recover it.
00:26:07.808  [2024-12-09 04:16:36.246432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808  [2024-12-09 04:16:36.246460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.808  qpair failed and we were unable to recover it.
00:26:07.808  [2024-12-09 04:16:36.246583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808  [2024-12-09 04:16:36.246617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.808  qpair failed and we were unable to recover it.
00:26:07.808  [2024-12-09 04:16:36.246718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808  [2024-12-09 04:16:36.246752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.808  qpair failed and we were unable to recover it.
00:26:07.808  [2024-12-09 04:16:36.246894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808  [2024-12-09 04:16:36.246929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.808  qpair failed and we were unable to recover it.
00:26:07.808  [2024-12-09 04:16:36.247043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808  [2024-12-09 04:16:36.247073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.808  qpair failed and we were unable to recover it.
00:26:07.808  [2024-12-09 04:16:36.247163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808  [2024-12-09 04:16:36.247192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.808  qpair failed and we were unable to recover it.
00:26:07.808  [2024-12-09 04:16:36.247292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808  [2024-12-09 04:16:36.247320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.808  qpair failed and we were unable to recover it.
00:26:07.808  [2024-12-09 04:16:36.247422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808  [2024-12-09 04:16:36.247456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.808  qpair failed and we were unable to recover it.
00:26:07.808  [2024-12-09 04:16:36.247617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808  [2024-12-09 04:16:36.247669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.808  qpair failed and we were unable to recover it.
00:26:07.808  [2024-12-09 04:16:36.247811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808  [2024-12-09 04:16:36.247858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.808  qpair failed and we were unable to recover it.
00:26:07.808  [2024-12-09 04:16:36.247981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808  [2024-12-09 04:16:36.248008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.808  qpair failed and we were unable to recover it.
00:26:07.808  [2024-12-09 04:16:36.248125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808  [2024-12-09 04:16:36.248153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.808  qpair failed and we were unable to recover it.
00:26:07.808  [2024-12-09 04:16:36.248239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808  [2024-12-09 04:16:36.248266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.808  qpair failed and we were unable to recover it.
00:26:07.808  [2024-12-09 04:16:36.248361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808  [2024-12-09 04:16:36.248389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.808  qpair failed and we were unable to recover it.
00:26:07.808  [2024-12-09 04:16:36.248514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808  [2024-12-09 04:16:36.248542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.808  qpair failed and we were unable to recover it.
00:26:07.808  [2024-12-09 04:16:36.248656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808  [2024-12-09 04:16:36.248683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.808  qpair failed and we were unable to recover it.
00:26:07.808  [2024-12-09 04:16:36.248830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808  [2024-12-09 04:16:36.248857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.808  qpair failed and we were unable to recover it.
00:26:07.808  [2024-12-09 04:16:36.248983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808  [2024-12-09 04:16:36.249011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.808  qpair failed and we were unable to recover it.
00:26:07.808  [2024-12-09 04:16:36.249132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808  [2024-12-09 04:16:36.249159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.808  qpair failed and we were unable to recover it.
00:26:07.808  [2024-12-09 04:16:36.249252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808  [2024-12-09 04:16:36.249302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.808  qpair failed and we were unable to recover it.
00:26:07.808  [2024-12-09 04:16:36.249399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808  [2024-12-09 04:16:36.249428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.808  qpair failed and we were unable to recover it.
00:26:07.808  [2024-12-09 04:16:36.249519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808  [2024-12-09 04:16:36.249548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.808  qpair failed and we were unable to recover it.
00:26:07.808  [2024-12-09 04:16:36.249656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808  [2024-12-09 04:16:36.249701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.808  qpair failed and we were unable to recover it.
00:26:07.808  [2024-12-09 04:16:36.249815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808  [2024-12-09 04:16:36.249844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.808  qpair failed and we were unable to recover it.
00:26:07.808  [2024-12-09 04:16:36.249935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808  [2024-12-09 04:16:36.249963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.808  qpair failed and we were unable to recover it.
00:26:07.808  [2024-12-09 04:16:36.250056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808  [2024-12-09 04:16:36.250087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.808  qpair failed and we were unable to recover it.
00:26:07.808  [2024-12-09 04:16:36.250207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808  [2024-12-09 04:16:36.250250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.808  qpair failed and we were unable to recover it.
00:26:07.808  [2024-12-09 04:16:36.250375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808  [2024-12-09 04:16:36.250406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.808  qpair failed and we were unable to recover it.
00:26:07.808  [2024-12-09 04:16:36.250515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808  [2024-12-09 04:16:36.250543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.808  qpair failed and we were unable to recover it.
00:26:07.808  [2024-12-09 04:16:36.250668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808  [2024-12-09 04:16:36.250697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.808  qpair failed and we were unable to recover it.
00:26:07.808  [... the same connect()-failed / connection-error / "qpair failed and we were unable to recover it." sequence repeats ~100 more times through 2024-12-09 04:16:36.269, every attempt failing with errno = 111 (ECONNREFUSED) against addr=10.0.0.2, port=4420, cycling through tqpair=0x7fe5c0000b90, 0x7fe5b4000b90, and 0x1818fa0 ...]
00:26:07.811  [2024-12-09 04:16:36.269615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.811  [2024-12-09 04:16:36.269654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.811  qpair failed and we were unable to recover it.
00:26:07.811  [2024-12-09 04:16:36.269775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.811  [2024-12-09 04:16:36.269805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.811  qpair failed and we were unable to recover it.
00:26:07.811  [2024-12-09 04:16:36.269934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.811  [2024-12-09 04:16:36.269963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.811  qpair failed and we were unable to recover it.
00:26:07.811  [2024-12-09 04:16:36.270115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.811  [2024-12-09 04:16:36.270143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.811  qpair failed and we were unable to recover it.
00:26:07.811  [2024-12-09 04:16:36.270266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.811  [2024-12-09 04:16:36.270307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.811  qpair failed and we were unable to recover it.
00:26:07.811  [2024-12-09 04:16:36.270402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.811  [2024-12-09 04:16:36.270430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.811  qpair failed and we were unable to recover it.
00:26:07.811  [2024-12-09 04:16:36.270549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.811  [2024-12-09 04:16:36.270584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.811  qpair failed and we were unable to recover it.
00:26:07.811  [2024-12-09 04:16:36.270680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.811  [2024-12-09 04:16:36.270709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.811  qpair failed and we were unable to recover it.
00:26:07.811  [2024-12-09 04:16:36.270812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.811  [2024-12-09 04:16:36.270840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.811  qpair failed and we were unable to recover it.
00:26:07.811  [2024-12-09 04:16:36.270924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.811  [2024-12-09 04:16:36.270952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.811  qpair failed and we were unable to recover it.
00:26:07.811  [2024-12-09 04:16:36.271067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.811  [2024-12-09 04:16:36.271095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.811  qpair failed and we were unable to recover it.
00:26:07.811  [2024-12-09 04:16:36.271180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.811  [2024-12-09 04:16:36.271210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.811  qpair failed and we were unable to recover it.
00:26:07.811  [2024-12-09 04:16:36.271311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.811  [2024-12-09 04:16:36.271340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.811  qpair failed and we were unable to recover it.
00:26:07.811  [2024-12-09 04:16:36.271438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.811  [2024-12-09 04:16:36.271466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.811  qpair failed and we were unable to recover it.
00:26:07.811  [2024-12-09 04:16:36.271600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.811  [2024-12-09 04:16:36.271627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.811  qpair failed and we were unable to recover it.
00:26:07.811  [2024-12-09 04:16:36.271717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.811  [2024-12-09 04:16:36.271744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.811  qpair failed and we were unable to recover it.
00:26:07.811  [2024-12-09 04:16:36.271869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.811  [2024-12-09 04:16:36.271900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.811  qpair failed and we were unable to recover it.
00:26:07.811  [2024-12-09 04:16:36.272018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.811  [2024-12-09 04:16:36.272047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.811  qpair failed and we were unable to recover it.
00:26:07.811  [2024-12-09 04:16:36.272160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.811  [2024-12-09 04:16:36.272202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.811  qpair failed and we were unable to recover it.
00:26:07.811  [2024-12-09 04:16:36.272314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.811  [2024-12-09 04:16:36.272345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.811  qpair failed and we were unable to recover it.
00:26:07.811  [2024-12-09 04:16:36.272437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.811  [2024-12-09 04:16:36.272466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.811  qpair failed and we were unable to recover it.
00:26:07.811  [2024-12-09 04:16:36.272560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.811  [2024-12-09 04:16:36.272598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.811  qpair failed and we were unable to recover it.
00:26:07.811  [2024-12-09 04:16:36.272766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.811  [2024-12-09 04:16:36.272816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.811  qpair failed and we were unable to recover it.
00:26:07.811  [2024-12-09 04:16:36.272929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.811  [2024-12-09 04:16:36.272958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.811  qpair failed and we were unable to recover it.
00:26:07.811  [2024-12-09 04:16:36.273088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.811  [2024-12-09 04:16:36.273115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.811  qpair failed and we were unable to recover it.
00:26:07.811  [2024-12-09 04:16:36.273209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.811  [2024-12-09 04:16:36.273237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.811  qpair failed and we were unable to recover it.
00:26:07.811  [2024-12-09 04:16:36.273345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.811  [2024-12-09 04:16:36.273374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.811  qpair failed and we were unable to recover it.
00:26:07.811  [2024-12-09 04:16:36.273484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.811  [2024-12-09 04:16:36.273513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.811  qpair failed and we were unable to recover it.
00:26:07.811  [2024-12-09 04:16:36.273675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.811  [2024-12-09 04:16:36.273703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.811  qpair failed and we were unable to recover it.
00:26:07.811  [2024-12-09 04:16:36.273793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.811  [2024-12-09 04:16:36.273827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.811  qpair failed and we were unable to recover it.
00:26:07.811  [2024-12-09 04:16:36.273943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.811  [2024-12-09 04:16:36.273974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.811  qpair failed and we were unable to recover it.
00:26:07.811  [2024-12-09 04:16:36.274090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.274119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.274212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.274240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.274378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.274406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.274507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.274535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.274657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.274685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.274801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.274830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.274914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.274943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.275031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.275060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.275208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.275236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.275354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.275397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.275522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.275551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.275683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.275713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.275834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.275863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.275962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.275990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.276111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.276139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.276265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.276317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.276422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.276452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.276544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.276577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.276701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.276729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.276865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.276911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.277024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.277053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.277187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.277215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.277323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.277354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.277448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.277477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.277555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.277592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.277723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.277752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.277869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.277897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.277987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.278015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.278137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.278165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.278296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.278325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.278448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.278481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.278637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.278665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.278784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.278813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.278937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.278965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.279058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.279086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.279202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.279231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.279337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.279367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.279457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.279485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.279586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.279615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.279703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.279731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.279844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.279881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.280029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.280059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.280149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.280179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.280299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.280328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.280427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.280455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.280555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.812  [2024-12-09 04:16:36.280594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.812  qpair failed and we were unable to recover it.
00:26:07.812  [2024-12-09 04:16:36.280716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.813  [2024-12-09 04:16:36.280743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.813  qpair failed and we were unable to recover it.
00:26:07.813  [2024-12-09 04:16:36.280890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.813  [2024-12-09 04:16:36.280918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.813  qpair failed and we were unable to recover it.
00:26:07.813  [2024-12-09 04:16:36.281027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.813  [2024-12-09 04:16:36.281056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.813  qpair failed and we were unable to recover it.
00:26:07.813  [2024-12-09 04:16:36.281180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.813  [2024-12-09 04:16:36.281212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.813  qpair failed and we were unable to recover it.
00:26:07.813  [2024-12-09 04:16:36.281328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.813  [2024-12-09 04:16:36.281356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.813  qpair failed and we were unable to recover it.
00:26:07.813  [2024-12-09 04:16:36.283195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.813  [2024-12-09 04:16:36.283238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.813  qpair failed and we were unable to recover it.
00:26:07.813  [2024-12-09 04:16:36.283363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.813  [2024-12-09 04:16:36.283394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.813  qpair failed and we were unable to recover it.
00:26:07.815  qpair failed and we were unable to recover it.
00:26:07.815  [2024-12-09 04:16:36.296720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.815  [2024-12-09 04:16:36.296749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.815  qpair failed and we were unable to recover it.
00:26:07.815  [2024-12-09 04:16:36.296869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.815  [2024-12-09 04:16:36.296897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.815  qpair failed and we were unable to recover it.
00:26:07.815  [2024-12-09 04:16:36.297022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.815  [2024-12-09 04:16:36.297050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.815  qpair failed and we were unable to recover it.
00:26:07.815  [2024-12-09 04:16:36.297178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.815  [2024-12-09 04:16:36.297208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.815  qpair failed and we were unable to recover it.
00:26:07.815  [2024-12-09 04:16:36.297308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.815  [2024-12-09 04:16:36.297337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.815  qpair failed and we were unable to recover it.
00:26:07.815  [2024-12-09 04:16:36.297461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.815  [2024-12-09 04:16:36.297490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.815  qpair failed and we were unable to recover it.
00:26:07.815  [2024-12-09 04:16:36.297588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.815  [2024-12-09 04:16:36.297618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.815  qpair failed and we were unable to recover it.
00:26:07.815  [2024-12-09 04:16:36.297763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.815  [2024-12-09 04:16:36.297791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.815  qpair failed and we were unable to recover it.
00:26:07.815  [2024-12-09 04:16:36.297917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.815  [2024-12-09 04:16:36.297959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.815  qpair failed and we were unable to recover it.
00:26:07.815  [2024-12-09 04:16:36.298113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.815  [2024-12-09 04:16:36.298143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.815  qpair failed and we were unable to recover it.
00:26:07.815  [2024-12-09 04:16:36.298261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.815  [2024-12-09 04:16:36.298297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.815  qpair failed and we were unable to recover it.
00:26:07.815  [2024-12-09 04:16:36.298394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.815  [2024-12-09 04:16:36.298422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.815  qpair failed and we were unable to recover it.
00:26:07.815  [2024-12-09 04:16:36.298512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.815  [2024-12-09 04:16:36.298542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.815  qpair failed and we were unable to recover it.
00:26:07.815  [2024-12-09 04:16:36.298657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.815  [2024-12-09 04:16:36.298701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.815  qpair failed and we were unable to recover it.
00:26:07.815  [2024-12-09 04:16:36.298848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.298897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.299014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.299042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.299134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.299162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.299286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.299315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.299406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.299435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.299529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.299557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.299656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.299684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.299830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.299858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.299948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.299977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.300126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.300154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.300297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.300327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.300445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.300474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.300566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.300601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.300691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.300720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.300863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.300891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.301008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.301036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.301161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.301193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.301337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.301380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.301483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.301516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.301644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.301673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.301801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.301829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.301950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.301985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.302108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.302136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.302300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.302329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.302443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.302472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.302595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.302623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.302703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.302731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.302935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.302985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.303124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.303173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.303305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.303345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.303436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.303464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.303549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.303587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.303732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.303760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.303836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.303864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.303950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.303978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.304138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.304180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.304299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.304335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.304436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.304465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.304584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.304634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.304777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.304806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.304966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.305013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.305171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.305201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.305294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.305323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.305410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.305438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.816  [2024-12-09 04:16:36.305523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.816  [2024-12-09 04:16:36.305551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.816  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.305712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.305755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.305918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.305962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.306076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.306104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.306227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.306259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.306381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.306411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.306510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.306557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.306672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.306716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.306891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.306940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.307122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.307172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.307290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.307333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.307458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.307488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.307613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.307641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.307860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.307893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.308048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.308096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.308218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.308246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.308385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.308416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.308554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.308592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.308711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.308745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.308846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.308880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.309052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.309100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.309189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.309217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.309343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.309373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.309461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.309491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.309636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.309664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.309788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.309815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.309946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.309988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.310119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.310150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.310268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.310315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.310401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.310430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.310596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.310641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.310815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.310862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.310996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.311044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.311199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.311228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.311332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.311362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.311484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.311513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.311664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.311707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.311882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.311934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.312065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.312099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.312260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.312320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.312445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.312473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.312565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.312592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.312711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.312738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.312825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.312854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.312941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.817  [2024-12-09 04:16:36.312971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.817  qpair failed and we were unable to recover it.
00:26:07.817  [2024-12-09 04:16:36.313132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.313175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.313302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.313345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.313470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.313499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.313631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.313661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.313757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.313786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.313885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.313915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.314015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.314043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.314184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.314227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.314344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.314376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.314501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.314532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.314684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.314729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.314844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.314872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.314973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.315001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.315089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.315117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.315245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.315283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.315389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.315418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.315500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.315529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.315649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.315677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.315798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.315826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.315965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.315993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.316119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.316147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.316252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.316325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.316416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.316447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.316569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.316598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.316716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.316744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.316841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.316884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.316988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.317017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.317144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.317172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.317305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.317335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.317451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.317479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.317572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.317600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.317686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.317714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.317829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.317857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.317954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.317984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.318094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.318136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.318278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.318309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.318438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.318468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.318616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.318644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.318762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.318790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.318949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.318978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.319067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.319097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.319217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.319250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.319384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.319412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.818  [2024-12-09 04:16:36.319502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818  [2024-12-09 04:16:36.319530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.818  qpair failed and we were unable to recover it.
00:26:07.819  [2024-12-09 04:16:36.319617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819  [2024-12-09 04:16:36.319645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.819  qpair failed and we were unable to recover it.
00:26:07.819  [2024-12-09 04:16:36.319764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819  [2024-12-09 04:16:36.319792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.819  qpair failed and we were unable to recover it.
00:26:07.819  [2024-12-09 04:16:36.319874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819  [2024-12-09 04:16:36.319903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819  qpair failed and we were unable to recover it.
00:26:07.819  [2024-12-09 04:16:36.319988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819  [2024-12-09 04:16:36.320016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819  qpair failed and we were unable to recover it.
00:26:07.819  [2024-12-09 04:16:36.320112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819  [2024-12-09 04:16:36.320142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819  qpair failed and we were unable to recover it.
00:26:07.819  [2024-12-09 04:16:36.320258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819  [2024-12-09 04:16:36.320304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819  qpair failed and we were unable to recover it.
00:26:07.819  [2024-12-09 04:16:36.320423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819  [2024-12-09 04:16:36.320452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819  qpair failed and we were unable to recover it.
00:26:07.819  [2024-12-09 04:16:36.320570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819  [2024-12-09 04:16:36.320598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819  qpair failed and we were unable to recover it.
00:26:07.819  [2024-12-09 04:16:36.320723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819  [2024-12-09 04:16:36.320751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819  qpair failed and we were unable to recover it.
00:26:07.819  [2024-12-09 04:16:36.320881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819  [2024-12-09 04:16:36.320910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819  qpair failed and we were unable to recover it.
00:26:07.819  [2024-12-09 04:16:36.321025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819  [2024-12-09 04:16:36.321053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819  qpair failed and we were unable to recover it.
00:26:07.819  [2024-12-09 04:16:36.321159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819  [2024-12-09 04:16:36.321188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819  qpair failed and we were unable to recover it.
00:26:07.819  [2024-12-09 04:16:36.321306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819  [2024-12-09 04:16:36.321335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819  qpair failed and we were unable to recover it.
00:26:07.819  [2024-12-09 04:16:36.321420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819  [2024-12-09 04:16:36.321449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819  qpair failed and we were unable to recover it.
00:26:07.819  [2024-12-09 04:16:36.321594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819  [2024-12-09 04:16:36.321622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819  qpair failed and we were unable to recover it.
00:26:07.819  [2024-12-09 04:16:36.321766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819  [2024-12-09 04:16:36.321794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819  qpair failed and we were unable to recover it.
00:26:07.819  [2024-12-09 04:16:36.321883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819  [2024-12-09 04:16:36.321912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819  qpair failed and we were unable to recover it.
00:26:07.819  [2024-12-09 04:16:36.322008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819  [2024-12-09 04:16:36.322037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819  qpair failed and we were unable to recover it.
00:26:07.819  [2024-12-09 04:16:36.322181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819  [2024-12-09 04:16:36.322209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819  qpair failed and we were unable to recover it.
00:26:07.819  [2024-12-09 04:16:36.322340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819  [2024-12-09 04:16:36.322369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819  qpair failed and we were unable to recover it.
00:26:07.819  [2024-12-09 04:16:36.322486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819  [2024-12-09 04:16:36.322514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819  qpair failed and we were unable to recover it.
00:26:07.819  [2024-12-09 04:16:36.322637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819  [2024-12-09 04:16:36.322665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819  qpair failed and we were unable to recover it.
00:26:07.819  [2024-12-09 04:16:36.322783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819  [2024-12-09 04:16:36.322811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819  qpair failed and we were unable to recover it.
00:26:07.819  [2024-12-09 04:16:36.322924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819  [2024-12-09 04:16:36.322952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819  qpair failed and we were unable to recover it.
00:26:07.819  [2024-12-09 04:16:36.323038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819  [2024-12-09 04:16:36.323071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819  qpair failed and we were unable to recover it.
00:26:07.819  [2024-12-09 04:16:36.323181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819  [2024-12-09 04:16:36.323209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819  qpair failed and we were unable to recover it.
00:26:07.819  [2024-12-09 04:16:36.323317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819  [2024-12-09 04:16:36.323360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.819  qpair failed and we were unable to recover it.
00:26:07.819  [2024-12-09 04:16:36.323448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819  [2024-12-09 04:16:36.323478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.819  qpair failed and we were unable to recover it.
00:26:07.819  [2024-12-09 04:16:36.323573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819  [2024-12-09 04:16:36.323602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.819  qpair failed and we were unable to recover it.
00:26:07.819  [2024-12-09 04:16:36.323717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819  [2024-12-09 04:16:36.323746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.819  qpair failed and we were unable to recover it.
00:26:07.819  [2024-12-09 04:16:36.323865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819  [2024-12-09 04:16:36.323893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.819  qpair failed and we were unable to recover it.
00:26:07.819  [2024-12-09 04:16:36.323982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819  [2024-12-09 04:16:36.324010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.819  qpair failed and we were unable to recover it.
00:26:07.819  [2024-12-09 04:16:36.324158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819  [2024-12-09 04:16:36.324186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819  qpair failed and we were unable to recover it.
00:26:07.819  [2024-12-09 04:16:36.324267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819  [2024-12-09 04:16:36.324300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819  qpair failed and we were unable to recover it.
00:26:07.819  [2024-12-09 04:16:36.324391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819  [2024-12-09 04:16:36.324420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819  qpair failed and we were unable to recover it.
00:26:07.819  [2024-12-09 04:16:36.324519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819  [2024-12-09 04:16:36.324548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819  qpair failed and we were unable to recover it.
00:26:07.819  [2024-12-09 04:16:36.324660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819  [2024-12-09 04:16:36.324687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.324775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.324803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.324897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.324926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.325047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.325074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.325195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.325223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.325355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.325386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.325490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.325533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.325687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.325717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.325838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.325867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.325994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.326022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.326135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.326163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.326287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.326317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.326417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.326445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.326538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.326576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.326719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.326747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.326831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.326863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.326973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.327001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.327095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.327125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.327247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.327286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.327375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.327405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.327520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.327551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.327711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.327760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.327852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.327881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.327974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.328002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.328123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.328152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.328243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.328280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.328430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.328459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.328609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.328638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.328756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.328784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.328914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.328942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.329086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.329127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.329285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.329316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.329412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.329443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.329564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.329592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.329679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.329707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.329828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.329856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.329946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.329976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.330111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.330154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.330267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.330304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.330422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.330450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.330530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.330568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.330689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.330716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.330818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.330848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.330969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.330999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.331118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.331147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.331261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.331296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820  qpair failed and we were unable to recover it.
00:26:07.820  [2024-12-09 04:16:36.331413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820  [2024-12-09 04:16:36.331442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.821  qpair failed and we were unable to recover it.
00:26:07.821  [2024-12-09 04:16:36.331562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.821  [2024-12-09 04:16:36.331591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.821  qpair failed and we were unable to recover it.
00:26:07.821  [2024-12-09 04:16:36.331813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.821  [2024-12-09 04:16:36.331872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.821  qpair failed and we were unable to recover it.
00:26:07.821  [2024-12-09 04:16:36.332041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.821  [2024-12-09 04:16:36.332088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.821  qpair failed and we were unable to recover it.
00:26:07.821  [2024-12-09 04:16:36.332209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.821  [2024-12-09 04:16:36.332237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.821  qpair failed and we were unable to recover it.
00:26:07.821  [2024-12-09 04:16:36.332378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.821  [2024-12-09 04:16:36.332409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.821  qpair failed and we were unable to recover it.
00:26:07.821  [2024-12-09 04:16:36.332506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.821  [2024-12-09 04:16:36.332534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.821  qpair failed and we were unable to recover it.
00:26:07.821  [2024-12-09 04:16:36.332632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.821  [2024-12-09 04:16:36.332660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.821  qpair failed and we were unable to recover it.
00:26:07.821  [2024-12-09 04:16:36.332750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.821  [2024-12-09 04:16:36.332777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.821  qpair failed and we were unable to recover it.
00:26:07.821  [2024-12-09 04:16:36.332901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.821  [2024-12-09 04:16:36.332933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.821  qpair failed and we were unable to recover it.
00:26:07.821  [2024-12-09 04:16:36.333048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.821  [2024-12-09 04:16:36.333076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.821  qpair failed and we were unable to recover it.
00:26:07.821  [2024-12-09 04:16:36.333168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.821  [2024-12-09 04:16:36.333198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.821  qpair failed and we were unable to recover it.
00:26:07.821  [2024-12-09 04:16:36.333299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.821  [2024-12-09 04:16:36.333329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.821  qpair failed and we were unable to recover it.
00:26:07.821  [2024-12-09 04:16:36.333413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.821  [2024-12-09 04:16:36.333441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.821  qpair failed and we were unable to recover it.
00:26:07.821  [2024-12-09 04:16:36.333534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.821  [2024-12-09 04:16:36.333574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.821  qpair failed and we were unable to recover it.
00:26:07.821  [2024-12-09 04:16:36.333701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.821  [2024-12-09 04:16:36.333730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.821  qpair failed and we were unable to recover it.
00:26:07.821  [2024-12-09 04:16:36.333855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.821  [2024-12-09 04:16:36.333883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.821  qpair failed and we were unable to recover it.
00:26:07.821  [2024-12-09 04:16:36.334032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.821  [2024-12-09 04:16:36.334062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.821  qpair failed and we were unable to recover it.
00:26:07.821  [2024-12-09 04:16:36.334192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.821  [2024-12-09 04:16:36.334234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.821  qpair failed and we were unable to recover it.
00:26:07.821  [2024-12-09 04:16:36.334347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.821  [2024-12-09 04:16:36.334379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.821  qpair failed and we were unable to recover it.
00:26:07.821  [2024-12-09 04:16:36.334477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.821  [2024-12-09 04:16:36.334510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.821  qpair failed and we were unable to recover it.
00:26:07.821  [2024-12-09 04:16:36.334630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.821  [2024-12-09 04:16:36.334676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.821  qpair failed and we were unable to recover it.
00:26:07.821  [2024-12-09 04:16:36.334774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.821  [2024-12-09 04:16:36.334802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.821  qpair failed and we were unable to recover it.
00:26:07.821  [2024-12-09 04:16:36.334907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.821  [2024-12-09 04:16:36.334935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.821  qpair failed and we were unable to recover it.
00:26:07.821  [2024-12-09 04:16:36.335034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.821  [2024-12-09 04:16:36.335063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.821  qpair failed and we were unable to recover it.
00:26:07.821  [2024-12-09 04:16:36.335165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.821  [2024-12-09 04:16:36.335194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.821  qpair failed and we were unable to recover it.
00:26:07.821  [2024-12-09 04:16:36.335293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.821  [2024-12-09 04:16:36.335325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.821  qpair failed and we were unable to recover it.
00:26:07.821  [2024-12-09 04:16:36.335437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.821  [2024-12-09 04:16:36.335479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.821  qpair failed and we were unable to recover it.
00:26:07.821  [2024-12-09 04:16:36.335657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.821  [2024-12-09 04:16:36.335699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.821  qpair failed and we were unable to recover it.
00:26:07.821  [2024-12-09 04:16:36.335798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.821  [2024-12-09 04:16:36.335828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.821  qpair failed and we were unable to recover it.
00:26:07.821  [2024-12-09 04:16:36.335924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.821  [2024-12-09 04:16:36.335952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.821  qpair failed and we were unable to recover it.
00:26:07.821  [2024-12-09 04:16:36.336045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.821  [2024-12-09 04:16:36.336073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.821  qpair failed and we were unable to recover it.
00:26:07.821  [2024-12-09 04:16:36.336198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.821  [2024-12-09 04:16:36.336228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.821  qpair failed and we were unable to recover it.
00:26:07.821  [2024-12-09 04:16:36.336344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.821  [2024-12-09 04:16:36.336373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.821  qpair failed and we were unable to recover it.
00:26:07.821  [2024-12-09 04:16:36.336478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.821  [2024-12-09 04:16:36.336506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.821  qpair failed and we were unable to recover it.
00:26:08.097  [2024-12-09 04:16:36.336717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.097  [2024-12-09 04:16:36.336753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.097  qpair failed and we were unable to recover it.
00:26:08.097  [2024-12-09 04:16:36.336916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.097  [2024-12-09 04:16:36.336959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.097  qpair failed and we were unable to recover it.
00:26:08.097  [2024-12-09 04:16:36.337099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.097  [2024-12-09 04:16:36.337135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.097  qpair failed and we were unable to recover it.
00:26:08.097  [2024-12-09 04:16:36.337353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.097  [2024-12-09 04:16:36.337383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.098  qpair failed and we were unable to recover it.
00:26:08.098  [2024-12-09 04:16:36.337479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.098  [2024-12-09 04:16:36.337506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.098  qpair failed and we were unable to recover it.
00:26:08.098  [2024-12-09 04:16:36.337591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.098  [2024-12-09 04:16:36.337618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.098  qpair failed and we were unable to recover it.
00:26:08.098  [2024-12-09 04:16:36.337798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.098  [2024-12-09 04:16:36.337831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.098  qpair failed and we were unable to recover it.
00:26:08.098  [2024-12-09 04:16:36.337955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.098  [2024-12-09 04:16:36.338003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.098  qpair failed and we were unable to recover it.
00:26:08.098  [2024-12-09 04:16:36.338145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.098  [2024-12-09 04:16:36.338179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.098  qpair failed and we were unable to recover it.
00:26:08.098  [2024-12-09 04:16:36.338300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.098  [2024-12-09 04:16:36.338327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.098  qpair failed and we were unable to recover it.
00:26:08.098  [2024-12-09 04:16:36.338415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.098  [2024-12-09 04:16:36.338442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.098  qpair failed and we were unable to recover it.
00:26:08.098  [2024-12-09 04:16:36.338532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.098  [2024-12-09 04:16:36.338559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.098  qpair failed and we were unable to recover it.
00:26:08.098  [2024-12-09 04:16:36.338697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.098  [2024-12-09 04:16:36.338731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.098  qpair failed and we were unable to recover it.
00:26:08.098  [2024-12-09 04:16:36.338842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.098  [2024-12-09 04:16:36.338869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.098  qpair failed and we were unable to recover it.
00:26:08.098  [2024-12-09 04:16:36.339023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.098  [2024-12-09 04:16:36.339056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.098  qpair failed and we were unable to recover it.
00:26:08.098  [2024-12-09 04:16:36.339204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.098  [2024-12-09 04:16:36.339247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.098  qpair failed and we were unable to recover it.
00:26:08.098  [2024-12-09 04:16:36.339370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.098  [2024-12-09 04:16:36.339413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.098  qpair failed and we were unable to recover it.
00:26:08.098  [2024-12-09 04:16:36.339513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.098  [2024-12-09 04:16:36.339543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.098  qpair failed and we were unable to recover it.
00:26:08.098  [2024-12-09 04:16:36.339661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.098  [2024-12-09 04:16:36.339690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.098  qpair failed and we were unable to recover it.
00:26:08.098  [2024-12-09 04:16:36.339794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.098  [2024-12-09 04:16:36.339845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.098  qpair failed and we were unable to recover it.
00:26:08.098  [2024-12-09 04:16:36.339987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.098  [2024-12-09 04:16:36.340035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.098  qpair failed and we were unable to recover it.
00:26:08.098  [2024-12-09 04:16:36.340156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.098  [2024-12-09 04:16:36.340184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.098  qpair failed and we were unable to recover it.
00:26:08.098  [2024-12-09 04:16:36.340301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.098  [2024-12-09 04:16:36.340331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.098  qpair failed and we were unable to recover it.
00:26:08.098  [2024-12-09 04:16:36.340415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.098  [2024-12-09 04:16:36.340443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.098  qpair failed and we were unable to recover it.
00:26:08.098  [2024-12-09 04:16:36.340558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.098  [2024-12-09 04:16:36.340587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.098  qpair failed and we were unable to recover it.
00:26:08.098  [2024-12-09 04:16:36.340685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.098  [2024-12-09 04:16:36.340713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.098  qpair failed and we were unable to recover it.
00:26:08.098  [2024-12-09 04:16:36.340841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.098  [2024-12-09 04:16:36.340869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.098  qpair failed and we were unable to recover it.
00:26:08.098  [2024-12-09 04:16:36.340996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.098  [2024-12-09 04:16:36.341024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.098  qpair failed and we were unable to recover it.
00:26:08.098  [2024-12-09 04:16:36.341162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.098  [2024-12-09 04:16:36.341191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.098  qpair failed and we were unable to recover it.
00:26:08.098  [2024-12-09 04:16:36.341294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.098  [2024-12-09 04:16:36.341329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.098  qpair failed and we were unable to recover it.
00:26:08.098  [2024-12-09 04:16:36.341425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.098  [2024-12-09 04:16:36.341455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.098  qpair failed and we were unable to recover it.
00:26:08.098  [2024-12-09 04:16:36.341579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.098  [2024-12-09 04:16:36.341607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.098  qpair failed and we were unable to recover it.
00:26:08.098  [2024-12-09 04:16:36.341754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.098  [2024-12-09 04:16:36.341782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.098  qpair failed and we were unable to recover it.
00:26:08.098  [2024-12-09 04:16:36.341864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.099  [2024-12-09 04:16:36.341891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.099  qpair failed and we were unable to recover it.
00:26:08.099  [2024-12-09 04:16:36.342011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.099  [2024-12-09 04:16:36.342040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.099  qpair failed and we were unable to recover it.
00:26:08.099  [2024-12-09 04:16:36.342186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.099  [2024-12-09 04:16:36.342217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.099  qpair failed and we were unable to recover it.
00:26:08.099  [2024-12-09 04:16:36.342330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.099  [2024-12-09 04:16:36.342373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.099  qpair failed and we were unable to recover it.
00:26:08.099  [2024-12-09 04:16:36.342476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.099  [2024-12-09 04:16:36.342519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.099  qpair failed and we were unable to recover it.
00:26:08.099  [2024-12-09 04:16:36.342617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.099  [2024-12-09 04:16:36.342647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.099  qpair failed and we were unable to recover it.
00:26:08.099  [2024-12-09 04:16:36.342823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.099  [2024-12-09 04:16:36.342875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.099  qpair failed and we were unable to recover it.
00:26:08.099  [2024-12-09 04:16:36.343086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.099  [2024-12-09 04:16:36.343142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.099  qpair failed and we were unable to recover it.
00:26:08.099  [2024-12-09 04:16:36.343254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.099  [2024-12-09 04:16:36.343299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.099  qpair failed and we were unable to recover it.
00:26:08.099  [2024-12-09 04:16:36.343415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.099  [2024-12-09 04:16:36.343443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.099  qpair failed and we were unable to recover it.
00:26:08.099  [2024-12-09 04:16:36.343595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.099  [2024-12-09 04:16:36.343622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.099  qpair failed and we were unable to recover it.
00:26:08.099  [2024-12-09 04:16:36.343762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.099  [2024-12-09 04:16:36.343808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.099  qpair failed and we were unable to recover it.
00:26:08.099  [2024-12-09 04:16:36.343939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.099  [2024-12-09 04:16:36.343987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.099  qpair failed and we were unable to recover it.
00:26:08.099  [2024-12-09 04:16:36.344097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.099  [2024-12-09 04:16:36.344125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.099  qpair failed and we were unable to recover it.
00:26:08.099  [2024-12-09 04:16:36.344220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.099  [2024-12-09 04:16:36.344249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.099  qpair failed and we were unable to recover it.
00:26:08.099  [2024-12-09 04:16:36.344398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.099  [2024-12-09 04:16:36.344441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.099  qpair failed and we were unable to recover it.
00:26:08.099  [2024-12-09 04:16:36.344585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.099  [2024-12-09 04:16:36.344627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.099  qpair failed and we were unable to recover it.
00:26:08.099  [2024-12-09 04:16:36.344733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.099  [2024-12-09 04:16:36.344763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.099  qpair failed and we were unable to recover it.
00:26:08.099  [2024-12-09 04:16:36.344851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.099  [2024-12-09 04:16:36.344879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.099  qpair failed and we were unable to recover it.
00:26:08.099  [2024-12-09 04:16:36.344992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.099  [2024-12-09 04:16:36.345020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.099  qpair failed and we were unable to recover it.
00:26:08.099  [2024-12-09 04:16:36.345130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.099  [2024-12-09 04:16:36.345173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.099  qpair failed and we were unable to recover it.
00:26:08.099  [2024-12-09 04:16:36.345336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.099  [2024-12-09 04:16:36.345366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.099  qpair failed and we were unable to recover it.
00:26:08.099  [2024-12-09 04:16:36.345465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.099  [2024-12-09 04:16:36.345493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.099  qpair failed and we were unable to recover it.
00:26:08.099  [2024-12-09 04:16:36.345643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.099  [2024-12-09 04:16:36.345670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.099  qpair failed and we were unable to recover it.
00:26:08.099  [2024-12-09 04:16:36.345791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.099  [2024-12-09 04:16:36.345820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.099  qpair failed and we were unable to recover it.
00:26:08.099  [2024-12-09 04:16:36.345957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.099  [2024-12-09 04:16:36.346005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.099  qpair failed and we were unable to recover it.
00:26:08.099  [2024-12-09 04:16:36.346153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.099  [2024-12-09 04:16:36.346181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.099  qpair failed and we were unable to recover it.
00:26:08.099  [2024-12-09 04:16:36.346280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.099  [2024-12-09 04:16:36.346309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.099  qpair failed and we were unable to recover it.
00:26:08.099  [2024-12-09 04:16:36.346455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.099  [2024-12-09 04:16:36.346483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.099  qpair failed and we were unable to recover it.
00:26:08.099  [2024-12-09 04:16:36.346642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.099  [2024-12-09 04:16:36.346672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.099  qpair failed and we were unable to recover it.
00:26:08.099  [2024-12-09 04:16:36.346797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.099  [2024-12-09 04:16:36.346825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.099  qpair failed and we were unable to recover it.
00:26:08.099  [2024-12-09 04:16:36.346947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.099  [2024-12-09 04:16:36.346976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.099  qpair failed and we were unable to recover it.
00:26:08.099  [2024-12-09 04:16:36.347122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100  [2024-12-09 04:16:36.347149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.100  qpair failed and we were unable to recover it.
00:26:08.100  [2024-12-09 04:16:36.347233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100  [2024-12-09 04:16:36.347262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.100  qpair failed and we were unable to recover it.
00:26:08.100  [2024-12-09 04:16:36.347369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100  [2024-12-09 04:16:36.347397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.100  qpair failed and we were unable to recover it.
00:26:08.100  [2024-12-09 04:16:36.347481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100  [2024-12-09 04:16:36.347514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.100  qpair failed and we were unable to recover it.
00:26:08.100  [2024-12-09 04:16:36.347602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100  [2024-12-09 04:16:36.347630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.100  qpair failed and we were unable to recover it.
00:26:08.100  [2024-12-09 04:16:36.347742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100  [2024-12-09 04:16:36.347770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.100  qpair failed and we were unable to recover it.
00:26:08.100  [2024-12-09 04:16:36.347854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100  [2024-12-09 04:16:36.347883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.100  qpair failed and we were unable to recover it.
00:26:08.100  [2024-12-09 04:16:36.348024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100  [2024-12-09 04:16:36.348068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.100  qpair failed and we were unable to recover it.
00:26:08.100  [2024-12-09 04:16:36.348164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100  [2024-12-09 04:16:36.348207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.100  qpair failed and we were unable to recover it.
00:26:08.100  [2024-12-09 04:16:36.348361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100  [2024-12-09 04:16:36.348392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.100  qpair failed and we were unable to recover it.
00:26:08.100  [2024-12-09 04:16:36.348487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100  [2024-12-09 04:16:36.348516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.100  qpair failed and we were unable to recover it.
00:26:08.100  [2024-12-09 04:16:36.348660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100  [2024-12-09 04:16:36.348705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.100  qpair failed and we were unable to recover it.
00:26:08.100  [2024-12-09 04:16:36.348845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100  [2024-12-09 04:16:36.348891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.100  qpair failed and we were unable to recover it.
00:26:08.100  [2024-12-09 04:16:36.349028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100  [2024-12-09 04:16:36.349073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.100  qpair failed and we were unable to recover it.
00:26:08.100  [2024-12-09 04:16:36.349216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100  [2024-12-09 04:16:36.349245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.100  qpair failed and we were unable to recover it.
00:26:08.100  [2024-12-09 04:16:36.349365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100  [2024-12-09 04:16:36.349396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.100  qpair failed and we were unable to recover it.
00:26:08.100  [2024-12-09 04:16:36.349521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100  [2024-12-09 04:16:36.349567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.100  qpair failed and we were unable to recover it.
00:26:08.100  [2024-12-09 04:16:36.349727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100  [2024-12-09 04:16:36.349774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.100  qpair failed and we were unable to recover it.
00:26:08.100  [2024-12-09 04:16:36.349879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100  [2024-12-09 04:16:36.349913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.100  qpair failed and we were unable to recover it.
00:26:08.100  [2024-12-09 04:16:36.350080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100  [2024-12-09 04:16:36.350109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.100  qpair failed and we were unable to recover it.
00:26:08.100  [2024-12-09 04:16:36.350240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100  [2024-12-09 04:16:36.350267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.100  qpair failed and we were unable to recover it.
00:26:08.100  [2024-12-09 04:16:36.350373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100  [2024-12-09 04:16:36.350401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.100  qpair failed and we were unable to recover it.
00:26:08.100  [2024-12-09 04:16:36.350511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100  [2024-12-09 04:16:36.350539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.100  qpair failed and we were unable to recover it.
00:26:08.100  [2024-12-09 04:16:36.350699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100  [2024-12-09 04:16:36.350726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.100  qpair failed and we were unable to recover it.
00:26:08.100  [2024-12-09 04:16:36.350839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100  [2024-12-09 04:16:36.350867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.100  qpair failed and we were unable to recover it.
00:26:08.100  [2024-12-09 04:16:36.351028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100  [2024-12-09 04:16:36.351057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.100  qpair failed and we were unable to recover it.
00:26:08.100  [2024-12-09 04:16:36.351218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100  [2024-12-09 04:16:36.351246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.100  qpair failed and we were unable to recover it.
00:26:08.100  [2024-12-09 04:16:36.351388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100  [2024-12-09 04:16:36.351419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.100  qpair failed and we were unable to recover it.
00:26:08.100  [2024-12-09 04:16:36.351542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100  [2024-12-09 04:16:36.351570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.100  qpair failed and we were unable to recover it.
00:26:08.100  [2024-12-09 04:16:36.351710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100  [2024-12-09 04:16:36.351738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.100  qpair failed and we were unable to recover it.
00:26:08.100  [2024-12-09 04:16:36.351823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100  [2024-12-09 04:16:36.351853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.100  qpair failed and we were unable to recover it.
00:26:08.100  [2024-12-09 04:16:36.351994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100  [2024-12-09 04:16:36.352036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.100  qpair failed and we were unable to recover it.
00:26:08.100  [2024-12-09 04:16:36.352219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101  [2024-12-09 04:16:36.352268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.101  qpair failed and we were unable to recover it.
00:26:08.101  [2024-12-09 04:16:36.352409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101  [2024-12-09 04:16:36.352438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.101  qpair failed and we were unable to recover it.
00:26:08.101  [2024-12-09 04:16:36.352551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101  [2024-12-09 04:16:36.352579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.101  qpair failed and we were unable to recover it.
00:26:08.101  [2024-12-09 04:16:36.352665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101  [2024-12-09 04:16:36.352693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.101  qpair failed and we were unable to recover it.
00:26:08.101  [2024-12-09 04:16:36.352813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101  [2024-12-09 04:16:36.352844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101  qpair failed and we were unable to recover it.
00:26:08.101  [2024-12-09 04:16:36.352962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101  [2024-12-09 04:16:36.352991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101  qpair failed and we were unable to recover it.
00:26:08.101  [2024-12-09 04:16:36.353105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101  [2024-12-09 04:16:36.353133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101  qpair failed and we were unable to recover it.
00:26:08.101  [2024-12-09 04:16:36.353252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101  [2024-12-09 04:16:36.353289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101  qpair failed and we were unable to recover it.
00:26:08.101  [2024-12-09 04:16:36.353376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101  [2024-12-09 04:16:36.353405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101  qpair failed and we were unable to recover it.
00:26:08.101  [2024-12-09 04:16:36.353529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101  [2024-12-09 04:16:36.353558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101  qpair failed and we were unable to recover it.
00:26:08.101  [2024-12-09 04:16:36.353680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101  [2024-12-09 04:16:36.353709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101  qpair failed and we were unable to recover it.
00:26:08.101  [2024-12-09 04:16:36.353824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101  [2024-12-09 04:16:36.353861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101  qpair failed and we were unable to recover it.
00:26:08.101  [2024-12-09 04:16:36.353984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101  [2024-12-09 04:16:36.354013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101  qpair failed and we were unable to recover it.
00:26:08.101  [2024-12-09 04:16:36.354164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101  [2024-12-09 04:16:36.354192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101  qpair failed and we were unable to recover it.
00:26:08.101  [2024-12-09 04:16:36.354305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101  [2024-12-09 04:16:36.354334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101  qpair failed and we were unable to recover it.
00:26:08.101  [2024-12-09 04:16:36.354424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101  [2024-12-09 04:16:36.354453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101  qpair failed and we were unable to recover it.
00:26:08.101  [2024-12-09 04:16:36.354540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101  [2024-12-09 04:16:36.354569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101  qpair failed and we were unable to recover it.
00:26:08.101  [2024-12-09 04:16:36.354719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101  [2024-12-09 04:16:36.354747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101  qpair failed and we were unable to recover it.
00:26:08.101  [2024-12-09 04:16:36.354840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101  [2024-12-09 04:16:36.354869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101  qpair failed and we were unable to recover it.
00:26:08.101  [2024-12-09 04:16:36.354960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101  [2024-12-09 04:16:36.354988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101  qpair failed and we were unable to recover it.
00:26:08.101  [2024-12-09 04:16:36.355105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101  [2024-12-09 04:16:36.355133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101  qpair failed and we were unable to recover it.
00:26:08.101  [2024-12-09 04:16:36.355252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101  [2024-12-09 04:16:36.355292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101  qpair failed and we were unable to recover it.
00:26:08.101  [2024-12-09 04:16:36.355412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101  [2024-12-09 04:16:36.355442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101  qpair failed and we were unable to recover it.
00:26:08.101  [2024-12-09 04:16:36.355589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101  [2024-12-09 04:16:36.355617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101  qpair failed and we were unable to recover it.
00:26:08.101  [2024-12-09 04:16:36.355706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101  [2024-12-09 04:16:36.355734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101  qpair failed and we were unable to recover it.
00:26:08.101  [2024-12-09 04:16:36.355863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101  [2024-12-09 04:16:36.355892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101  qpair failed and we were unable to recover it.
00:26:08.101  [2024-12-09 04:16:36.356019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101  [2024-12-09 04:16:36.356049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101  qpair failed and we were unable to recover it.
00:26:08.101  [2024-12-09 04:16:36.356196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101  [2024-12-09 04:16:36.356225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101  qpair failed and we were unable to recover it.
00:26:08.101  [2024-12-09 04:16:36.356382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101  [2024-12-09 04:16:36.356412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101  qpair failed and we were unable to recover it.
00:26:08.101  [2024-12-09 04:16:36.356538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101  [2024-12-09 04:16:36.356585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.101  qpair failed and we were unable to recover it.
00:26:08.101  [2024-12-09 04:16:36.356701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101  [2024-12-09 04:16:36.356730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.101  qpair failed and we were unable to recover it.
00:26:08.102  [2024-12-09 04:16:36.356856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102  [2024-12-09 04:16:36.356884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.102  qpair failed and we were unable to recover it.
00:26:08.102  [2024-12-09 04:16:36.357002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102  [2024-12-09 04:16:36.357031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.102  qpair failed and we were unable to recover it.
00:26:08.102  [2024-12-09 04:16:36.357148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102  [2024-12-09 04:16:36.357177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.102  qpair failed and we were unable to recover it.
00:26:08.102  [2024-12-09 04:16:36.357305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102  [2024-12-09 04:16:36.357334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.102  qpair failed and we were unable to recover it.
00:26:08.102  [2024-12-09 04:16:36.357451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102  [2024-12-09 04:16:36.357480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.102  qpair failed and we were unable to recover it.
00:26:08.102  [2024-12-09 04:16:36.357591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102  [2024-12-09 04:16:36.357619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.102  qpair failed and we were unable to recover it.
00:26:08.102  [2024-12-09 04:16:36.357734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102  [2024-12-09 04:16:36.357763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.102  qpair failed and we were unable to recover it.
00:26:08.102  [2024-12-09 04:16:36.357931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102  [2024-12-09 04:16:36.357961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.102  qpair failed and we were unable to recover it.
00:26:08.102  [2024-12-09 04:16:36.358066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102  [2024-12-09 04:16:36.358095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.102  qpair failed and we were unable to recover it.
00:26:08.102  [2024-12-09 04:16:36.358183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102  [2024-12-09 04:16:36.358212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.102  qpair failed and we were unable to recover it.
00:26:08.102  [2024-12-09 04:16:36.358333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102  [2024-12-09 04:16:36.358363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.102  qpair failed and we were unable to recover it.
00:26:08.102  [2024-12-09 04:16:36.358449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102  [2024-12-09 04:16:36.358478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.102  qpair failed and we were unable to recover it.
00:26:08.102  [2024-12-09 04:16:36.358603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102  [2024-12-09 04:16:36.358632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.102  qpair failed and we were unable to recover it.
00:26:08.102  [2024-12-09 04:16:36.358774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102  [2024-12-09 04:16:36.358803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.102  qpair failed and we were unable to recover it.
00:26:08.102  [2024-12-09 04:16:36.358954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102  [2024-12-09 04:16:36.358983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.102  qpair failed and we were unable to recover it.
00:26:08.102  [2024-12-09 04:16:36.359112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102  [2024-12-09 04:16:36.359140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.102  qpair failed and we were unable to recover it.
00:26:08.102  [2024-12-09 04:16:36.359254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102  [2024-12-09 04:16:36.359289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.102  qpair failed and we were unable to recover it.
00:26:08.102  [2024-12-09 04:16:36.359377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102  [2024-12-09 04:16:36.359406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.102  qpair failed and we were unable to recover it.
00:26:08.102  [2024-12-09 04:16:36.359566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102  [2024-12-09 04:16:36.359609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.102  qpair failed and we were unable to recover it.
00:26:08.102  [2024-12-09 04:16:36.359739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102  [2024-12-09 04:16:36.359792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.102  qpair failed and we were unable to recover it.
00:26:08.102  [2024-12-09 04:16:36.360124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102  [2024-12-09 04:16:36.360204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.102  qpair failed and we were unable to recover it.
00:26:08.102  [2024-12-09 04:16:36.360384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102  [2024-12-09 04:16:36.360413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.102  qpair failed and we were unable to recover it.
00:26:08.102  [2024-12-09 04:16:36.360514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102  [2024-12-09 04:16:36.360542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.102  qpair failed and we were unable to recover it.
00:26:08.102  [2024-12-09 04:16:36.360632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102  [2024-12-09 04:16:36.360662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.102  qpair failed and we were unable to recover it.
00:26:08.102  [2024-12-09 04:16:36.360820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102  [2024-12-09 04:16:36.360848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.102  qpair failed and we were unable to recover it.
00:26:08.102  [2024-12-09 04:16:36.361059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102  [2024-12-09 04:16:36.361094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.102  qpair failed and we were unable to recover it.
00:26:08.102  [2024-12-09 04:16:36.361225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102  [2024-12-09 04:16:36.361254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.102  qpair failed and we were unable to recover it.
00:26:08.102  [2024-12-09 04:16:36.361376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102  [2024-12-09 04:16:36.361405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.102  qpair failed and we were unable to recover it.
00:26:08.102  [2024-12-09 04:16:36.361524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102  [2024-12-09 04:16:36.361552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.102  qpair failed and we were unable to recover it.
00:26:08.102  [2024-12-09 04:16:36.361672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102  [2024-12-09 04:16:36.361701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.102  qpair failed and we were unable to recover it.
00:26:08.102  [2024-12-09 04:16:36.361815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102  [2024-12-09 04:16:36.361843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.102  qpair failed and we were unable to recover it.
00:26:08.103  [2024-12-09 04:16:36.361977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.103  [2024-12-09 04:16:36.362020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.103  qpair failed and we were unable to recover it.
00:26:08.103  [2024-12-09 04:16:36.362155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.103  [2024-12-09 04:16:36.362185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.103  qpair failed and we were unable to recover it.
00:26:08.103  [2024-12-09 04:16:36.362304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.103  [2024-12-09 04:16:36.362334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.103  qpair failed and we were unable to recover it.
00:26:08.103  [2024-12-09 04:16:36.362429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.103  [2024-12-09 04:16:36.362456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.103  qpair failed and we were unable to recover it.
00:26:08.103  [2024-12-09 04:16:36.362609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.103  [2024-12-09 04:16:36.362637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.103  qpair failed and we were unable to recover it.
00:26:08.103  [2024-12-09 04:16:36.362733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.103  [2024-12-09 04:16:36.362760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.103  qpair failed and we were unable to recover it.
00:26:08.103  [2024-12-09 04:16:36.362895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.103  [2024-12-09 04:16:36.362945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.103  qpair failed and we were unable to recover it.
00:26:08.103  [2024-12-09 04:16:36.363094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.103  [2024-12-09 04:16:36.363144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.103  qpair failed and we were unable to recover it.
00:26:08.103  [2024-12-09 04:16:36.363254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.103  [2024-12-09 04:16:36.363290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.103  qpair failed and we were unable to recover it.
00:26:08.103  [2024-12-09 04:16:36.363389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.103  [2024-12-09 04:16:36.363419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.103  qpair failed and we were unable to recover it.
00:26:08.103  [2024-12-09 04:16:36.363520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.103  [2024-12-09 04:16:36.363549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.103  qpair failed and we were unable to recover it.
00:26:08.103  [2024-12-09 04:16:36.363702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.103  [2024-12-09 04:16:36.363731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.103  qpair failed and we were unable to recover it.
00:26:08.103  [2024-12-09 04:16:36.363852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.103  [2024-12-09 04:16:36.363880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.103  qpair failed and we were unable to recover it.
00:26:08.103  [2024-12-09 04:16:36.364001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.103  [2024-12-09 04:16:36.364029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.103  qpair failed and we were unable to recover it.
00:26:08.103  [2024-12-09 04:16:36.364167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.103  [2024-12-09 04:16:36.364210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.103  qpair failed and we were unable to recover it.
00:26:08.103  [2024-12-09 04:16:36.364368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.103  [2024-12-09 04:16:36.364399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.103  qpair failed and we were unable to recover it.
00:26:08.103  [2024-12-09 04:16:36.364509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.103  [2024-12-09 04:16:36.364551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.103  qpair failed and we were unable to recover it.
00:26:08.103  [2024-12-09 04:16:36.364694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.103  [2024-12-09 04:16:36.364754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.103  qpair failed and we were unable to recover it.
00:26:08.103  [2024-12-09 04:16:36.364971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.103  [2024-12-09 04:16:36.365036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.103  qpair failed and we were unable to recover it.
00:26:08.103  [2024-12-09 04:16:36.365260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.103  [2024-12-09 04:16:36.365300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.103  qpair failed and we were unable to recover it.
00:26:08.103  [2024-12-09 04:16:36.365438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.103  [2024-12-09 04:16:36.365466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.103  qpair failed and we were unable to recover it.
00:26:08.103  [2024-12-09 04:16:36.365586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.103  [2024-12-09 04:16:36.365630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.103  qpair failed and we were unable to recover it.
00:26:08.103  [2024-12-09 04:16:36.365841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.103  [2024-12-09 04:16:36.365875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.103  qpair failed and we were unable to recover it.
00:26:08.103  [2024-12-09 04:16:36.365982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.103  [2024-12-09 04:16:36.366030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.103  qpair failed and we were unable to recover it.
00:26:08.103  [2024-12-09 04:16:36.366139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.103  [2024-12-09 04:16:36.366173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.103  qpair failed and we were unable to recover it.
00:26:08.103  [2024-12-09 04:16:36.366313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.104  [2024-12-09 04:16:36.366358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.104  qpair failed and we were unable to recover it.
00:26:08.104  [2024-12-09 04:16:36.366447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.104  [2024-12-09 04:16:36.366475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.104  qpair failed and we were unable to recover it.
00:26:08.104  [2024-12-09 04:16:36.366615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.104  [2024-12-09 04:16:36.366648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.104  qpair failed and we were unable to recover it.
00:26:08.104  [2024-12-09 04:16:36.366873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.104  [2024-12-09 04:16:36.366937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.104  qpair failed and we were unable to recover it.
00:26:08.104  [2024-12-09 04:16:36.367199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.104  [2024-12-09 04:16:36.367232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.104  qpair failed and we were unable to recover it.
00:26:08.104  [2024-12-09 04:16:36.367397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.104  [2024-12-09 04:16:36.367425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.104  qpair failed and we were unable to recover it.
00:26:08.104  [2024-12-09 04:16:36.367520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.104  [2024-12-09 04:16:36.367564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.104  qpair failed and we were unable to recover it.
00:26:08.104  [2024-12-09 04:16:36.367705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.104  [2024-12-09 04:16:36.367749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.104  qpair failed and we were unable to recover it.
00:26:08.104  [2024-12-09 04:16:36.367974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.104  [2024-12-09 04:16:36.368008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.104  qpair failed and we were unable to recover it.
00:26:08.104  [2024-12-09 04:16:36.368168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.104  [2024-12-09 04:16:36.368202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.104  qpair failed and we were unable to recover it.
00:26:08.104  [2024-12-09 04:16:36.368383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.104  [2024-12-09 04:16:36.368427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.104  qpair failed and we were unable to recover it.
00:26:08.104  [2024-12-09 04:16:36.368535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.104  [2024-12-09 04:16:36.368567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.104  qpair failed and we were unable to recover it.
00:26:08.104  [2024-12-09 04:16:36.368677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.104  [2024-12-09 04:16:36.368707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.104  qpair failed and we were unable to recover it.
00:26:08.104  [2024-12-09 04:16:36.368812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.104  [2024-12-09 04:16:36.368841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.104  qpair failed and we were unable to recover it.
00:26:08.104  [2024-12-09 04:16:36.368939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.104  [2024-12-09 04:16:36.368968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.104  qpair failed and we were unable to recover it.
00:26:08.104  [2024-12-09 04:16:36.369109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.104  [2024-12-09 04:16:36.369139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.104  qpair failed and we were unable to recover it.
00:26:08.104  [2024-12-09 04:16:36.369285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.104  [2024-12-09 04:16:36.369314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.104  qpair failed and we were unable to recover it.
00:26:08.104  [2024-12-09 04:16:36.369460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.104  [2024-12-09 04:16:36.369490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.104  qpair failed and we were unable to recover it.
00:26:08.104  [2024-12-09 04:16:36.369630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.104  [2024-12-09 04:16:36.369672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.104  qpair failed and we were unable to recover it.
00:26:08.104  [2024-12-09 04:16:36.369793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.104  [2024-12-09 04:16:36.369822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.104  qpair failed and we were unable to recover it.
00:26:08.104  [2024-12-09 04:16:36.369915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.104  [2024-12-09 04:16:36.369944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.104  qpair failed and we were unable to recover it.
00:26:08.104  [2024-12-09 04:16:36.370063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.104  [2024-12-09 04:16:36.370091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.104  qpair failed and we were unable to recover it.
00:26:08.104  [2024-12-09 04:16:36.370190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.104  [2024-12-09 04:16:36.370218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.104  qpair failed and we were unable to recover it.
00:26:08.104  [2024-12-09 04:16:36.370311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.104  [2024-12-09 04:16:36.370340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.104  qpair failed and we were unable to recover it.
00:26:08.104  [2024-12-09 04:16:36.370487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.104  [2024-12-09 04:16:36.370517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.104  qpair failed and we were unable to recover it.
00:26:08.104  [2024-12-09 04:16:36.370603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.104  [2024-12-09 04:16:36.370632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.104  qpair failed and we were unable to recover it.
00:26:08.104  [2024-12-09 04:16:36.370758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.104  [2024-12-09 04:16:36.370787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.104  qpair failed and we were unable to recover it.
00:26:08.104  [2024-12-09 04:16:36.370963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.104  [2024-12-09 04:16:36.370993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.104  qpair failed and we were unable to recover it.
00:26:08.104  [2024-12-09 04:16:36.371115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.104  [2024-12-09 04:16:36.371157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.104  qpair failed and we were unable to recover it.
00:26:08.104  [2024-12-09 04:16:36.371298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.104  [2024-12-09 04:16:36.371326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.104  qpair failed and we were unable to recover it.
00:26:08.104  [2024-12-09 04:16:36.371429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.104  [2024-12-09 04:16:36.371459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.104  qpair failed and we were unable to recover it.
00:26:08.104  [2024-12-09 04:16:36.371590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.104  [2024-12-09 04:16:36.371628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.104  qpair failed and we were unable to recover it.
00:26:08.105  [2024-12-09 04:16:36.371823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.105  [2024-12-09 04:16:36.371878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.105  qpair failed and we were unable to recover it.
00:26:08.105  [2024-12-09 04:16:36.371973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.105  [2024-12-09 04:16:36.372002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.105  qpair failed and we were unable to recover it.
00:26:08.105  [2024-12-09 04:16:36.372117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.105  [2024-12-09 04:16:36.372161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.105  qpair failed and we were unable to recover it.
00:26:08.105  [2024-12-09 04:16:36.372240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.105  [2024-12-09 04:16:36.372268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.105  qpair failed and we were unable to recover it.
00:26:08.105  [2024-12-09 04:16:36.372370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.105  [2024-12-09 04:16:36.372398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.105  qpair failed and we were unable to recover it.
00:26:08.105  [2024-12-09 04:16:36.372519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.105  [2024-12-09 04:16:36.372549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.105  qpair failed and we were unable to recover it.
00:26:08.105  [2024-12-09 04:16:36.372689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.105  [2024-12-09 04:16:36.372719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.105  qpair failed and we were unable to recover it.
00:26:08.105  [2024-12-09 04:16:36.372825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.105  [2024-12-09 04:16:36.372870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.105  qpair failed and we were unable to recover it.
00:26:08.105  [2024-12-09 04:16:36.372971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.105  [2024-12-09 04:16:36.373000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.105  qpair failed and we were unable to recover it.
00:26:08.105  [2024-12-09 04:16:36.373162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.105  [2024-12-09 04:16:36.373191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.105  qpair failed and we were unable to recover it.
00:26:08.105  [2024-12-09 04:16:36.373293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.105  [2024-12-09 04:16:36.373322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.105  qpair failed and we were unable to recover it.
00:26:08.105  [2024-12-09 04:16:36.373437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.105  [2024-12-09 04:16:36.373465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.105  qpair failed and we were unable to recover it.
00:26:08.105  [2024-12-09 04:16:36.373628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.105  [2024-12-09 04:16:36.373657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.105  qpair failed and we were unable to recover it.
00:26:08.105  [2024-12-09 04:16:36.373778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.105  [2024-12-09 04:16:36.373807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.105  qpair failed and we were unable to recover it.
00:26:08.105  [2024-12-09 04:16:36.373907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.105  [2024-12-09 04:16:36.373935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.105  qpair failed and we were unable to recover it.
00:26:08.105  [2024-12-09 04:16:36.374063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.105  [2024-12-09 04:16:36.374091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.105  qpair failed and we were unable to recover it.
00:26:08.105  [2024-12-09 04:16:36.374209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.105  [2024-12-09 04:16:36.374236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.105  qpair failed and we were unable to recover it.
00:26:08.105  [2024-12-09 04:16:36.374328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.105  [2024-12-09 04:16:36.374357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.105  qpair failed and we were unable to recover it.
00:26:08.105  [2024-12-09 04:16:36.374449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.105  [2024-12-09 04:16:36.374477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.105  qpair failed and we were unable to recover it.
00:26:08.105  [2024-12-09 04:16:36.374567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.105  [2024-12-09 04:16:36.374596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.105  qpair failed and we were unable to recover it.
00:26:08.105  [2024-12-09 04:16:36.374688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.105  [2024-12-09 04:16:36.374716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.105  qpair failed and we were unable to recover it.
00:26:08.105  [2024-12-09 04:16:36.374815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.105  [2024-12-09 04:16:36.374844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.105  qpair failed and we were unable to recover it.
00:26:08.105  [2024-12-09 04:16:36.374994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.105  [2024-12-09 04:16:36.375022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.105  qpair failed and we were unable to recover it.
00:26:08.105  [2024-12-09 04:16:36.375137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.105  [2024-12-09 04:16:36.375165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.105  qpair failed and we were unable to recover it.
00:26:08.105  [2024-12-09 04:16:36.375263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.105  [2024-12-09 04:16:36.375299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.105  qpair failed and we were unable to recover it.
00:26:08.105  [2024-12-09 04:16:36.375399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.105  [2024-12-09 04:16:36.375428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.105  qpair failed and we were unable to recover it.
00:26:08.105  [2024-12-09 04:16:36.375547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.105  [2024-12-09 04:16:36.375589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.105  qpair failed and we were unable to recover it.
00:26:08.105  [2024-12-09 04:16:36.375700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.105  [2024-12-09 04:16:36.375730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.105  qpair failed and we were unable to recover it.
00:26:08.105  [2024-12-09 04:16:36.375918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.105  [2024-12-09 04:16:36.375947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.105  qpair failed and we were unable to recover it.
00:26:08.105  [2024-12-09 04:16:36.376062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.105  [2024-12-09 04:16:36.376091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.105  qpair failed and we were unable to recover it.
00:26:08.105  [2024-12-09 04:16:36.376185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.105  [2024-12-09 04:16:36.376213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.105  qpair failed and we were unable to recover it.
00:26:08.105  [2024-12-09 04:16:36.376327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.106  [2024-12-09 04:16:36.376355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.106  qpair failed and we were unable to recover it.
00:26:08.106  [2024-12-09 04:16:36.376499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.106  [2024-12-09 04:16:36.376527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.106  qpair failed and we were unable to recover it.
00:26:08.106  [2024-12-09 04:16:36.376643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.106  [2024-12-09 04:16:36.376677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.106  qpair failed and we were unable to recover it.
00:26:08.106  [2024-12-09 04:16:36.376867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.106  [2024-12-09 04:16:36.376901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.106  qpair failed and we were unable to recover it.
00:26:08.106  [2024-12-09 04:16:36.377069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.106  [2024-12-09 04:16:36.377144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.106  qpair failed and we were unable to recover it.
00:26:08.106  [2024-12-09 04:16:36.377364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.106  [2024-12-09 04:16:36.377393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.106  qpair failed and we were unable to recover it.
00:26:08.106  [2024-12-09 04:16:36.377541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.106  [2024-12-09 04:16:36.377568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.106  qpair failed and we were unable to recover it.
00:26:08.106  [2024-12-09 04:16:36.377736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.106  [2024-12-09 04:16:36.377770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.106  qpair failed and we were unable to recover it.
00:26:08.106  [2024-12-09 04:16:36.377912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.106  [2024-12-09 04:16:36.377959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.106  qpair failed and we were unable to recover it.
00:26:08.106  [2024-12-09 04:16:36.378106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.106  [2024-12-09 04:16:36.378159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.106  qpair failed and we were unable to recover it.
00:26:08.106  [2024-12-09 04:16:36.378353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.106  [2024-12-09 04:16:36.378381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.106  qpair failed and we were unable to recover it.
00:26:08.106  [2024-12-09 04:16:36.378528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.106  [2024-12-09 04:16:36.378556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.106  qpair failed and we were unable to recover it.
00:26:08.106  [2024-12-09 04:16:36.378769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.106  [2024-12-09 04:16:36.378802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.106  qpair failed and we were unable to recover it.
00:26:08.106  [2024-12-09 04:16:36.378937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.106  [2024-12-09 04:16:36.378986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.106  qpair failed and we were unable to recover it.
00:26:08.106  [2024-12-09 04:16:36.379130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.106  [2024-12-09 04:16:36.379164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.106  qpair failed and we were unable to recover it.
00:26:08.106  [2024-12-09 04:16:36.379332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.106  [2024-12-09 04:16:36.379395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.106  qpair failed and we were unable to recover it.
00:26:08.106  [2024-12-09 04:16:36.379529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.106  [2024-12-09 04:16:36.379561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.106  qpair failed and we were unable to recover it.
00:26:08.106  [2024-12-09 04:16:36.379723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.106  [2024-12-09 04:16:36.379753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.106  qpair failed and we were unable to recover it.
00:26:08.106  [2024-12-09 04:16:36.379848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.106  [2024-12-09 04:16:36.379877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.106  qpair failed and we were unable to recover it.
00:26:08.106  [2024-12-09 04:16:36.380024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.106  [2024-12-09 04:16:36.380070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.106  qpair failed and we were unable to recover it.
00:26:08.106  [2024-12-09 04:16:36.380309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.106  [2024-12-09 04:16:36.380340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.106  qpair failed and we were unable to recover it.
00:26:08.106  [2024-12-09 04:16:36.380468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.106  [2024-12-09 04:16:36.380497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.106  qpair failed and we were unable to recover it.
00:26:08.106  [2024-12-09 04:16:36.380601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.106  [2024-12-09 04:16:36.380645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.106  qpair failed and we were unable to recover it.
00:26:08.106  [2024-12-09 04:16:36.380807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.106  [2024-12-09 04:16:36.380860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.106  qpair failed and we were unable to recover it.
00:26:08.106  [2024-12-09 04:16:36.381028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.106  [2024-12-09 04:16:36.381064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.106  qpair failed and we were unable to recover it.
00:26:08.106  [2024-12-09 04:16:36.381234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.106  [2024-12-09 04:16:36.381302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.106  qpair failed and we were unable to recover it.
00:26:08.106  [2024-12-09 04:16:36.381453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.106  [2024-12-09 04:16:36.381482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.106  qpair failed and we were unable to recover it.
00:26:08.106  [2024-12-09 04:16:36.381632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.106  [2024-12-09 04:16:36.381661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.106  qpair failed and we were unable to recover it.
00:26:08.106  [2024-12-09 04:16:36.381787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.106  [2024-12-09 04:16:36.381816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.106  qpair failed and we were unable to recover it.
00:26:08.106  [2024-12-09 04:16:36.381973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.106  [2024-12-09 04:16:36.382037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.106  qpair failed and we were unable to recover it.
00:26:08.106  [2024-12-09 04:16:36.382307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.106  [2024-12-09 04:16:36.382358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.106  qpair failed and we were unable to recover it.
00:26:08.106  [2024-12-09 04:16:36.382473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.107  [2024-12-09 04:16:36.382502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.107  qpair failed and we were unable to recover it.
00:26:08.110  [... the preceding three-line failure cycle (posix_sock_create connect() errno 111, nvme_tcp_qpair_connect_sock error for tqpair=0x1818fa0 at 10.0.0.2:4420, "qpair failed and we were unable to recover it.") repeats verbatim ~100 more times between 04:16:36.382 and 04:16:36.406 ...]
00:26:08.110  qpair failed and we were unable to recover it.
00:26:08.110  [2024-12-09 04:16:36.406668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.110  [2024-12-09 04:16:36.406732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.110  qpair failed and we were unable to recover it.
00:26:08.110  [2024-12-09 04:16:36.406954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.110  [2024-12-09 04:16:36.407018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.110  qpair failed and we were unable to recover it.
00:26:08.110  [2024-12-09 04:16:36.407243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.110  [2024-12-09 04:16:36.407500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.110  qpair failed and we were unable to recover it.
00:26:08.110  [2024-12-09 04:16:36.407729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.110  [2024-12-09 04:16:36.407797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.110  qpair failed and we were unable to recover it.
00:26:08.110  [2024-12-09 04:16:36.408047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.110  [2024-12-09 04:16:36.408114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.110  qpair failed and we were unable to recover it.
00:26:08.110  [2024-12-09 04:16:36.408412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.110  [2024-12-09 04:16:36.408447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.110  qpair failed and we were unable to recover it.
00:26:08.110  [2024-12-09 04:16:36.408581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.110  [2024-12-09 04:16:36.408616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.110  qpair failed and we were unable to recover it.
00:26:08.110  [2024-12-09 04:16:36.408759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.110  [2024-12-09 04:16:36.408793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.110  qpair failed and we were unable to recover it.
00:26:08.110  [2024-12-09 04:16:36.409099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.110  [2024-12-09 04:16:36.409163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.110  qpair failed and we were unable to recover it.
00:26:08.110  [2024-12-09 04:16:36.409439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.110  [2024-12-09 04:16:36.409505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.110  qpair failed and we were unable to recover it.
00:26:08.110  [2024-12-09 04:16:36.409805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.110  [2024-12-09 04:16:36.409869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.110  qpair failed and we were unable to recover it.
00:26:08.110  [2024-12-09 04:16:36.410158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.110  [2024-12-09 04:16:36.410221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.110  qpair failed and we were unable to recover it.
00:26:08.110  [2024-12-09 04:16:36.410529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.110  [2024-12-09 04:16:36.410603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.110  qpair failed and we were unable to recover it.
00:26:08.110  [2024-12-09 04:16:36.410859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.110  [2024-12-09 04:16:36.410923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.110  qpair failed and we were unable to recover it.
00:26:08.110  [2024-12-09 04:16:36.411171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.111  [2024-12-09 04:16:36.411235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.111  qpair failed and we were unable to recover it.
00:26:08.111  [2024-12-09 04:16:36.411502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.111  [2024-12-09 04:16:36.411569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.111  qpair failed and we were unable to recover it.
00:26:08.111  [2024-12-09 04:16:36.411822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.111  [2024-12-09 04:16:36.411890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.111  qpair failed and we were unable to recover it.
00:26:08.111  [2024-12-09 04:16:36.412108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.111  [2024-12-09 04:16:36.412173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.111  qpair failed and we were unable to recover it.
00:26:08.111  [2024-12-09 04:16:36.412384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.111  [2024-12-09 04:16:36.412450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.111  qpair failed and we were unable to recover it.
00:26:08.111  [2024-12-09 04:16:36.412692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.111  [2024-12-09 04:16:36.412758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.111  qpair failed and we were unable to recover it.
00:26:08.111  [2024-12-09 04:16:36.413006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.111  [2024-12-09 04:16:36.413070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.111  qpair failed and we were unable to recover it.
00:26:08.111  [2024-12-09 04:16:36.413327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.111  [2024-12-09 04:16:36.413394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.111  qpair failed and we were unable to recover it.
00:26:08.111  [2024-12-09 04:16:36.413698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.111  [2024-12-09 04:16:36.413761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.111  qpair failed and we were unable to recover it.
00:26:08.111  [2024-12-09 04:16:36.413990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.111  [2024-12-09 04:16:36.414053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.111  qpair failed and we were unable to recover it.
00:26:08.111  [2024-12-09 04:16:36.414314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.111  [2024-12-09 04:16:36.414379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.111  qpair failed and we were unable to recover it.
00:26:08.111  [2024-12-09 04:16:36.414677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.111  [2024-12-09 04:16:36.414750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.111  qpair failed and we were unable to recover it.
00:26:08.111  [2024-12-09 04:16:36.415035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.111  [2024-12-09 04:16:36.415098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.111  qpair failed and we were unable to recover it.
00:26:08.111  [2024-12-09 04:16:36.415397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.111  [2024-12-09 04:16:36.415472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.111  qpair failed and we were unable to recover it.
00:26:08.111  [2024-12-09 04:16:36.415717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.111  [2024-12-09 04:16:36.415781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.111  qpair failed and we were unable to recover it.
00:26:08.111  [2024-12-09 04:16:36.416020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.111  [2024-12-09 04:16:36.416084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.111  qpair failed and we were unable to recover it.
00:26:08.111  [2024-12-09 04:16:36.416331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.111  [2024-12-09 04:16:36.416384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.111  qpair failed and we were unable to recover it.
00:26:08.111  [2024-12-09 04:16:36.416501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.111  [2024-12-09 04:16:36.416536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.111  qpair failed and we were unable to recover it.
00:26:08.111  [2024-12-09 04:16:36.416753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.111  [2024-12-09 04:16:36.416818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.111  qpair failed and we were unable to recover it.
00:26:08.111  [2024-12-09 04:16:36.417004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.111  [2024-12-09 04:16:36.417073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.111  qpair failed and we were unable to recover it.
00:26:08.111  [2024-12-09 04:16:36.417294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.111  [2024-12-09 04:16:36.417362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.111  qpair failed and we were unable to recover it.
00:26:08.111  [2024-12-09 04:16:36.417631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.111  [2024-12-09 04:16:36.417705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.111  qpair failed and we were unable to recover it.
00:26:08.111  [2024-12-09 04:16:36.417949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.111  [2024-12-09 04:16:36.418012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.111  qpair failed and we were unable to recover it.
00:26:08.111  [2024-12-09 04:16:36.418191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.111  [2024-12-09 04:16:36.418255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.111  qpair failed and we were unable to recover it.
00:26:08.111  [2024-12-09 04:16:36.418534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.111  [2024-12-09 04:16:36.418598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.111  qpair failed and we were unable to recover it.
00:26:08.111  [2024-12-09 04:16:36.418884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.111  [2024-12-09 04:16:36.418948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.111  qpair failed and we were unable to recover it.
00:26:08.111  [2024-12-09 04:16:36.419243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.111  [2024-12-09 04:16:36.419325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.111  qpair failed and we were unable to recover it.
00:26:08.111  [2024-12-09 04:16:36.419581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.111  [2024-12-09 04:16:36.419645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.111  qpair failed and we were unable to recover it.
00:26:08.111  [2024-12-09 04:16:36.419883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.111  [2024-12-09 04:16:36.419948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.111  qpair failed and we were unable to recover it.
00:26:08.111  [2024-12-09 04:16:36.420239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.111  [2024-12-09 04:16:36.420328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.111  qpair failed and we were unable to recover it.
00:26:08.111  [2024-12-09 04:16:36.420561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.111  [2024-12-09 04:16:36.420596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.111  qpair failed and we were unable to recover it.
00:26:08.111  [2024-12-09 04:16:36.420712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.112  [2024-12-09 04:16:36.420746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.112  qpair failed and we were unable to recover it.
00:26:08.112  [2024-12-09 04:16:36.420847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.112  [2024-12-09 04:16:36.420881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.112  qpair failed and we were unable to recover it.
00:26:08.112  [2024-12-09 04:16:36.421027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.112  [2024-12-09 04:16:36.421063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.112  qpair failed and we were unable to recover it.
00:26:08.112  [2024-12-09 04:16:36.421321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.112  [2024-12-09 04:16:36.421385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.112  qpair failed and we were unable to recover it.
00:26:08.112  [2024-12-09 04:16:36.421690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.112  [2024-12-09 04:16:36.421754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.112  qpair failed and we were unable to recover it.
00:26:08.112  [2024-12-09 04:16:36.422054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.112  [2024-12-09 04:16:36.422120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.112  qpair failed and we were unable to recover it.
00:26:08.112  [2024-12-09 04:16:36.422339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.112  [2024-12-09 04:16:36.422402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.112  qpair failed and we were unable to recover it.
00:26:08.112  [2024-12-09 04:16:36.422610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.112  [2024-12-09 04:16:36.422674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.112  qpair failed and we were unable to recover it.
00:26:08.112  [2024-12-09 04:16:36.422963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.112  [2024-12-09 04:16:36.423028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.112  qpair failed and we were unable to recover it.
00:26:08.112  [2024-12-09 04:16:36.423327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.112  [2024-12-09 04:16:36.423393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.112  qpair failed and we were unable to recover it.
00:26:08.112  [2024-12-09 04:16:36.423620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.112  [2024-12-09 04:16:36.423684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.112  qpair failed and we were unable to recover it.
00:26:08.112  [2024-12-09 04:16:36.423950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.112  [2024-12-09 04:16:36.423983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.112  qpair failed and we were unable to recover it.
00:26:08.112  [2024-12-09 04:16:36.424121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.112  [2024-12-09 04:16:36.424155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.112  qpair failed and we were unable to recover it.
00:26:08.112  [2024-12-09 04:16:36.424323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.112  [2024-12-09 04:16:36.424357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.112  qpair failed and we were unable to recover it.
00:26:08.112  [2024-12-09 04:16:36.424646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.112  [2024-12-09 04:16:36.424708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.112  qpair failed and we were unable to recover it.
00:26:08.112  [2024-12-09 04:16:36.425001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.112  [2024-12-09 04:16:36.425034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.112  qpair failed and we were unable to recover it.
00:26:08.112  [2024-12-09 04:16:36.425148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.112  [2024-12-09 04:16:36.425184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.112  qpair failed and we were unable to recover it.
00:26:08.112  [2024-12-09 04:16:36.425441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.112  [2024-12-09 04:16:36.425480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.112  qpair failed and we were unable to recover it.
00:26:08.112  [2024-12-09 04:16:36.425596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.112  [2024-12-09 04:16:36.425630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.112  qpair failed and we were unable to recover it.
00:26:08.112  [2024-12-09 04:16:36.425766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.112  [2024-12-09 04:16:36.425800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.112  qpair failed and we were unable to recover it.
00:26:08.112  [2024-12-09 04:16:36.425937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.112  [2024-12-09 04:16:36.425970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.112  qpair failed and we were unable to recover it.
00:26:08.112  [2024-12-09 04:16:36.426214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.112  [2024-12-09 04:16:36.426290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.112  qpair failed and we were unable to recover it.
00:26:08.112  [2024-12-09 04:16:36.426501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.112  [2024-12-09 04:16:36.426565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.112  qpair failed and we were unable to recover it.
00:26:08.112  [2024-12-09 04:16:36.426850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.112  [2024-12-09 04:16:36.426913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.112  qpair failed and we were unable to recover it.
00:26:08.112  [2024-12-09 04:16:36.427167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.112  [2024-12-09 04:16:36.427231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.112  qpair failed and we were unable to recover it.
00:26:08.112  [2024-12-09 04:16:36.427501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.112  [2024-12-09 04:16:36.427534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.112  qpair failed and we were unable to recover it.
00:26:08.112  [2024-12-09 04:16:36.427671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.112  [2024-12-09 04:16:36.427706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.112  qpair failed and we were unable to recover it.
00:26:08.112  [2024-12-09 04:16:36.427909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.112  [2024-12-09 04:16:36.427974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.112  qpair failed and we were unable to recover it.
00:26:08.112  [2024-12-09 04:16:36.428148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.112  [2024-12-09 04:16:36.428211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.112  qpair failed and we were unable to recover it.
00:26:08.112  [2024-12-09 04:16:36.428517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.112  [2024-12-09 04:16:36.428593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.112  qpair failed and we were unable to recover it.
00:26:08.112  [2024-12-09 04:16:36.428893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.112  [2024-12-09 04:16:36.428957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.112  qpair failed and we were unable to recover it.
00:26:08.112  [2024-12-09 04:16:36.429255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.113  [2024-12-09 04:16:36.429335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.113  qpair failed and we were unable to recover it.
00:26:08.113  [2024-12-09 04:16:36.429572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.113  [2024-12-09 04:16:36.429606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.113  qpair failed and we were unable to recover it.
00:26:08.113  [2024-12-09 04:16:36.429747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.113  [2024-12-09 04:16:36.429781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.113  qpair failed and we were unable to recover it.
00:26:08.113  [2024-12-09 04:16:36.429886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.113  [2024-12-09 04:16:36.429919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.113  qpair failed and we were unable to recover it.
00:26:08.113  [2024-12-09 04:16:36.430135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.113  [2024-12-09 04:16:36.430214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.113  qpair failed and we were unable to recover it.
00:26:08.113  [2024-12-09 04:16:36.430516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.113  [2024-12-09 04:16:36.430592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.113  qpair failed and we were unable to recover it.
00:26:08.113  [... preceding connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed" sequence repeats 104 more times for tqpair=0x1818fa0 (addr=10.0.0.2, port=4420) between 04:16:36.430839 and 04:16:36.459782 ...]
00:26:08.116  [2024-12-09 04:16:36.460021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.116  [2024-12-09 04:16:36.460094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.116  qpair failed and we were unable to recover it.
00:26:08.116  [2024-12-09 04:16:36.460335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.116  [2024-12-09 04:16:36.460399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.116  qpair failed and we were unable to recover it.
00:26:08.116  [2024-12-09 04:16:36.460689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.116  [2024-12-09 04:16:36.460757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.116  qpair failed and we were unable to recover it.
00:26:08.116  [2024-12-09 04:16:36.461029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.116  [2024-12-09 04:16:36.461074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.116  qpair failed and we were unable to recover it.
00:26:08.116  [2024-12-09 04:16:36.461207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.116  [2024-12-09 04:16:36.461241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.116  qpair failed and we were unable to recover it.
00:26:08.116  [2024-12-09 04:16:36.461526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.116  [2024-12-09 04:16:36.461590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.116  qpair failed and we were unable to recover it.
00:26:08.116  [2024-12-09 04:16:36.461885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.116  [2024-12-09 04:16:36.461950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.116  qpair failed and we were unable to recover it.
00:26:08.116  [2024-12-09 04:16:36.462244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.116  [2024-12-09 04:16:36.462330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.116  qpair failed and we were unable to recover it.
00:26:08.116  [2024-12-09 04:16:36.462616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.117  [2024-12-09 04:16:36.462650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.117  qpair failed and we were unable to recover it.
00:26:08.117  [2024-12-09 04:16:36.462785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.117  [2024-12-09 04:16:36.462820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.117  qpair failed and we were unable to recover it.
00:26:08.117  [2024-12-09 04:16:36.462934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.117  [2024-12-09 04:16:36.462968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.117  qpair failed and we were unable to recover it.
00:26:08.117  [2024-12-09 04:16:36.463095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.117  [2024-12-09 04:16:36.463134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.117  qpair failed and we were unable to recover it.
00:26:08.117  [2024-12-09 04:16:36.463417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.117  [2024-12-09 04:16:36.463483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.117  qpair failed and we were unable to recover it.
00:26:08.117  [2024-12-09 04:16:36.463677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.117  [2024-12-09 04:16:36.463711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.117  qpair failed and we were unable to recover it.
00:26:08.117  [2024-12-09 04:16:36.463828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.117  [2024-12-09 04:16:36.463861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.117  qpair failed and we were unable to recover it.
00:26:08.117  [2024-12-09 04:16:36.464104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.117  [2024-12-09 04:16:36.464174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.117  qpair failed and we were unable to recover it.
00:26:08.117  [2024-12-09 04:16:36.464436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.117  [2024-12-09 04:16:36.464501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.117  qpair failed and we were unable to recover it.
00:26:08.117  [2024-12-09 04:16:36.464738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.117  [2024-12-09 04:16:36.464803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.117  qpair failed and we were unable to recover it.
00:26:08.117  [2024-12-09 04:16:36.465101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.117  [2024-12-09 04:16:36.465167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.117  qpair failed and we were unable to recover it.
00:26:08.117  [2024-12-09 04:16:36.465378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.117  [2024-12-09 04:16:36.465442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.117  qpair failed and we were unable to recover it.
00:26:08.117  [2024-12-09 04:16:36.465691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.117  [2024-12-09 04:16:36.465755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.117  qpair failed and we were unable to recover it.
00:26:08.117  [2024-12-09 04:16:36.466038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.117  [2024-12-09 04:16:36.466103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.117  qpair failed and we were unable to recover it.
00:26:08.117  [2024-12-09 04:16:36.466386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.117  [2024-12-09 04:16:36.466421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.117  qpair failed and we were unable to recover it.
00:26:08.117  [2024-12-09 04:16:36.466552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.117  [2024-12-09 04:16:36.466586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.117  qpair failed and we were unable to recover it.
00:26:08.117  [2024-12-09 04:16:36.466731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.117  [2024-12-09 04:16:36.466770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.117  qpair failed and we were unable to recover it.
00:26:08.117  [2024-12-09 04:16:36.466873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.117  [2024-12-09 04:16:36.466906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.117  qpair failed and we were unable to recover it.
00:26:08.117  [2024-12-09 04:16:36.467171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.117  [2024-12-09 04:16:36.467235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.117  qpair failed and we were unable to recover it.
00:26:08.117  [2024-12-09 04:16:36.467417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.117  [2024-12-09 04:16:36.467481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.117  qpair failed and we were unable to recover it.
00:26:08.117  [2024-12-09 04:16:36.467699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.117  [2024-12-09 04:16:36.467761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.117  qpair failed and we were unable to recover it.
00:26:08.117  [2024-12-09 04:16:36.468024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.117  [2024-12-09 04:16:36.468088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.117  qpair failed and we were unable to recover it.
00:26:08.117  [2024-12-09 04:16:36.468307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.117  [2024-12-09 04:16:36.468372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.117  qpair failed and we were unable to recover it.
00:26:08.117  [2024-12-09 04:16:36.468659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.117  [2024-12-09 04:16:36.468723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.117  qpair failed and we were unable to recover it.
00:26:08.117  [2024-12-09 04:16:36.469007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.117  [2024-12-09 04:16:36.469041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.117  qpair failed and we were unable to recover it.
00:26:08.117  [2024-12-09 04:16:36.469147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.117  [2024-12-09 04:16:36.469182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.117  qpair failed and we were unable to recover it.
00:26:08.117  [2024-12-09 04:16:36.469397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.117  [2024-12-09 04:16:36.469432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.117  qpair failed and we were unable to recover it.
00:26:08.117  [2024-12-09 04:16:36.469532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.117  [2024-12-09 04:16:36.469566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.117  qpair failed and we were unable to recover it.
00:26:08.117  [2024-12-09 04:16:36.469676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.117  [2024-12-09 04:16:36.469709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.117  qpair failed and we were unable to recover it.
00:26:08.117  [2024-12-09 04:16:36.469960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.117  [2024-12-09 04:16:36.470023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.117  qpair failed and we were unable to recover it.
00:26:08.117  [2024-12-09 04:16:36.470319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.117  [2024-12-09 04:16:36.470386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.117  qpair failed and we were unable to recover it.
00:26:08.117  [2024-12-09 04:16:36.470649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.117  [2024-12-09 04:16:36.470702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.117  qpair failed and we were unable to recover it.
00:26:08.119  qpair failed and we were unable to recover it.
00:26:08.119  [2024-12-09 04:16:36.488239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.119  [2024-12-09 04:16:36.488321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.119  qpair failed and we were unable to recover it.
00:26:08.119  [2024-12-09 04:16:36.488604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.119  [2024-12-09 04:16:36.488687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.119  qpair failed and we were unable to recover it.
00:26:08.119  [2024-12-09 04:16:36.488984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.119  [2024-12-09 04:16:36.489057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.119  qpair failed and we were unable to recover it.
00:26:08.119  [2024-12-09 04:16:36.489336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.119  [2024-12-09 04:16:36.489371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.119  qpair failed and we were unable to recover it.
00:26:08.119  [2024-12-09 04:16:36.489489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.119  [2024-12-09 04:16:36.489540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.119  qpair failed and we were unable to recover it.
00:26:08.119  [2024-12-09 04:16:36.489707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.119  [2024-12-09 04:16:36.489742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.119  qpair failed and we were unable to recover it.
00:26:08.119  [2024-12-09 04:16:36.489847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.119  [2024-12-09 04:16:36.489882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.119  qpair failed and we were unable to recover it.
00:26:08.119  [2024-12-09 04:16:36.490063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.119  [2024-12-09 04:16:36.490099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.119  qpair failed and we were unable to recover it.
00:26:08.119  [2024-12-09 04:16:36.490404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.119  [2024-12-09 04:16:36.490490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.119  qpair failed and we were unable to recover it.
00:26:08.119  [2024-12-09 04:16:36.490755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.119  [2024-12-09 04:16:36.490837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.119  qpair failed and we were unable to recover it.
00:26:08.120  [2024-12-09 04:16:36.491146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.120  [2024-12-09 04:16:36.491212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.120  qpair failed and we were unable to recover it.
00:26:08.120  [2024-12-09 04:16:36.491491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.120  [2024-12-09 04:16:36.491557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.120  qpair failed and we were unable to recover it.
00:26:08.120  [2024-12-09 04:16:36.491876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.120  [2024-12-09 04:16:36.491943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.120  qpair failed and we were unable to recover it.
00:26:08.120  [2024-12-09 04:16:36.492225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.120  [2024-12-09 04:16:36.492318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.120  qpair failed and we were unable to recover it.
00:26:08.120  [2024-12-09 04:16:36.492585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.120  [2024-12-09 04:16:36.492659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.120  qpair failed and we were unable to recover it.
00:26:08.120  [2024-12-09 04:16:36.492933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.120  [2024-12-09 04:16:36.493001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.120  qpair failed and we were unable to recover it.
00:26:08.120  [2024-12-09 04:16:36.493332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.120  [2024-12-09 04:16:36.493411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.120  qpair failed and we were unable to recover it.
00:26:08.120  [2024-12-09 04:16:36.493700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.120  [2024-12-09 04:16:36.493771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.120  qpair failed and we were unable to recover it.
00:26:08.120  [2024-12-09 04:16:36.494004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.120  [2024-12-09 04:16:36.494070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.120  qpair failed and we were unable to recover it.
00:26:08.120  [2024-12-09 04:16:36.494343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.120  [2024-12-09 04:16:36.494422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.120  qpair failed and we were unable to recover it.
00:26:08.120  [2024-12-09 04:16:36.494694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.120  [2024-12-09 04:16:36.494734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.120  qpair failed and we were unable to recover it.
00:26:08.120  [2024-12-09 04:16:36.494952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.120  [2024-12-09 04:16:36.495020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.120  qpair failed and we were unable to recover it.
00:26:08.120  [2024-12-09 04:16:36.495343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.120  [2024-12-09 04:16:36.495422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.120  qpair failed and we were unable to recover it.
00:26:08.120  [2024-12-09 04:16:36.495699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.120  [2024-12-09 04:16:36.495765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.120  qpair failed and we were unable to recover it.
00:26:08.120  [2024-12-09 04:16:36.496092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.120  [2024-12-09 04:16:36.496171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.120  qpair failed and we were unable to recover it.
00:26:08.120  [2024-12-09 04:16:36.496451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.120  [2024-12-09 04:16:36.496497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.120  qpair failed and we were unable to recover it.
00:26:08.120  [2024-12-09 04:16:36.496596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.120  [2024-12-09 04:16:36.496636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.120  qpair failed and we were unable to recover it.
00:26:08.120  [2024-12-09 04:16:36.496789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.120  [2024-12-09 04:16:36.496828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.120  qpair failed and we were unable to recover it.
00:26:08.120  [2024-12-09 04:16:36.497139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.120  [2024-12-09 04:16:36.497211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.120  qpair failed and we were unable to recover it.
00:26:08.120  [2024-12-09 04:16:36.497527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.120  [2024-12-09 04:16:36.497604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.120  qpair failed and we were unable to recover it.
00:26:08.120  [2024-12-09 04:16:36.497892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.120  [2024-12-09 04:16:36.497958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.120  qpair failed and we were unable to recover it.
00:26:08.120  [2024-12-09 04:16:36.498265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.120  [2024-12-09 04:16:36.498348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.120  qpair failed and we were unable to recover it.
00:26:08.120  [2024-12-09 04:16:36.498638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.120  [2024-12-09 04:16:36.498705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.120  qpair failed and we were unable to recover it.
00:26:08.120  [2024-12-09 04:16:36.499064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.120  [2024-12-09 04:16:36.499154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.120  qpair failed and we were unable to recover it.
00:26:08.120  [2024-12-09 04:16:36.499427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.120  [2024-12-09 04:16:36.499494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.120  qpair failed and we were unable to recover it.
00:26:08.120  [2024-12-09 04:16:36.499799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.121  [2024-12-09 04:16:36.499864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.121  qpair failed and we were unable to recover it.
00:26:08.121  [2024-12-09 04:16:36.500114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.121  [2024-12-09 04:16:36.500208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.121  qpair failed and we were unable to recover it.
00:26:08.121  [2024-12-09 04:16:36.500500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.121  [2024-12-09 04:16:36.500580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.121  qpair failed and we were unable to recover it.
00:26:08.121  [2024-12-09 04:16:36.500859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.121  [2024-12-09 04:16:36.500926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.121  qpair failed and we were unable to recover it.
00:26:08.121  [2024-12-09 04:16:36.501151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.121  [2024-12-09 04:16:36.501245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.121  qpair failed and we were unable to recover it.
00:26:08.121  [2024-12-09 04:16:36.501575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.121  [2024-12-09 04:16:36.501644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.121  qpair failed and we were unable to recover it.
00:26:08.121  [2024-12-09 04:16:36.501861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.121  [2024-12-09 04:16:36.501927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.121  qpair failed and we were unable to recover it.
00:26:08.121  [2024-12-09 04:16:36.502186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.121  [2024-12-09 04:16:36.502233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.121  qpair failed and we were unable to recover it.
00:26:08.121  [2024-12-09 04:16:36.502422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.121  [2024-12-09 04:16:36.502484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.121  qpair failed and we were unable to recover it.
00:26:08.121  [2024-12-09 04:16:36.502753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.121  [2024-12-09 04:16:36.502827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.121  qpair failed and we were unable to recover it.
00:26:08.121  [2024-12-09 04:16:36.503131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.121  [2024-12-09 04:16:36.503208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.121  qpair failed and we were unable to recover it.
00:26:08.121  [2024-12-09 04:16:36.503543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.121  [2024-12-09 04:16:36.503611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.121  qpair failed and we were unable to recover it.
00:26:08.121  [2024-12-09 04:16:36.503803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.121  [2024-12-09 04:16:36.503871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.121  qpair failed and we were unable to recover it.
00:26:08.121  [2024-12-09 04:16:36.504091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.121  [2024-12-09 04:16:36.504157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.121  qpair failed and we were unable to recover it.
00:26:08.121  [2024-12-09 04:16:36.504405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.121  [2024-12-09 04:16:36.504441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.121  qpair failed and we were unable to recover it.
00:26:08.121  [2024-12-09 04:16:36.504586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.121  [2024-12-09 04:16:36.504621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.121  qpair failed and we were unable to recover it.
00:26:08.121  [2024-12-09 04:16:36.504885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.121  [2024-12-09 04:16:36.504952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.121  qpair failed and we were unable to recover it.
00:26:08.121  [2024-12-09 04:16:36.505170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.121  [2024-12-09 04:16:36.505235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.121  qpair failed and we were unable to recover it.
00:26:08.121  [2024-12-09 04:16:36.505543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.121  [2024-12-09 04:16:36.505578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.121  qpair failed and we were unable to recover it.
00:26:08.121  [2024-12-09 04:16:36.505731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.121  [2024-12-09 04:16:36.505766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.121  qpair failed and we were unable to recover it.
00:26:08.121  [2024-12-09 04:16:36.505971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.121  [2024-12-09 04:16:36.506048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.121  qpair failed and we were unable to recover it.
00:26:08.121  [2024-12-09 04:16:36.506346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.121  [2024-12-09 04:16:36.506435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.121  qpair failed and we were unable to recover it.
00:26:08.121  [2024-12-09 04:16:36.506735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.121  [2024-12-09 04:16:36.506804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.121  qpair failed and we were unable to recover it.
00:26:08.121  [2024-12-09 04:16:36.507016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.121  [2024-12-09 04:16:36.507084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.121  qpair failed and we were unable to recover it.
00:26:08.121  [2024-12-09 04:16:36.507371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.121  [2024-12-09 04:16:36.507440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.121  qpair failed and we were unable to recover it.
00:26:08.121  [2024-12-09 04:16:36.507765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.121  [2024-12-09 04:16:36.507842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.121  qpair failed and we were unable to recover it.
00:26:08.121  [2024-12-09 04:16:36.508122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.121  [2024-12-09 04:16:36.508178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.121  qpair failed and we were unable to recover it.
00:26:08.121  [2024-12-09 04:16:36.508330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.121  [2024-12-09 04:16:36.508366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.121  qpair failed and we were unable to recover it.
00:26:08.121  [2024-12-09 04:16:36.508544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.121  [2024-12-09 04:16:36.508626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.121  qpair failed and we were unable to recover it.
00:26:08.121  [2024-12-09 04:16:36.508892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.121  [2024-12-09 04:16:36.508957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.121  qpair failed and we were unable to recover it.
00:26:08.121  [2024-12-09 04:16:36.509294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.122  [2024-12-09 04:16:36.509362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.122  qpair failed and we were unable to recover it.
00:26:08.122  [2024-12-09 04:16:36.509624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.122  [2024-12-09 04:16:36.509701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.122  qpair failed and we were unable to recover it.
00:26:08.122  [2024-12-09 04:16:36.510007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.122  [2024-12-09 04:16:36.510086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.122  qpair failed and we were unable to recover it.
00:26:08.122  [2024-12-09 04:16:36.510339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.122  [2024-12-09 04:16:36.510406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.122  qpair failed and we were unable to recover it.
00:26:08.122  [2024-12-09 04:16:36.510661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.122  [2024-12-09 04:16:36.510696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.122  qpair failed and we were unable to recover it.
00:26:08.122  [2024-12-09 04:16:36.510872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.122  [2024-12-09 04:16:36.510965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.122  qpair failed and we were unable to recover it.
00:26:08.122  [2024-12-09 04:16:36.511188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.122  [2024-12-09 04:16:36.511262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.122  qpair failed and we were unable to recover it.
00:26:08.122  [2024-12-09 04:16:36.511606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.122  [2024-12-09 04:16:36.511641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.122  qpair failed and we were unable to recover it.
00:26:08.122  [2024-12-09 04:16:36.511756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.122  [2024-12-09 04:16:36.511792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.122  qpair failed and we were unable to recover it.
00:26:08.122  [2024-12-09 04:16:36.512030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.122  [2024-12-09 04:16:36.512105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.122  qpair failed and we were unable to recover it.
00:26:08.122  [2024-12-09 04:16:36.512376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.122  [2024-12-09 04:16:36.512412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.122  qpair failed and we were unable to recover it.
00:26:08.122  [2024-12-09 04:16:36.512615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.122  [2024-12-09 04:16:36.512700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.122  qpair failed and we were unable to recover it.
00:26:08.122  [2024-12-09 04:16:36.512952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.122  [2024-12-09 04:16:36.513030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.122  qpair failed and we were unable to recover it.
00:26:08.122  [2024-12-09 04:16:36.513349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.122  [2024-12-09 04:16:36.513432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.122  qpair failed and we were unable to recover it.
00:26:08.122  [2024-12-09 04:16:36.513688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.122  [2024-12-09 04:16:36.513762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.122  qpair failed and we were unable to recover it.
00:26:08.122  [2024-12-09 04:16:36.514039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.122  [2024-12-09 04:16:36.514114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.122  qpair failed and we were unable to recover it.
00:26:08.125  [the three-line error sequence above repeated 102 more times between 04:16:36.514 and 04:16:36.547: connect() failed, errno = 111; sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it]
00:26:08.125  [2024-12-09 04:16:36.547485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.125  [2024-12-09 04:16:36.547554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.125  qpair failed and we were unable to recover it.
00:26:08.125  [2024-12-09 04:16:36.547808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.125  [2024-12-09 04:16:36.547873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.125  qpair failed and we were unable to recover it.
00:26:08.125  [2024-12-09 04:16:36.548160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.125  [2024-12-09 04:16:36.548226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.125  qpair failed and we were unable to recover it.
00:26:08.125  [2024-12-09 04:16:36.548509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.125  [2024-12-09 04:16:36.548585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.125  qpair failed and we were unable to recover it.
00:26:08.126  [2024-12-09 04:16:36.548836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.126  [2024-12-09 04:16:36.548904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.126  qpair failed and we were unable to recover it.
00:26:08.126  [2024-12-09 04:16:36.549207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.126  [2024-12-09 04:16:36.549298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.126  qpair failed and we were unable to recover it.
00:26:08.126  [2024-12-09 04:16:36.549560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.126  [2024-12-09 04:16:36.549626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.126  qpair failed and we were unable to recover it.
00:26:08.126  [2024-12-09 04:16:36.549885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.126  [2024-12-09 04:16:36.549951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.126  qpair failed and we were unable to recover it.
00:26:08.126  [2024-12-09 04:16:36.550166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.126  [2024-12-09 04:16:36.550200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.126  qpair failed and we were unable to recover it.
00:26:08.126  [2024-12-09 04:16:36.550419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.126  [2024-12-09 04:16:36.550486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.126  qpair failed and we were unable to recover it.
00:26:08.126  [2024-12-09 04:16:36.550706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.126  [2024-12-09 04:16:36.550775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.126  qpair failed and we were unable to recover it.
00:26:08.126  [2024-12-09 04:16:36.551043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.126  [2024-12-09 04:16:36.551078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.126  qpair failed and we were unable to recover it.
00:26:08.126  [2024-12-09 04:16:36.551185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.126  [2024-12-09 04:16:36.551220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.126  qpair failed and we were unable to recover it.
00:26:08.126  [2024-12-09 04:16:36.551428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.126  [2024-12-09 04:16:36.551497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.126  qpair failed and we were unable to recover it.
00:26:08.126  [2024-12-09 04:16:36.551706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.126  [2024-12-09 04:16:36.551773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.126  qpair failed and we were unable to recover it.
00:26:08.126  [2024-12-09 04:16:36.552031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.126  [2024-12-09 04:16:36.552097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.126  qpair failed and we were unable to recover it.
00:26:08.126  [2024-12-09 04:16:36.552344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.126  [2024-12-09 04:16:36.552412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.126  qpair failed and we were unable to recover it.
00:26:08.126  [2024-12-09 04:16:36.552669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.126  [2024-12-09 04:16:36.552735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.126  qpair failed and we were unable to recover it.
00:26:08.126  [2024-12-09 04:16:36.552986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.126  [2024-12-09 04:16:36.553054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.126  qpair failed and we were unable to recover it.
00:26:08.126  [2024-12-09 04:16:36.553285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.126  [2024-12-09 04:16:36.553352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.126  qpair failed and we were unable to recover it.
00:26:08.126  [2024-12-09 04:16:36.553618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.126  [2024-12-09 04:16:36.553683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.126  qpair failed and we were unable to recover it.
00:26:08.126  [2024-12-09 04:16:36.553907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.126  [2024-12-09 04:16:36.553941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.126  qpair failed and we were unable to recover it.
00:26:08.126  [2024-12-09 04:16:36.554087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.126  [2024-12-09 04:16:36.554121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.126  qpair failed and we were unable to recover it.
00:26:08.126  [2024-12-09 04:16:36.554408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.126  [2024-12-09 04:16:36.554485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.126  qpair failed and we were unable to recover it.
00:26:08.126  [2024-12-09 04:16:36.554735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.126  [2024-12-09 04:16:36.554801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.126  qpair failed and we were unable to recover it.
00:26:08.126  [2024-12-09 04:16:36.555086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.126  [2024-12-09 04:16:36.555150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.126  qpair failed and we were unable to recover it.
00:26:08.126  [2024-12-09 04:16:36.555438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.126  [2024-12-09 04:16:36.555504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.126  qpair failed and we were unable to recover it.
00:26:08.126  [2024-12-09 04:16:36.555794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.126  [2024-12-09 04:16:36.555859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.126  qpair failed and we were unable to recover it.
00:26:08.126  [2024-12-09 04:16:36.556147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.126  [2024-12-09 04:16:36.556212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.126  qpair failed and we were unable to recover it.
00:26:08.126  [2024-12-09 04:16:36.556441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.126  [2024-12-09 04:16:36.556519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.126  qpair failed and we were unable to recover it.
00:26:08.127  [2024-12-09 04:16:36.556782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.127  [2024-12-09 04:16:36.556850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.127  qpair failed and we were unable to recover it.
00:26:08.127  [2024-12-09 04:16:36.557052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.127  [2024-12-09 04:16:36.557120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.127  qpair failed and we were unable to recover it.
00:26:08.127  [2024-12-09 04:16:36.557418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.127  [2024-12-09 04:16:36.557496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.127  qpair failed and we were unable to recover it.
00:26:08.127  [2024-12-09 04:16:36.557791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.127  [2024-12-09 04:16:36.557856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.127  qpair failed and we were unable to recover it.
00:26:08.127  [2024-12-09 04:16:36.558157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.127  [2024-12-09 04:16:36.558231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.127  qpair failed and we were unable to recover it.
00:26:08.127  [2024-12-09 04:16:36.558533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.127  [2024-12-09 04:16:36.558599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.127  qpair failed and we were unable to recover it.
00:26:08.127  [2024-12-09 04:16:36.558896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.127  [2024-12-09 04:16:36.558971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.127  qpair failed and we were unable to recover it.
00:26:08.127  [2024-12-09 04:16:36.559228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.127  [2024-12-09 04:16:36.559312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.127  qpair failed and we were unable to recover it.
00:26:08.127  [2024-12-09 04:16:36.559573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.127  [2024-12-09 04:16:36.559638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.127  qpair failed and we were unable to recover it.
00:26:08.127  [2024-12-09 04:16:36.559886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.127  [2024-12-09 04:16:36.559951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.127  qpair failed and we were unable to recover it.
00:26:08.127  [2024-12-09 04:16:36.560183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.127  [2024-12-09 04:16:36.560247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.127  qpair failed and we were unable to recover it.
00:26:08.127  [2024-12-09 04:16:36.560476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.127  [2024-12-09 04:16:36.560541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.127  qpair failed and we were unable to recover it.
00:26:08.127  [2024-12-09 04:16:36.560793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.127  [2024-12-09 04:16:36.560858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.127  qpair failed and we were unable to recover it.
00:26:08.127  [2024-12-09 04:16:36.561087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.127  [2024-12-09 04:16:36.561161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.127  qpair failed and we were unable to recover it.
00:26:08.127  [2024-12-09 04:16:36.561437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.127  [2024-12-09 04:16:36.561504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.127  qpair failed and we were unable to recover it.
00:26:08.127  [2024-12-09 04:16:36.561747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.127  [2024-12-09 04:16:36.561813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.127  qpair failed and we were unable to recover it.
00:26:08.127  [2024-12-09 04:16:36.562068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.127  [2024-12-09 04:16:36.562132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.127  qpair failed and we were unable to recover it.
00:26:08.127  [2024-12-09 04:16:36.562437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.127  [2024-12-09 04:16:36.562505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.127  qpair failed and we were unable to recover it.
00:26:08.127  [2024-12-09 04:16:36.562768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.127  [2024-12-09 04:16:36.562835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.127  qpair failed and we were unable to recover it.
00:26:08.127  [2024-12-09 04:16:36.563137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.127  [2024-12-09 04:16:36.563213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.127  qpair failed and we were unable to recover it.
00:26:08.127  [2024-12-09 04:16:36.563484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.127  [2024-12-09 04:16:36.563552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.127  qpair failed and we were unable to recover it.
00:26:08.127  [2024-12-09 04:16:36.563849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.127  [2024-12-09 04:16:36.563925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.127  qpair failed and we were unable to recover it.
00:26:08.127  [2024-12-09 04:16:36.564172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.127  [2024-12-09 04:16:36.564238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.127  qpair failed and we were unable to recover it.
00:26:08.127  [2024-12-09 04:16:36.564543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.127  [2024-12-09 04:16:36.564609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.127  qpair failed and we were unable to recover it.
00:26:08.127  [2024-12-09 04:16:36.564841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.127  [2024-12-09 04:16:36.564906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.127  qpair failed and we were unable to recover it.
00:26:08.127  [2024-12-09 04:16:36.565205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.127  [2024-12-09 04:16:36.565293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.127  qpair failed and we were unable to recover it.
00:26:08.127  [2024-12-09 04:16:36.565521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.127  [2024-12-09 04:16:36.565587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.127  qpair failed and we were unable to recover it.
00:26:08.127  [2024-12-09 04:16:36.565847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.127  [2024-12-09 04:16:36.565915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.127  qpair failed and we were unable to recover it.
00:26:08.127  [2024-12-09 04:16:36.566212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.127  [2024-12-09 04:16:36.566310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.127  qpair failed and we were unable to recover it.
00:26:08.127  [2024-12-09 04:16:36.566547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.127  [2024-12-09 04:16:36.566582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.127  qpair failed and we were unable to recover it.
00:26:08.127  [2024-12-09 04:16:36.566722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.127  [2024-12-09 04:16:36.566757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.127  qpair failed and we were unable to recover it.
00:26:08.127  [2024-12-09 04:16:36.566985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.127  [2024-12-09 04:16:36.567019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.127  qpair failed and we were unable to recover it.
00:26:08.128  [2024-12-09 04:16:36.567163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.128  [2024-12-09 04:16:36.567219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.128  qpair failed and we were unable to recover it.
00:26:08.128  [2024-12-09 04:16:36.567478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.128  [2024-12-09 04:16:36.567513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.128  qpair failed and we were unable to recover it.
00:26:08.128  [2024-12-09 04:16:36.567609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.128  [2024-12-09 04:16:36.567644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.128  qpair failed and we were unable to recover it.
00:26:08.128  [2024-12-09 04:16:36.567743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.128  [2024-12-09 04:16:36.567778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.128  qpair failed and we were unable to recover it.
00:26:08.128  [2024-12-09 04:16:36.567949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.128  [2024-12-09 04:16:36.567983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.128  qpair failed and we were unable to recover it.
00:26:08.128  [2024-12-09 04:16:36.568092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.128  [2024-12-09 04:16:36.568128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.128  qpair failed and we were unable to recover it.
00:26:08.128  [2024-12-09 04:16:36.568292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.128  [2024-12-09 04:16:36.568345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.128  qpair failed and we were unable to recover it.
00:26:08.128  [2024-12-09 04:16:36.568515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.128  [2024-12-09 04:16:36.568583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.128  qpair failed and we were unable to recover it.
00:26:08.128  [2024-12-09 04:16:36.568821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.128  [2024-12-09 04:16:36.568890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.128  qpair failed and we were unable to recover it.
00:26:08.128  [2024-12-09 04:16:36.569086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.128  [2024-12-09 04:16:36.569151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.128  qpair failed and we were unable to recover it.
00:26:08.128  [2024-12-09 04:16:36.569376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.128  [2024-12-09 04:16:36.569442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.128  qpair failed and we were unable to recover it.
00:26:08.128  [2024-12-09 04:16:36.569703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.128  [2024-12-09 04:16:36.569768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.128  qpair failed and we were unable to recover it.
00:26:08.128  [2024-12-09 04:16:36.570009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.128  [2024-12-09 04:16:36.570075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.128  qpair failed and we were unable to recover it.
00:26:08.128  [2024-12-09 04:16:36.570364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.128  [2024-12-09 04:16:36.570429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.128  qpair failed and we were unable to recover it.
00:26:08.128  [2024-12-09 04:16:36.570716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.128  [2024-12-09 04:16:36.570781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.128  qpair failed and we were unable to recover it.
00:26:08.128  [2024-12-09 04:16:36.570971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.128  [2024-12-09 04:16:36.571036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.128  qpair failed and we were unable to recover it.
00:26:08.128  [2024-12-09 04:16:36.571257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.128  [2024-12-09 04:16:36.571337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.128  qpair failed and we were unable to recover it.
00:26:08.128  [2024-12-09 04:16:36.571603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.128  [2024-12-09 04:16:36.571638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.128  qpair failed and we were unable to recover it.
00:26:08.128  [2024-12-09 04:16:36.571741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.128  [2024-12-09 04:16:36.571776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.128  qpair failed and we were unable to recover it.
00:26:08.128  [2024-12-09 04:16:36.571880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.128  [2024-12-09 04:16:36.571913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.128  qpair failed and we were unable to recover it.
00:26:08.128  [2024-12-09 04:16:36.572041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.128  [2024-12-09 04:16:36.572074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.128  qpair failed and we were unable to recover it.
00:26:08.128  [2024-12-09 04:16:36.572262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.128  [2024-12-09 04:16:36.572415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.128  qpair failed and we were unable to recover it.
00:26:08.128  [2024-12-09 04:16:36.572731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.128  [2024-12-09 04:16:36.572801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.128  qpair failed and we were unable to recover it.
00:26:08.128  [2024-12-09 04:16:36.573044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.128  [2024-12-09 04:16:36.573110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.128  qpair failed and we were unable to recover it.
00:26:08.128  [2024-12-09 04:16:36.573420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.128  [2024-12-09 04:16:36.573500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.128  qpair failed and we were unable to recover it.
00:26:08.128  [2024-12-09 04:16:36.573766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.128  [2024-12-09 04:16:36.573835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.128  qpair failed and we were unable to recover it.
00:26:08.128  [2024-12-09 04:16:36.574129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.128  [2024-12-09 04:16:36.574195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.128  qpair failed and we were unable to recover it.
00:26:08.128  [2024-12-09 04:16:36.574471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.128  [2024-12-09 04:16:36.574538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.128  qpair failed and we were unable to recover it.
00:26:08.128  [2024-12-09 04:16:36.574787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.128  [2024-12-09 04:16:36.574853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.128  qpair failed and we were unable to recover it.
00:26:08.128  [2024-12-09 04:16:36.575111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.128  [2024-12-09 04:16:36.575176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.128  qpair failed and we were unable to recover it.
00:26:08.128  [2024-12-09 04:16:36.575456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.128  [2024-12-09 04:16:36.575523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.128  qpair failed and we were unable to recover it.
00:26:08.128  [2024-12-09 04:16:36.575743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.128  [2024-12-09 04:16:36.575810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.128  qpair failed and we were unable to recover it.
00:26:08.128  [2024-12-09 04:16:36.576033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.129  [2024-12-09 04:16:36.576099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.129  qpair failed and we were unable to recover it.
00:26:08.129  [2024-12-09 04:16:36.576369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.129  [2024-12-09 04:16:36.576435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.129  qpair failed and we were unable to recover it.
00:26:08.129  [2024-12-09 04:16:36.576693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.129  [2024-12-09 04:16:36.576759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.129  qpair failed and we were unable to recover it.
00:26:08.129  [2024-12-09 04:16:36.577034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.129  [2024-12-09 04:16:36.577103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.129  qpair failed and we were unable to recover it.
00:26:08.129  [2024-12-09 04:16:36.577307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.129  [2024-12-09 04:16:36.577374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.129  qpair failed and we were unable to recover it.
00:26:08.129  [2024-12-09 04:16:36.577578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.129  [2024-12-09 04:16:36.577643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.129  qpair failed and we were unable to recover it.
00:26:08.129  [2024-12-09 04:16:36.577894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.129  [2024-12-09 04:16:36.577963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.129  qpair failed and we were unable to recover it.
00:26:08.129  [2024-12-09 04:16:36.578224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.129  [2024-12-09 04:16:36.578302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.129  qpair failed and we were unable to recover it.
00:26:08.129  [2024-12-09 04:16:36.578539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.129  [2024-12-09 04:16:36.578605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.129  qpair failed and we were unable to recover it.
00:26:08.129  [2024-12-09 04:16:36.578866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.129  [2024-12-09 04:16:36.578934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.129  qpair failed and we were unable to recover it.
00:26:08.129  [2024-12-09 04:16:36.579227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.129  [2024-12-09 04:16:36.579307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.129  qpair failed and we were unable to recover it.
00:26:08.129  [2024-12-09 04:16:36.579557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.129  [2024-12-09 04:16:36.579624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.129  qpair failed and we were unable to recover it.
00:26:08.129  [2024-12-09 04:16:36.579867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.129  [2024-12-09 04:16:36.579901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.129  qpair failed and we were unable to recover it.
00:26:08.129  [2024-12-09 04:16:36.580039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.129  [2024-12-09 04:16:36.580073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.129  qpair failed and we were unable to recover it.
00:26:08.129  [2024-12-09 04:16:36.580249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.129  [2024-12-09 04:16:36.580291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.129  qpair failed and we were unable to recover it.
00:26:08.129  [2024-12-09 04:16:36.580432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.129  [2024-12-09 04:16:36.580486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.129  qpair failed and we were unable to recover it.
00:26:08.129  [2024-12-09 04:16:36.580752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.129  [2024-12-09 04:16:36.580849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.129  qpair failed and we were unable to recover it.
00:26:08.129  [2024-12-09 04:16:36.581106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.129  [2024-12-09 04:16:36.581174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.129  qpair failed and we were unable to recover it.
00:26:08.129  [2024-12-09 04:16:36.581407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.129  [2024-12-09 04:16:36.581478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.129  qpair failed and we were unable to recover it.
00:26:08.129  [2024-12-09 04:16:36.581748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.129  [2024-12-09 04:16:36.581815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.129  qpair failed and we were unable to recover it.
00:26:08.129  [2024-12-09 04:16:36.582033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.129  [2024-12-09 04:16:36.582103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.129  qpair failed and we were unable to recover it.
00:26:08.129  [2024-12-09 04:16:36.582406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.129  [2024-12-09 04:16:36.582474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.129  qpair failed and we were unable to recover it.
00:26:08.129  [2024-12-09 04:16:36.582774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.129  [2024-12-09 04:16:36.582847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.129  qpair failed and we were unable to recover it.
00:26:08.129  [2024-12-09 04:16:36.583089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.129  [2024-12-09 04:16:36.583156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.129  qpair failed and we were unable to recover it.
00:26:08.129  [2024-12-09 04:16:36.583401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.129  [2024-12-09 04:16:36.583467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.129  qpair failed and we were unable to recover it.
00:26:08.129  [2024-12-09 04:16:36.583768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.129  [2024-12-09 04:16:36.583842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.129  qpair failed and we were unable to recover it.
00:26:08.129  [2024-12-09 04:16:36.584104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.129  [2024-12-09 04:16:36.584168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.129  qpair failed and we were unable to recover it.
00:26:08.129  [2024-12-09 04:16:36.584451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.129  [2024-12-09 04:16:36.584516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.129  qpair failed and we were unable to recover it.
00:26:08.129  [2024-12-09 04:16:36.584763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.129  [2024-12-09 04:16:36.584798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.129  qpair failed and we were unable to recover it.
00:26:08.129  [2024-12-09 04:16:36.584966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.129  [2024-12-09 04:16:36.585007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.129  qpair failed and we were unable to recover it.
00:26:08.129  [2024-12-09 04:16:36.585266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.129  [2024-12-09 04:16:36.585312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.129  qpair failed and we were unable to recover it.
00:26:08.129  [2024-12-09 04:16:36.585430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.129  [2024-12-09 04:16:36.585464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.129  qpair failed and we were unable to recover it.
00:26:08.129  [2024-12-09 04:16:36.585646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.129  [2024-12-09 04:16:36.585711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.129  qpair failed and we were unable to recover it.
00:26:08.129  [2024-12-09 04:16:36.585961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.130  [2024-12-09 04:16:36.586027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.130  qpair failed and we were unable to recover it.
00:26:08.130  [2024-12-09 04:16:36.586302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.130  [2024-12-09 04:16:36.586337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.130  qpair failed and we were unable to recover it.
00:26:08.130  [2024-12-09 04:16:36.586475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.130  [2024-12-09 04:16:36.586510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.130  qpair failed and we were unable to recover it.
00:26:08.130  [2024-12-09 04:16:36.586776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.130  [2024-12-09 04:16:36.586841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.130  qpair failed and we were unable to recover it.
00:26:08.130  [2024-12-09 04:16:36.587082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.130  [2024-12-09 04:16:36.587147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.130  qpair failed and we were unable to recover it.
00:26:08.130  [2024-12-09 04:16:36.587370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.130  [2024-12-09 04:16:36.587437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.130  qpair failed and we were unable to recover it.
00:26:08.130  [2024-12-09 04:16:36.587728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.130  [2024-12-09 04:16:36.587794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.130  qpair failed and we were unable to recover it.
00:26:08.130  [2024-12-09 04:16:36.588004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.130  [2024-12-09 04:16:36.588068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.130  qpair failed and we were unable to recover it.
00:26:08.130  [2024-12-09 04:16:36.588366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.130  [2024-12-09 04:16:36.588442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.130  qpair failed and we were unable to recover it.
00:26:08.130  [2024-12-09 04:16:36.588738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.130  [2024-12-09 04:16:36.588803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.130  qpair failed and we were unable to recover it.
00:26:08.130  [2024-12-09 04:16:36.589062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.130  [2024-12-09 04:16:36.589128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.130  qpair failed and we were unable to recover it.
00:26:08.130  [2024-12-09 04:16:36.589352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.130  [2024-12-09 04:16:36.589419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.130  qpair failed and we were unable to recover it.
00:26:08.130  [2024-12-09 04:16:36.589614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.130  [2024-12-09 04:16:36.589678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.130  qpair failed and we were unable to recover it.
00:26:08.130  [2024-12-09 04:16:36.589915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.130  [2024-12-09 04:16:36.589979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.130  qpair failed and we were unable to recover it.
00:26:08.130  [2024-12-09 04:16:36.590235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.130  [2024-12-09 04:16:36.590323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.130  qpair failed and we were unable to recover it.
00:26:08.130  [2024-12-09 04:16:36.590615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.130  [2024-12-09 04:16:36.590679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.130  qpair failed and we were unable to recover it.
00:26:08.130  [2024-12-09 04:16:36.590926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.130  [2024-12-09 04:16:36.590990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.130  qpair failed and we were unable to recover it.
00:26:08.130  [2024-12-09 04:16:36.591218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.130  [2024-12-09 04:16:36.591300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.130  qpair failed and we were unable to recover it.
00:26:08.130  [2024-12-09 04:16:36.591538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.130  [2024-12-09 04:16:36.591602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.130  qpair failed and we were unable to recover it.
00:26:08.130  [2024-12-09 04:16:36.591829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.130  [2024-12-09 04:16:36.591862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.130  qpair failed and we were unable to recover it.
00:26:08.130  [2024-12-09 04:16:36.592016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.130  [2024-12-09 04:16:36.592050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.130  qpair failed and we were unable to recover it.
00:26:08.130  [2024-12-09 04:16:36.592288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.130  [2024-12-09 04:16:36.592354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.130  qpair failed and we were unable to recover it.
00:26:08.130  [2024-12-09 04:16:36.592607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.130  [2024-12-09 04:16:36.592673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.130  qpair failed and we were unable to recover it.
00:26:08.130  [2024-12-09 04:16:36.592946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.130  [2024-12-09 04:16:36.593011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.130  qpair failed and we were unable to recover it.
00:26:08.130  [2024-12-09 04:16:36.593281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.130  [2024-12-09 04:16:36.593316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.130  qpair failed and we were unable to recover it.
00:26:08.130  [2024-12-09 04:16:36.593458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.130  [2024-12-09 04:16:36.593494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.130  qpair failed and we were unable to recover it.
00:26:08.130  [2024-12-09 04:16:36.593661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.130  [2024-12-09 04:16:36.593695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.130  qpair failed and we were unable to recover it.
00:26:08.130  [2024-12-09 04:16:36.593953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.130  [2024-12-09 04:16:36.594018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.130  qpair failed and we were unable to recover it.
00:26:08.130  [2024-12-09 04:16:36.594218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.130  [2024-12-09 04:16:36.594305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.130  qpair failed and we were unable to recover it.
00:26:08.130  [2024-12-09 04:16:36.594611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.130  [2024-12-09 04:16:36.594686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.130  qpair failed and we were unable to recover it.
00:26:08.130  [2024-12-09 04:16:36.594929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.130  [2024-12-09 04:16:36.594993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.130  qpair failed and we were unable to recover it.
00:26:08.130  [2024-12-09 04:16:36.595178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.130  [2024-12-09 04:16:36.595245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.130  qpair failed and we were unable to recover it.
00:26:08.130  [2024-12-09 04:16:36.595474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.130  [2024-12-09 04:16:36.595539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.130  qpair failed and we were unable to recover it.
00:26:08.130  [2024-12-09 04:16:36.595824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.130  [2024-12-09 04:16:36.595890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.130  qpair failed and we were unable to recover it.
00:26:08.130  [2024-12-09 04:16:36.596161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.130  [2024-12-09 04:16:36.596225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.130  qpair failed and we were unable to recover it.
00:26:08.131  [2024-12-09 04:16:36.596501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.131  [2024-12-09 04:16:36.596565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.131  qpair failed and we were unable to recover it.
00:26:08.131  [2024-12-09 04:16:36.596809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.131  [2024-12-09 04:16:36.596884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.131  qpair failed and we were unable to recover it.
00:26:08.131  [2024-12-09 04:16:36.597177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.131  [2024-12-09 04:16:36.597240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.131  qpair failed and we were unable to recover it.
00:26:08.131  [2024-12-09 04:16:36.597514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.131  [2024-12-09 04:16:36.597579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.131  qpair failed and we were unable to recover it.
00:26:08.131  [2024-12-09 04:16:36.597868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.131  [2024-12-09 04:16:36.597932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.131  qpair failed and we were unable to recover it.
00:26:08.131  [2024-12-09 04:16:36.598175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.131  [2024-12-09 04:16:36.598211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.131  qpair failed and we were unable to recover it.
00:26:08.131  [2024-12-09 04:16:36.598360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.131  [2024-12-09 04:16:36.598394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.131  qpair failed and we were unable to recover it.
00:26:08.131  [2024-12-09 04:16:36.598659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.131  [2024-12-09 04:16:36.598693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.131  qpair failed and we were unable to recover it.
00:26:08.131  [2024-12-09 04:16:36.598840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.131  [2024-12-09 04:16:36.598874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.131  qpair failed and we were unable to recover it.
00:26:08.131  [2024-12-09 04:16:36.599012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.131  [2024-12-09 04:16:36.599063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.131  qpair failed and we were unable to recover it.
00:26:08.131  [2024-12-09 04:16:36.599328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.131  [2024-12-09 04:16:36.599395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.131  qpair failed and we were unable to recover it.
00:26:08.131  [2024-12-09 04:16:36.599654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.131  [2024-12-09 04:16:36.599718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.131  qpair failed and we were unable to recover it.
00:26:08.134  [2024-12-09 04:16:36.628636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.134  [2024-12-09 04:16:36.628735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.134  qpair failed and we were unable to recover it.
00:26:08.134  [2024-12-09 04:16:36.629001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.134  [2024-12-09 04:16:36.629069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.134  qpair failed and we were unable to recover it.
00:26:08.134  [2024-12-09 04:16:36.629327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.134  [2024-12-09 04:16:36.629397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.134  qpair failed and we were unable to recover it.
00:26:08.134  [2024-12-09 04:16:36.629696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.134  [2024-12-09 04:16:36.629763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.134  qpair failed and we were unable to recover it.
00:26:08.134  [2024-12-09 04:16:36.630060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.134  [2024-12-09 04:16:36.630125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.134  qpair failed and we were unable to recover it.
00:26:08.134  [2024-12-09 04:16:36.630413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.134  [2024-12-09 04:16:36.630478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.134  qpair failed and we were unable to recover it.
00:26:08.134  [2024-12-09 04:16:36.630776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.134  [2024-12-09 04:16:36.630841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.134  qpair failed and we were unable to recover it.
00:26:08.134  [2024-12-09 04:16:36.631138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.134  [2024-12-09 04:16:36.631213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.134  qpair failed and we were unable to recover it.
00:26:08.134  [2024-12-09 04:16:36.631470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.134  [2024-12-09 04:16:36.631535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.134  qpair failed and we were unable to recover it.
00:26:08.134  [2024-12-09 04:16:36.631823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.134  [2024-12-09 04:16:36.631888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.134  qpair failed and we were unable to recover it.
00:26:08.134  [2024-12-09 04:16:36.632174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.134  [2024-12-09 04:16:36.632238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.134  qpair failed and we were unable to recover it.
00:26:08.134  [2024-12-09 04:16:36.632542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.134  [2024-12-09 04:16:36.632606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.134  qpair failed and we were unable to recover it.
00:26:08.134  [2024-12-09 04:16:36.632822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.134  [2024-12-09 04:16:36.632886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.134  qpair failed and we were unable to recover it.
00:26:08.134  [2024-12-09 04:16:36.633150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.134  [2024-12-09 04:16:36.633214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.134  qpair failed and we were unable to recover it.
00:26:08.134  [2024-12-09 04:16:36.633504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.134  [2024-12-09 04:16:36.633568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.134  qpair failed and we were unable to recover it.
00:26:08.134  [2024-12-09 04:16:36.633858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.134  [2024-12-09 04:16:36.633922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.134  qpair failed and we were unable to recover it.
00:26:08.134  [2024-12-09 04:16:36.634210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.134  [2024-12-09 04:16:36.634295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.134  qpair failed and we were unable to recover it.
00:26:08.134  [2024-12-09 04:16:36.634558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.134  [2024-12-09 04:16:36.634591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.134  qpair failed and we were unable to recover it.
00:26:08.134  [2024-12-09 04:16:36.634735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.134  [2024-12-09 04:16:36.634785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.134  qpair failed and we were unable to recover it.
00:26:08.134  [2024-12-09 04:16:36.635070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.135  [2024-12-09 04:16:36.635134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.135  qpair failed and we were unable to recover it.
00:26:08.135  [2024-12-09 04:16:36.635413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.135  [2024-12-09 04:16:36.635478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.135  qpair failed and we were unable to recover it.
00:26:08.135  [2024-12-09 04:16:36.635777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.135  [2024-12-09 04:16:36.635852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.135  qpair failed and we were unable to recover it.
00:26:08.135  [2024-12-09 04:16:36.636134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.135  [2024-12-09 04:16:36.636199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.135  qpair failed and we were unable to recover it.
00:26:08.135  [2024-12-09 04:16:36.636506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.135  [2024-12-09 04:16:36.636582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.135  qpair failed and we were unable to recover it.
00:26:08.135  [2024-12-09 04:16:36.636805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.135  [2024-12-09 04:16:36.636870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.135  qpair failed and we were unable to recover it.
00:26:08.135  [2024-12-09 04:16:36.637151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.135  [2024-12-09 04:16:36.637214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.135  qpair failed and we were unable to recover it.
00:26:08.135  [2024-12-09 04:16:36.637529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.135  [2024-12-09 04:16:36.637593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.135  qpair failed and we were unable to recover it.
00:26:08.135  [2024-12-09 04:16:36.637842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.135  [2024-12-09 04:16:36.637881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.135  qpair failed and we were unable to recover it.
00:26:08.135  [2024-12-09 04:16:36.638026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.135  [2024-12-09 04:16:36.638060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.135  qpair failed and we were unable to recover it.
00:26:08.135  [2024-12-09 04:16:36.638163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.135  [2024-12-09 04:16:36.638196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.135  qpair failed and we were unable to recover it.
00:26:08.135  [2024-12-09 04:16:36.638408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.135  [2024-12-09 04:16:36.638474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.135  qpair failed and we were unable to recover it.
00:26:08.135  [2024-12-09 04:16:36.638766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.135  [2024-12-09 04:16:36.638829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.135  qpair failed and we were unable to recover it.
00:26:08.135  [2024-12-09 04:16:36.639081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.135  [2024-12-09 04:16:36.639145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.135  qpair failed and we were unable to recover it.
00:26:08.135  [2024-12-09 04:16:36.639393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.135  [2024-12-09 04:16:36.639458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.135  qpair failed and we were unable to recover it.
00:26:08.135  [2024-12-09 04:16:36.639753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.135  [2024-12-09 04:16:36.639817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.135  qpair failed and we were unable to recover it.
00:26:08.135  [2024-12-09 04:16:36.640114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.135  [2024-12-09 04:16:36.640177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.135  qpair failed and we were unable to recover it.
00:26:08.135  [2024-12-09 04:16:36.640376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.135  [2024-12-09 04:16:36.640440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.135  qpair failed and we were unable to recover it.
00:26:08.135  [2024-12-09 04:16:36.640717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.135  [2024-12-09 04:16:36.640750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.135  qpair failed and we were unable to recover it.
00:26:08.135  [2024-12-09 04:16:36.640873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.135  [2024-12-09 04:16:36.640908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.135  qpair failed and we were unable to recover it.
00:26:08.135  [2024-12-09 04:16:36.641213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.135  [2024-12-09 04:16:36.641289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.135  qpair failed and we were unable to recover it.
00:26:08.135  [2024-12-09 04:16:36.641504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.135  [2024-12-09 04:16:36.641569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.135  qpair failed and we were unable to recover it.
00:26:08.135  [2024-12-09 04:16:36.641874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.135  [2024-12-09 04:16:36.641908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.135  qpair failed and we were unable to recover it.
00:26:08.135  [2024-12-09 04:16:36.642016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.135  [2024-12-09 04:16:36.642050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.135  qpair failed and we were unable to recover it.
00:26:08.135  [2024-12-09 04:16:36.642321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.135  [2024-12-09 04:16:36.642386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.135  qpair failed and we were unable to recover it.
00:26:08.135  [2024-12-09 04:16:36.642628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.135  [2024-12-09 04:16:36.642692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.135  qpair failed and we were unable to recover it.
00:26:08.135  [2024-12-09 04:16:36.642955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.135  [2024-12-09 04:16:36.643019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.135  qpair failed and we were unable to recover it.
00:26:08.135  [2024-12-09 04:16:36.643317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.135  [2024-12-09 04:16:36.643392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.135  qpair failed and we were unable to recover it.
00:26:08.135  [2024-12-09 04:16:36.643677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.135  [2024-12-09 04:16:36.643741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.135  qpair failed and we were unable to recover it.
00:26:08.135  [2024-12-09 04:16:36.644041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.135  [2024-12-09 04:16:36.644115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.135  qpair failed and we were unable to recover it.
00:26:08.135  [2024-12-09 04:16:36.644324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.135  [2024-12-09 04:16:36.644389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.135  qpair failed and we were unable to recover it.
00:26:08.135  [2024-12-09 04:16:36.644650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.135  [2024-12-09 04:16:36.644715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.135  qpair failed and we were unable to recover it.
00:26:08.135  [2024-12-09 04:16:36.645009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.135  [2024-12-09 04:16:36.645042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.135  qpair failed and we were unable to recover it.
00:26:08.135  [2024-12-09 04:16:36.645150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.136  [2024-12-09 04:16:36.645186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.136  qpair failed and we were unable to recover it.
00:26:08.136  [2024-12-09 04:16:36.645460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.136  [2024-12-09 04:16:36.645525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.136  qpair failed and we were unable to recover it.
00:26:08.136  [2024-12-09 04:16:36.645780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.136  [2024-12-09 04:16:36.645854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.136  qpair failed and we were unable to recover it.
00:26:08.136  [2024-12-09 04:16:36.646155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.136  [2024-12-09 04:16:36.646231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.136  qpair failed and we were unable to recover it.
00:26:08.136  [2024-12-09 04:16:36.646534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.136  [2024-12-09 04:16:36.646599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.136  qpair failed and we were unable to recover it.
00:26:08.136  [2024-12-09 04:16:36.646889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.136  [2024-12-09 04:16:36.646923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.136  qpair failed and we were unable to recover it.
00:26:08.136  [2024-12-09 04:16:36.647090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.136  [2024-12-09 04:16:36.647124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.136  qpair failed and we were unable to recover it.
00:26:08.136  [2024-12-09 04:16:36.647337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.136  [2024-12-09 04:16:36.647403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.136  qpair failed and we were unable to recover it.
00:26:08.136  [2024-12-09 04:16:36.647662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.136  [2024-12-09 04:16:36.647725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.136  qpair failed and we were unable to recover it.
00:26:08.136  [2024-12-09 04:16:36.648010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.136  [2024-12-09 04:16:36.648074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.136  qpair failed and we were unable to recover it.
00:26:08.136  [2024-12-09 04:16:36.648324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.136  [2024-12-09 04:16:36.648390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.136  qpair failed and we were unable to recover it.
00:26:08.136  [2024-12-09 04:16:36.648567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.136  [2024-12-09 04:16:36.648631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.136  qpair failed and we were unable to recover it.
00:26:08.136  [2024-12-09 04:16:36.648876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.136  [2024-12-09 04:16:36.648940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.136  qpair failed and we were unable to recover it.
00:26:08.136  [2024-12-09 04:16:36.649200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.136  [2024-12-09 04:16:36.649267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.136  qpair failed and we were unable to recover it.
00:26:08.136  [2024-12-09 04:16:36.649551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.136  [2024-12-09 04:16:36.649615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.136  qpair failed and we were unable to recover it.
00:26:08.136  [2024-12-09 04:16:36.649842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.136  [2024-12-09 04:16:36.649906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.136  qpair failed and we were unable to recover it.
00:26:08.136  [2024-12-09 04:16:36.650165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.136  [2024-12-09 04:16:36.650199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.136  qpair failed and we were unable to recover it.
00:26:08.136  [2024-12-09 04:16:36.650307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.136  [2024-12-09 04:16:36.650342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.136  qpair failed and we were unable to recover it.
00:26:08.136  [2024-12-09 04:16:36.650509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.136  [2024-12-09 04:16:36.650542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.136  qpair failed and we were unable to recover it.
00:26:08.136  [2024-12-09 04:16:36.650793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.136  [2024-12-09 04:16:36.650857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.136  qpair failed and we were unable to recover it.
00:26:08.136  [2024-12-09 04:16:36.651123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.136  [2024-12-09 04:16:36.651157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.136  qpair failed and we were unable to recover it.
00:26:08.136  [2024-12-09 04:16:36.651267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.136  [2024-12-09 04:16:36.651311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.136  qpair failed and we were unable to recover it.
00:26:08.136  [2024-12-09 04:16:36.651420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.136  [2024-12-09 04:16:36.651453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.136  qpair failed and we were unable to recover it.
00:26:08.136  [2024-12-09 04:16:36.651572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.136  [2024-12-09 04:16:36.651605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.136  qpair failed and we were unable to recover it.
00:26:08.136  [2024-12-09 04:16:36.651719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.136  [2024-12-09 04:16:36.651753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.136  qpair failed and we were unable to recover it.
00:26:08.136  [2024-12-09 04:16:36.651868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.136  [2024-12-09 04:16:36.651901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.136  qpair failed and we were unable to recover it.
00:26:08.136  [2024-12-09 04:16:36.652045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.136  [2024-12-09 04:16:36.652079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.136  qpair failed and we were unable to recover it.
00:26:08.136  [2024-12-09 04:16:36.652219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.136  [2024-12-09 04:16:36.652304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.136  qpair failed and we were unable to recover it.
00:26:08.136  [2024-12-09 04:16:36.652453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.136  [2024-12-09 04:16:36.652486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.137  qpair failed and we were unable to recover it.
00:26:08.137  [2024-12-09 04:16:36.652680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.137  [2024-12-09 04:16:36.652741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.137  qpair failed and we were unable to recover it.
00:26:08.137  [2024-12-09 04:16:36.653038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.137  [2024-12-09 04:16:36.653101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.137  qpair failed and we were unable to recover it.
00:26:08.137  [2024-12-09 04:16:36.653310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.137  [2024-12-09 04:16:36.653346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.137  qpair failed and we were unable to recover it.
00:26:08.137  [2024-12-09 04:16:36.653465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.137  [2024-12-09 04:16:36.653498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.137  qpair failed and we were unable to recover it.
00:26:08.137  [2024-12-09 04:16:36.653644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.137  [2024-12-09 04:16:36.653677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.137  qpair failed and we were unable to recover it.
00:26:08.137  [2024-12-09 04:16:36.653784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.137  [2024-12-09 04:16:36.653818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.137  qpair failed and we were unable to recover it.
00:26:08.137  [2024-12-09 04:16:36.653991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.137  [2024-12-09 04:16:36.654023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.137  qpair failed and we were unable to recover it.
00:26:08.137  [2024-12-09 04:16:36.654127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.137  [2024-12-09 04:16:36.654160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.137  qpair failed and we were unable to recover it.
00:26:08.420  [posix_sock_create connect() errno = 111 / nvme_tcp_qpair_connect_sock error / "qpair failed" sequence repeated 103 more times for tqpair=0x1818fa0 (addr=10.0.0.2, port=4420) between 04:16:36.654 and 04:16:36.680]
00:26:08.420  [2024-12-09 04:16:36.680916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.420  [2024-12-09 04:16:36.680983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.420  qpair failed and we were unable to recover it.
00:26:08.420  [2024-12-09 04:16:36.681230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.420  [2024-12-09 04:16:36.681312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.420  qpair failed and we were unable to recover it.
00:26:08.420  [2024-12-09 04:16:36.681582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.420  [2024-12-09 04:16:36.681644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.420  qpair failed and we were unable to recover it.
00:26:08.420  [2024-12-09 04:16:36.681895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.420  [2024-12-09 04:16:36.681959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.420  qpair failed and we were unable to recover it.
00:26:08.420  [2024-12-09 04:16:36.682260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.420  [2024-12-09 04:16:36.682342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.420  qpair failed and we were unable to recover it.
00:26:08.420  [2024-12-09 04:16:36.682525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.420  [2024-12-09 04:16:36.682591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.420  qpair failed and we were unable to recover it.
00:26:08.420  [2024-12-09 04:16:36.682876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.420  [2024-12-09 04:16:36.682939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.420  qpair failed and we were unable to recover it.
00:26:08.420  [2024-12-09 04:16:36.683185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.420  [2024-12-09 04:16:36.683249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.420  qpair failed and we were unable to recover it.
00:26:08.420  [2024-12-09 04:16:36.683495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.420  [2024-12-09 04:16:36.683558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.420  qpair failed and we were unable to recover it.
00:26:08.420  [2024-12-09 04:16:36.683831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.420  [2024-12-09 04:16:36.683896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.420  qpair failed and we were unable to recover it.
00:26:08.420  [2024-12-09 04:16:36.684198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.420  [2024-12-09 04:16:36.684262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.420  qpair failed and we were unable to recover it.
00:26:08.420  [2024-12-09 04:16:36.684525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.420  [2024-12-09 04:16:36.684591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.420  qpair failed and we were unable to recover it.
00:26:08.420  [2024-12-09 04:16:36.684892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.420  [2024-12-09 04:16:36.684956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.420  qpair failed and we were unable to recover it.
00:26:08.420  [2024-12-09 04:16:36.685203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.420  [2024-12-09 04:16:36.685268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.420  qpair failed and we were unable to recover it.
00:26:08.420  [2024-12-09 04:16:36.685589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.420  [2024-12-09 04:16:36.685654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.420  qpair failed and we were unable to recover it.
00:26:08.420  [2024-12-09 04:16:36.685946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.420  [2024-12-09 04:16:36.686011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.420  qpair failed and we were unable to recover it.
00:26:08.420  [2024-12-09 04:16:36.686331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.420  [2024-12-09 04:16:36.686397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.420  qpair failed and we were unable to recover it.
00:26:08.420  [2024-12-09 04:16:36.686698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.420  [2024-12-09 04:16:36.686763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.420  qpair failed and we were unable to recover it.
00:26:08.420  [2024-12-09 04:16:36.687053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.420  [2024-12-09 04:16:36.687117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.420  qpair failed and we were unable to recover it.
00:26:08.420  [2024-12-09 04:16:36.687376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.420  [2024-12-09 04:16:36.687441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.420  qpair failed and we were unable to recover it.
00:26:08.420  [2024-12-09 04:16:36.687652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.420  [2024-12-09 04:16:36.687716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.420  qpair failed and we were unable to recover it.
00:26:08.420  [2024-12-09 04:16:36.688000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.420  [2024-12-09 04:16:36.688064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.420  qpair failed and we were unable to recover it.
00:26:08.420  [2024-12-09 04:16:36.688320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.420  [2024-12-09 04:16:36.688385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.420  qpair failed and we were unable to recover it.
00:26:08.420  [2024-12-09 04:16:36.688629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.420  [2024-12-09 04:16:36.688692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.420  qpair failed and we were unable to recover it.
00:26:08.420  [2024-12-09 04:16:36.688911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.420  [2024-12-09 04:16:36.688962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.420  qpair failed and we were unable to recover it.
00:26:08.420  [2024-12-09 04:16:36.689095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.420  [2024-12-09 04:16:36.689129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.420  qpair failed and we were unable to recover it.
00:26:08.420  [2024-12-09 04:16:36.689372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.420  [2024-12-09 04:16:36.689436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.420  qpair failed and we were unable to recover it.
00:26:08.420  [2024-12-09 04:16:36.689710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.420  [2024-12-09 04:16:36.689774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.420  qpair failed and we were unable to recover it.
00:26:08.420  [2024-12-09 04:16:36.690023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.420  [2024-12-09 04:16:36.690090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.420  qpair failed and we were unable to recover it.
00:26:08.420  [2024-12-09 04:16:36.690386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.420  [2024-12-09 04:16:36.690462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.421  qpair failed and we were unable to recover it.
00:26:08.421  [2024-12-09 04:16:36.690766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.421  [2024-12-09 04:16:36.690830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.421  qpair failed and we were unable to recover it.
00:26:08.421  [2024-12-09 04:16:36.691048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.421  [2024-12-09 04:16:36.691114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.421  qpair failed and we were unable to recover it.
00:26:08.421  [2024-12-09 04:16:36.691325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.421  [2024-12-09 04:16:36.691391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.421  qpair failed and we were unable to recover it.
00:26:08.421  [2024-12-09 04:16:36.691575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.421  [2024-12-09 04:16:36.691639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.421  qpair failed and we were unable to recover it.
00:26:08.421  [2024-12-09 04:16:36.691912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.421  [2024-12-09 04:16:36.691975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.421  qpair failed and we were unable to recover it.
00:26:08.421  [2024-12-09 04:16:36.692245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.421  [2024-12-09 04:16:36.692322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.421  qpair failed and we were unable to recover it.
00:26:08.421  [2024-12-09 04:16:36.692547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.421  [2024-12-09 04:16:36.692613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.421  qpair failed and we were unable to recover it.
00:26:08.421  [2024-12-09 04:16:36.692872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.421  [2024-12-09 04:16:36.692906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.421  qpair failed and we were unable to recover it.
00:26:08.421  [2024-12-09 04:16:36.693025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.421  [2024-12-09 04:16:36.693061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.421  qpair failed and we were unable to recover it.
00:26:08.421  [2024-12-09 04:16:36.693233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.421  [2024-12-09 04:16:36.693315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.421  qpair failed and we were unable to recover it.
00:26:08.421  [2024-12-09 04:16:36.693579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.421  [2024-12-09 04:16:36.693649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.421  qpair failed and we were unable to recover it.
00:26:08.421  [2024-12-09 04:16:36.693887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.421  [2024-12-09 04:16:36.693937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.421  qpair failed and we were unable to recover it.
00:26:08.421  [2024-12-09 04:16:36.694140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.421  [2024-12-09 04:16:36.694192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.421  qpair failed and we were unable to recover it.
00:26:08.421  [2024-12-09 04:16:36.694365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.421  [2024-12-09 04:16:36.694417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.421  qpair failed and we were unable to recover it.
00:26:08.421  [2024-12-09 04:16:36.694676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.421  [2024-12-09 04:16:36.694727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.421  qpair failed and we were unable to recover it.
00:26:08.421  [2024-12-09 04:16:36.694971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.421  [2024-12-09 04:16:36.695022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.421  qpair failed and we were unable to recover it.
00:26:08.421  [2024-12-09 04:16:36.695199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.421  [2024-12-09 04:16:36.695250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.421  qpair failed and we were unable to recover it.
00:26:08.421  [2024-12-09 04:16:36.695494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.421  [2024-12-09 04:16:36.695546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.421  qpair failed and we were unable to recover it.
00:26:08.421  [2024-12-09 04:16:36.695752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.421  [2024-12-09 04:16:36.695803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.421  qpair failed and we were unable to recover it.
00:26:08.421  [2024-12-09 04:16:36.696009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.421  [2024-12-09 04:16:36.696060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.421  qpair failed and we were unable to recover it.
00:26:08.421  [2024-12-09 04:16:36.696351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.421  [2024-12-09 04:16:36.696404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.421  qpair failed and we were unable to recover it.
00:26:08.421  [2024-12-09 04:16:36.696658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.421  [2024-12-09 04:16:36.696708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.421  qpair failed and we were unable to recover it.
00:26:08.421  [2024-12-09 04:16:36.696906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.421  [2024-12-09 04:16:36.696967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.421  qpair failed and we were unable to recover it.
00:26:08.421  [2024-12-09 04:16:36.697176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.421  [2024-12-09 04:16:36.697228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.421  qpair failed and we were unable to recover it.
00:26:08.421  [2024-12-09 04:16:36.697459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.421  [2024-12-09 04:16:36.697510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.421  qpair failed and we were unable to recover it.
00:26:08.421  [2024-12-09 04:16:36.697753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.421  [2024-12-09 04:16:36.697804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.421  qpair failed and we were unable to recover it.
00:26:08.421  [2024-12-09 04:16:36.698013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.421  [2024-12-09 04:16:36.698064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.421  qpair failed and we were unable to recover it.
00:26:08.421  [2024-12-09 04:16:36.698331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.421  [2024-12-09 04:16:36.698384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.421  qpair failed and we were unable to recover it.
00:26:08.421  [2024-12-09 04:16:36.698627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.421  [2024-12-09 04:16:36.698677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.421  qpair failed and we were unable to recover it.
00:26:08.421  [2024-12-09 04:16:36.698897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.421  [2024-12-09 04:16:36.698933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.421  qpair failed and we were unable to recover it.
00:26:08.421  [2024-12-09 04:16:36.699072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.421  [2024-12-09 04:16:36.699105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.421  qpair failed and we were unable to recover it.
00:26:08.421  [2024-12-09 04:16:36.699250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.421  [2024-12-09 04:16:36.699300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.421  qpair failed and we were unable to recover it.
00:26:08.421  [2024-12-09 04:16:36.699445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.421  [2024-12-09 04:16:36.699471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.421  qpair failed and we were unable to recover it.
00:26:08.421  [2024-12-09 04:16:36.699589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.421  [2024-12-09 04:16:36.699615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.421  qpair failed and we were unable to recover it.
00:26:08.421  [2024-12-09 04:16:36.699727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.421  [2024-12-09 04:16:36.699752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.421  qpair failed and we were unable to recover it.
00:26:08.421  [2024-12-09 04:16:36.699829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.421  [2024-12-09 04:16:36.699854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.421  qpair failed and we were unable to recover it.
00:26:08.421  [2024-12-09 04:16:36.699966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.421  [2024-12-09 04:16:36.699992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.421  qpair failed and we were unable to recover it.
00:26:08.421  [2024-12-09 04:16:36.700185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.421  [2024-12-09 04:16:36.700242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.421  qpair failed and we were unable to recover it.
00:26:08.422  [2024-12-09 04:16:36.700444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.422  [2024-12-09 04:16:36.700470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.422  qpair failed and we were unable to recover it.
00:26:08.422  [2024-12-09 04:16:36.700557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.422  [2024-12-09 04:16:36.700583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.422  qpair failed and we were unable to recover it.
00:26:08.422  [2024-12-09 04:16:36.700694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.422  [2024-12-09 04:16:36.700720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.422  qpair failed and we were unable to recover it.
00:26:08.422  [2024-12-09 04:16:36.700802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.422  [2024-12-09 04:16:36.700828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.422  qpair failed and we were unable to recover it.
00:26:08.422  [2024-12-09 04:16:36.700943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.422  [2024-12-09 04:16:36.700969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.422  qpair failed and we were unable to recover it.
00:26:08.422  [2024-12-09 04:16:36.701051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.422  [2024-12-09 04:16:36.701087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.422  qpair failed and we were unable to recover it.
00:26:08.422  [2024-12-09 04:16:36.701231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.422  [2024-12-09 04:16:36.701257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.422  qpair failed and we were unable to recover it.
00:26:08.422  [2024-12-09 04:16:36.701377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.422  [2024-12-09 04:16:36.701412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.422  qpair failed and we were unable to recover it.
00:26:08.422  [2024-12-09 04:16:36.701624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.422  [2024-12-09 04:16:36.701682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.422  qpair failed and we were unable to recover it.
00:26:08.422  [2024-12-09 04:16:36.701913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.422  [2024-12-09 04:16:36.701972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.422  qpair failed and we were unable to recover it.
00:26:08.422  [2024-12-09 04:16:36.702158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.422  [2024-12-09 04:16:36.702228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.422  qpair failed and we were unable to recover it.
00:26:08.422  [2024-12-09 04:16:36.702488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.422  [2024-12-09 04:16:36.702526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.422  qpair failed and we were unable to recover it.
00:26:08.422  [2024-12-09 04:16:36.702663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.422  [2024-12-09 04:16:36.702697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.422  qpair failed and we were unable to recover it.
00:26:08.422  [2024-12-09 04:16:36.702832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.422  [2024-12-09 04:16:36.702866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.422  qpair failed and we were unable to recover it.
00:26:08.422  [2024-12-09 04:16:36.703008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.422  [2024-12-09 04:16:36.703042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.422  qpair failed and we were unable to recover it.
00:26:08.422  [2024-12-09 04:16:36.703173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.422  [2024-12-09 04:16:36.703207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.422  qpair failed and we were unable to recover it.
[... preceding connect() failed (errno = 111) / qpair connection error repeated 64 more times for tqpair=0x1818fa0 ...]
00:26:08.424  [2024-12-09 04:16:36.719361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.424  [2024-12-09 04:16:36.719439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.424  qpair failed and we were unable to recover it.
[... preceding connect() failed (errno = 111) / qpair connection error repeated 8 more times for tqpair=0x7fe5b4000b90 ...]
00:26:08.424  [2024-12-09 04:16:36.721525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.424  [2024-12-09 04:16:36.721598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.424  qpair failed and we were unable to recover it.
[... preceding connect() failed (errno = 111) / qpair connection error repeated 2 more times for tqpair=0x1818fa0 ...]
00:26:08.424  [2024-12-09 04:16:36.722318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.424  [2024-12-09 04:16:36.722372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.424  qpair failed and we were unable to recover it.
[... preceding connect() failed (errno = 111) / qpair connection error repeated 8 more times for tqpair=0x7fe5b4000b90 ...]
00:26:08.424  [2024-12-09 04:16:36.724802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.424  [2024-12-09 04:16:36.724876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.424  qpair failed and we were unable to recover it.
[... preceding connect() failed (errno = 111) / qpair connection error repeated 17 more times for tqpair=0x1818fa0 ...]
00:26:08.425  [2024-12-09 04:16:36.729746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.425  [2024-12-09 04:16:36.729794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.425  qpair failed and we were unable to recover it.
00:26:08.425  [2024-12-09 04:16:36.730004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.425  [2024-12-09 04:16:36.730053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.425  qpair failed and we were unable to recover it.
00:26:08.425  [2024-12-09 04:16:36.730286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.425  [2024-12-09 04:16:36.730336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.425  qpair failed and we were unable to recover it.
00:26:08.425  [2024-12-09 04:16:36.730597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.425  [2024-12-09 04:16:36.730644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.425  qpair failed and we were unable to recover it.
00:26:08.425  [2024-12-09 04:16:36.730885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.425  [2024-12-09 04:16:36.730933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.425  qpair failed and we were unable to recover it.
00:26:08.425  [2024-12-09 04:16:36.731163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.425  [2024-12-09 04:16:36.731197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.425  qpair failed and we were unable to recover it.
00:26:08.425  [2024-12-09 04:16:36.731356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.425  [2024-12-09 04:16:36.731393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.425  qpair failed and we were unable to recover it.
00:26:08.425  [2024-12-09 04:16:36.731622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.425  [2024-12-09 04:16:36.731673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.425  qpair failed and we were unable to recover it.
00:26:08.425  [2024-12-09 04:16:36.731869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.425  [2024-12-09 04:16:36.731921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.425  qpair failed and we were unable to recover it.
00:26:08.425  [2024-12-09 04:16:36.732172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.425  [2024-12-09 04:16:36.732222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.425  qpair failed and we were unable to recover it.
00:26:08.425  [2024-12-09 04:16:36.732438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.425  [2024-12-09 04:16:36.732493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.425  qpair failed and we were unable to recover it.
00:26:08.425  [2024-12-09 04:16:36.732725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.425  [2024-12-09 04:16:36.732775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.425  qpair failed and we were unable to recover it.
00:26:08.425  [2024-12-09 04:16:36.732937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.425  [2024-12-09 04:16:36.732986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.425  qpair failed and we were unable to recover it.
00:26:08.425  [2024-12-09 04:16:36.733146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.425  [2024-12-09 04:16:36.733197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.425  qpair failed and we were unable to recover it.
00:26:08.425  [2024-12-09 04:16:36.733415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.425  [2024-12-09 04:16:36.733554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.425  qpair failed and we were unable to recover it.
00:26:08.425  [2024-12-09 04:16:36.733794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.425  [2024-12-09 04:16:36.733846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.425  qpair failed and we were unable to recover it.
00:26:08.425  [2024-12-09 04:16:36.734036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.425  [2024-12-09 04:16:36.734089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.425  qpair failed and we were unable to recover it.
00:26:08.425  [2024-12-09 04:16:36.734354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.425  [2024-12-09 04:16:36.734407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.425  qpair failed and we were unable to recover it.
00:26:08.425  [2024-12-09 04:16:36.734606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.425  [2024-12-09 04:16:36.734654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.425  qpair failed and we were unable to recover it.
00:26:08.425  [2024-12-09 04:16:36.734828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.425  [2024-12-09 04:16:36.734879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.425  qpair failed and we were unable to recover it.
00:26:08.425  [2024-12-09 04:16:36.735059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.425  [2024-12-09 04:16:36.735128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.425  qpair failed and we were unable to recover it.
00:26:08.425  [2024-12-09 04:16:36.735323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.425  [2024-12-09 04:16:36.735374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.425  qpair failed and we were unable to recover it.
00:26:08.425  [2024-12-09 04:16:36.735531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.425  [2024-12-09 04:16:36.735565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.425  qpair failed and we were unable to recover it.
00:26:08.425  [2024-12-09 04:16:36.735709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.425  [2024-12-09 04:16:36.735746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.425  qpair failed and we were unable to recover it.
00:26:08.425  [2024-12-09 04:16:36.735930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.425  [2024-12-09 04:16:36.736003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.425  qpair failed and we were unable to recover it.
00:26:08.425  [2024-12-09 04:16:36.736187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.425  [2024-12-09 04:16:36.736237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.425  qpair failed and we were unable to recover it.
00:26:08.425  [2024-12-09 04:16:36.736477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.426  [2024-12-09 04:16:36.736527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.426  qpair failed and we were unable to recover it.
00:26:08.426  [2024-12-09 04:16:36.736716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.426  [2024-12-09 04:16:36.736790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.426  qpair failed and we were unable to recover it.
00:26:08.426  [2024-12-09 04:16:36.737031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.426  [2024-12-09 04:16:36.737082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.426  qpair failed and we were unable to recover it.
00:26:08.426  [2024-12-09 04:16:36.737324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.426  [2024-12-09 04:16:36.737379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.426  qpair failed and we were unable to recover it.
00:26:08.426  [2024-12-09 04:16:36.737574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.426  [2024-12-09 04:16:36.737622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.426  qpair failed and we were unable to recover it.
00:26:08.426  [2024-12-09 04:16:36.737798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.426  [2024-12-09 04:16:36.737849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.426  qpair failed and we were unable to recover it.
00:26:08.426  [2024-12-09 04:16:36.738034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.426  [2024-12-09 04:16:36.738084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.426  qpair failed and we were unable to recover it.
00:26:08.426  [2024-12-09 04:16:36.738311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.426  [2024-12-09 04:16:36.738361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.426  qpair failed and we were unable to recover it.
00:26:08.426  [2024-12-09 04:16:36.738584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.426  [2024-12-09 04:16:36.738620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.426  qpair failed and we were unable to recover it.
00:26:08.426  [2024-12-09 04:16:36.738795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.426  [2024-12-09 04:16:36.738829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.426  qpair failed and we were unable to recover it.
00:26:08.426  [2024-12-09 04:16:36.739063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.426  [2024-12-09 04:16:36.739112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.426  qpair failed and we were unable to recover it.
00:26:08.426  [2024-12-09 04:16:36.739337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.426  [2024-12-09 04:16:36.739397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.426  qpair failed and we were unable to recover it.
00:26:08.426  [2024-12-09 04:16:36.739542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.426  [2024-12-09 04:16:36.739593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.426  qpair failed and we were unable to recover it.
00:26:08.426  [2024-12-09 04:16:36.739784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.426  [2024-12-09 04:16:36.739841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.426  qpair failed and we were unable to recover it.
00:26:08.426  [2024-12-09 04:16:36.740039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.426  [2024-12-09 04:16:36.740123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.426  qpair failed and we were unable to recover it.
00:26:08.426  [2024-12-09 04:16:36.740375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.426  [2024-12-09 04:16:36.740426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.426  qpair failed and we were unable to recover it.
00:26:08.426  [2024-12-09 04:16:36.740608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.426  [2024-12-09 04:16:36.740643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.426  qpair failed and we were unable to recover it.
00:26:08.426  [2024-12-09 04:16:36.740753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.426  [2024-12-09 04:16:36.740788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.426  qpair failed and we were unable to recover it.
00:26:08.426  [2024-12-09 04:16:36.740893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.426  [2024-12-09 04:16:36.740928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.426  qpair failed and we were unable to recover it.
00:26:08.426  [2024-12-09 04:16:36.741069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.426  [2024-12-09 04:16:36.741103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.426  qpair failed and we were unable to recover it.
00:26:08.426  [2024-12-09 04:16:36.741288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.426  [2024-12-09 04:16:36.741340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.426  qpair failed and we were unable to recover it.
00:26:08.426  [2024-12-09 04:16:36.741588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.426  [2024-12-09 04:16:36.741625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.426  qpair failed and we were unable to recover it.
00:26:08.426  [2024-12-09 04:16:36.741760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.426  [2024-12-09 04:16:36.741793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.426  qpair failed and we were unable to recover it.
00:26:08.426  [2024-12-09 04:16:36.742014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.426  [2024-12-09 04:16:36.742064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.426  qpair failed and we were unable to recover it.
00:26:08.426  [2024-12-09 04:16:36.742254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.426  [2024-12-09 04:16:36.742327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.426  qpair failed and we were unable to recover it.
00:26:08.426  [2024-12-09 04:16:36.742524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.426  [2024-12-09 04:16:36.742582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.426  qpair failed and we were unable to recover it.
00:26:08.426  [2024-12-09 04:16:36.742821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.426  [2024-12-09 04:16:36.742871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.426  qpair failed and we were unable to recover it.
00:26:08.426  [2024-12-09 04:16:36.743151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.426  [2024-12-09 04:16:36.743216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.426  qpair failed and we were unable to recover it.
00:26:08.426  [2024-12-09 04:16:36.743417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.426  [2024-12-09 04:16:36.743466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.426  qpair failed and we were unable to recover it.
00:26:08.426  [2024-12-09 04:16:36.743612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.426  [2024-12-09 04:16:36.743663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.426  qpair failed and we were unable to recover it.
00:26:08.426  [2024-12-09 04:16:36.743890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.426  [2024-12-09 04:16:36.743940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.426  qpair failed and we were unable to recover it.
00:26:08.426  [2024-12-09 04:16:36.744206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.426  [2024-12-09 04:16:36.744325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.426  qpair failed and we were unable to recover it.
00:26:08.426  [2024-12-09 04:16:36.744540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.426  [2024-12-09 04:16:36.744590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.426  qpair failed and we were unable to recover it.
00:26:08.426  [2024-12-09 04:16:36.744793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.427  [2024-12-09 04:16:36.744844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.427  qpair failed and we were unable to recover it.
00:26:08.427  [2024-12-09 04:16:36.745055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.427  [2024-12-09 04:16:36.745124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.427  qpair failed and we were unable to recover it.
00:26:08.427  [2024-12-09 04:16:36.745395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.427  [2024-12-09 04:16:36.745430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.427  qpair failed and we were unable to recover it.
00:26:08.427  [2024-12-09 04:16:36.745541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.427  [2024-12-09 04:16:36.745576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.427  qpair failed and we were unable to recover it.
00:26:08.427  [2024-12-09 04:16:36.745733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.427  [2024-12-09 04:16:36.745814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.427  qpair failed and we were unable to recover it.
00:26:08.427  [2024-12-09 04:16:36.745977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.427  [2024-12-09 04:16:36.746053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.427  qpair failed and we were unable to recover it.
00:26:08.427  [2024-12-09 04:16:36.746228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.427  [2024-12-09 04:16:36.746291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.427  qpair failed and we were unable to recover it.
00:26:08.427  [2024-12-09 04:16:36.746431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.427  [2024-12-09 04:16:36.746465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.427  qpair failed and we were unable to recover it.
00:26:08.427  [2024-12-09 04:16:36.746631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.427  [2024-12-09 04:16:36.746689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.427  qpair failed and we were unable to recover it.
00:26:08.427  [2024-12-09 04:16:36.746841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.427  [2024-12-09 04:16:36.746886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.427  qpair failed and we were unable to recover it.
00:26:08.427  [2024-12-09 04:16:36.747068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.427  [2024-12-09 04:16:36.747113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.427  qpair failed and we were unable to recover it.
00:26:08.427  [2024-12-09 04:16:36.747293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.427  [2024-12-09 04:16:36.747341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.427  qpair failed and we were unable to recover it.
00:26:08.427  [2024-12-09 04:16:36.747555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.427  [2024-12-09 04:16:36.747601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.427  qpair failed and we were unable to recover it.
00:26:08.427  [2024-12-09 04:16:36.747779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.427  [2024-12-09 04:16:36.747824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.427  qpair failed and we were unable to recover it.
00:26:08.427  [2024-12-09 04:16:36.748067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.427  [2024-12-09 04:16:36.748116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.427  qpair failed and we were unable to recover it.
00:26:08.427  [2024-12-09 04:16:36.748343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.427  [2024-12-09 04:16:36.748393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.427  qpair failed and we were unable to recover it.
00:26:08.427  [2024-12-09 04:16:36.748569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.427  [2024-12-09 04:16:36.748617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.427  qpair failed and we were unable to recover it.
00:26:08.427  [2024-12-09 04:16:36.748813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.427  [2024-12-09 04:16:36.748863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.427  qpair failed and we were unable to recover it.
00:26:08.427  [2024-12-09 04:16:36.749060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.427  [2024-12-09 04:16:36.749107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.427  qpair failed and we were unable to recover it.
00:26:08.427  [2024-12-09 04:16:36.749296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.427  [2024-12-09 04:16:36.749342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.427  qpair failed and we were unable to recover it.
00:26:08.427  [2024-12-09 04:16:36.749509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.427  [2024-12-09 04:16:36.749556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.427  qpair failed and we were unable to recover it.
00:26:08.427  [2024-12-09 04:16:36.749695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.427  [2024-12-09 04:16:36.749743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.427  qpair failed and we were unable to recover it.
00:26:08.427  [2024-12-09 04:16:36.749891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.427  [2024-12-09 04:16:36.749936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.427  qpair failed and we were unable to recover it.
00:26:08.430  [... previous three lines repeated for every reconnect attempt, 2024-12-09 04:16:36.750 through 04:16:36.773, all failing with errno = 111 ...]
00:26:08.430  qpair failed and we were unable to recover it.
00:26:08.430  [2024-12-09 04:16:36.773500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.430  [2024-12-09 04:16:36.773547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.430  qpair failed and we were unable to recover it.
00:26:08.430  [2024-12-09 04:16:36.773724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.430  [2024-12-09 04:16:36.773771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.430  qpair failed and we were unable to recover it.
00:26:08.430  [2024-12-09 04:16:36.773912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.430  [2024-12-09 04:16:36.773966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.430  qpair failed and we were unable to recover it.
00:26:08.430  [2024-12-09 04:16:36.774191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.430  [2024-12-09 04:16:36.774225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.430  qpair failed and we were unable to recover it.
00:26:08.430  [2024-12-09 04:16:36.774406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.430  [2024-12-09 04:16:36.774442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.430  qpair failed and we were unable to recover it.
00:26:08.430  [2024-12-09 04:16:36.774642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.430  [2024-12-09 04:16:36.774688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.430  qpair failed and we were unable to recover it.
00:26:08.430  [2024-12-09 04:16:36.774821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.430  [2024-12-09 04:16:36.774866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.430  qpair failed and we were unable to recover it.
00:26:08.430  [2024-12-09 04:16:36.775047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.430  [2024-12-09 04:16:36.775093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.430  qpair failed and we were unable to recover it.
00:26:08.430  [2024-12-09 04:16:36.775315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.430  [2024-12-09 04:16:36.775350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.430  qpair failed and we were unable to recover it.
00:26:08.430  [2024-12-09 04:16:36.775486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.430  [2024-12-09 04:16:36.775520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.430  qpair failed and we were unable to recover it.
00:26:08.430  [2024-12-09 04:16:36.775655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.430  [2024-12-09 04:16:36.775689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.430  qpair failed and we were unable to recover it.
00:26:08.430  [2024-12-09 04:16:36.775858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.430  [2024-12-09 04:16:36.775892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.430  qpair failed and we were unable to recover it.
00:26:08.430  [2024-12-09 04:16:36.776050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.430  [2024-12-09 04:16:36.776098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.430  qpair failed and we were unable to recover it.
00:26:08.430  [2024-12-09 04:16:36.776294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.430  [2024-12-09 04:16:36.776342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.430  qpair failed and we were unable to recover it.
00:26:08.430  [2024-12-09 04:16:36.776532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.430  [2024-12-09 04:16:36.776565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.430  qpair failed and we were unable to recover it.
00:26:08.430  [2024-12-09 04:16:36.776739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.430  [2024-12-09 04:16:36.776792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.430  qpair failed and we were unable to recover it.
00:26:08.430  [2024-12-09 04:16:36.777020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.430  [2024-12-09 04:16:36.777066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.430  qpair failed and we were unable to recover it.
00:26:08.430  [2024-12-09 04:16:36.777252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.430  [2024-12-09 04:16:36.777328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.430  qpair failed and we were unable to recover it.
00:26:08.430  [2024-12-09 04:16:36.777504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.430  [2024-12-09 04:16:36.777550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.430  qpair failed and we were unable to recover it.
00:26:08.430  [2024-12-09 04:16:36.777761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.430  [2024-12-09 04:16:36.777807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.430  qpair failed and we were unable to recover it.
00:26:08.430  [2024-12-09 04:16:36.778023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.430  [2024-12-09 04:16:36.778069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.430  qpair failed and we were unable to recover it.
00:26:08.430  [2024-12-09 04:16:36.778255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.430  [2024-12-09 04:16:36.778315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.430  qpair failed and we were unable to recover it.
00:26:08.430  [2024-12-09 04:16:36.778494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.430  [2024-12-09 04:16:36.778538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.430  qpair failed and we were unable to recover it.
00:26:08.430  [2024-12-09 04:16:36.778721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.430  [2024-12-09 04:16:36.778769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.430  qpair failed and we were unable to recover it.
00:26:08.430  [2024-12-09 04:16:36.778978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.430  [2024-12-09 04:16:36.779023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.431  qpair failed and we were unable to recover it.
00:26:08.431  [2024-12-09 04:16:36.779207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.431  [2024-12-09 04:16:36.779254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.431  qpair failed and we were unable to recover it.
00:26:08.431  [2024-12-09 04:16:36.779476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.431  [2024-12-09 04:16:36.779523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.431  qpair failed and we were unable to recover it.
00:26:08.431  [2024-12-09 04:16:36.779711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.431  [2024-12-09 04:16:36.779759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.431  qpair failed and we were unable to recover it.
00:26:08.431  [2024-12-09 04:16:36.779920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.431  [2024-12-09 04:16:36.779965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.431  qpair failed and we were unable to recover it.
00:26:08.431  [2024-12-09 04:16:36.780158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.431  [2024-12-09 04:16:36.780207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.431  qpair failed and we were unable to recover it.
00:26:08.431  [2024-12-09 04:16:36.780362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.431  [2024-12-09 04:16:36.780411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.431  qpair failed and we were unable to recover it.
00:26:08.431  [2024-12-09 04:16:36.780604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.431  [2024-12-09 04:16:36.780651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.431  qpair failed and we were unable to recover it.
00:26:08.431  [2024-12-09 04:16:36.780871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.431  [2024-12-09 04:16:36.780918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.431  qpair failed and we were unable to recover it.
00:26:08.431  [2024-12-09 04:16:36.781113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.431  [2024-12-09 04:16:36.781159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.431  qpair failed and we were unable to recover it.
00:26:08.431  [2024-12-09 04:16:36.781383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.431  [2024-12-09 04:16:36.781430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.431  qpair failed and we were unable to recover it.
00:26:08.431  [2024-12-09 04:16:36.781654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.431  [2024-12-09 04:16:36.781700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.431  qpair failed and we were unable to recover it.
00:26:08.431  [2024-12-09 04:16:36.781908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.431  [2024-12-09 04:16:36.781954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.431  qpair failed and we were unable to recover it.
00:26:08.431  [2024-12-09 04:16:36.782156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.431  [2024-12-09 04:16:36.782220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.431  qpair failed and we were unable to recover it.
00:26:08.431  [2024-12-09 04:16:36.782495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.431  [2024-12-09 04:16:36.782560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.431  qpair failed and we were unable to recover it.
00:26:08.431  [2024-12-09 04:16:36.782776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.431  [2024-12-09 04:16:36.782812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.431  qpair failed and we were unable to recover it.
00:26:08.431  [2024-12-09 04:16:36.782952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.431  [2024-12-09 04:16:36.782987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.431  qpair failed and we were unable to recover it.
00:26:08.431  [2024-12-09 04:16:36.783192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.431  [2024-12-09 04:16:36.783259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.431  qpair failed and we were unable to recover it.
00:26:08.431  [2024-12-09 04:16:36.783520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.431  [2024-12-09 04:16:36.783585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.431  qpair failed and we were unable to recover it.
00:26:08.431  [2024-12-09 04:16:36.783840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.431  [2024-12-09 04:16:36.783905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.431  qpair failed and we were unable to recover it.
00:26:08.431  [2024-12-09 04:16:36.784113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.431  [2024-12-09 04:16:36.784196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.431  qpair failed and we were unable to recover it.
00:26:08.431  [2024-12-09 04:16:36.784443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.431  [2024-12-09 04:16:36.784508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.431  qpair failed and we were unable to recover it.
00:26:08.431  [2024-12-09 04:16:36.784665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.431  [2024-12-09 04:16:36.784742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.431  qpair failed and we were unable to recover it.
00:26:08.431  [2024-12-09 04:16:36.784984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.431  [2024-12-09 04:16:36.785051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.431  qpair failed and we were unable to recover it.
00:26:08.431  [2024-12-09 04:16:36.785321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.431  [2024-12-09 04:16:36.785368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.431  qpair failed and we were unable to recover it.
00:26:08.431  [2024-12-09 04:16:36.785553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.431  [2024-12-09 04:16:36.785587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.431  qpair failed and we were unable to recover it.
00:26:08.431  [2024-12-09 04:16:36.785727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.431  [2024-12-09 04:16:36.785761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.431  qpair failed and we were unable to recover it.
00:26:08.431  [2024-12-09 04:16:36.785912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.431  [2024-12-09 04:16:36.785959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.431  qpair failed and we were unable to recover it.
00:26:08.431  [2024-12-09 04:16:36.786137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.431  [2024-12-09 04:16:36.786183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.431  qpair failed and we were unable to recover it.
00:26:08.431  [2024-12-09 04:16:36.786417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.431  [2024-12-09 04:16:36.786465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.431  qpair failed and we were unable to recover it.
00:26:08.431  [2024-12-09 04:16:36.786654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.431  [2024-12-09 04:16:36.786699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.431  qpair failed and we were unable to recover it.
00:26:08.431  [2024-12-09 04:16:36.786921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.431  [2024-12-09 04:16:36.786967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.431  qpair failed and we were unable to recover it.
00:26:08.431  [2024-12-09 04:16:36.787193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.431  [2024-12-09 04:16:36.787259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.431  qpair failed and we were unable to recover it.
00:26:08.431  [2024-12-09 04:16:36.787472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.431  [2024-12-09 04:16:36.787517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.432  qpair failed and we were unable to recover it.
00:26:08.432  [2024-12-09 04:16:36.787736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.432  [2024-12-09 04:16:36.787781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.432  qpair failed and we were unable to recover it.
00:26:08.432  [2024-12-09 04:16:36.787954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.432  [2024-12-09 04:16:36.788001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.432  qpair failed and we were unable to recover it.
00:26:08.432  [2024-12-09 04:16:36.788205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.432  [2024-12-09 04:16:36.788268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.432  qpair failed and we were unable to recover it.
00:26:08.432  [2024-12-09 04:16:36.788478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.432  [2024-12-09 04:16:36.788512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.432  qpair failed and we were unable to recover it.
00:26:08.432  [2024-12-09 04:16:36.788740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.432  [2024-12-09 04:16:36.788809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.432  qpair failed and we were unable to recover it.
00:26:08.432  [2024-12-09 04:16:36.789091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.432  [2024-12-09 04:16:36.789155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.432  qpair failed and we were unable to recover it.
00:26:08.432  [2024-12-09 04:16:36.789408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.432  [2024-12-09 04:16:36.789475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.432  qpair failed and we were unable to recover it.
00:26:08.432  [2024-12-09 04:16:36.789717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.432  [2024-12-09 04:16:36.789784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.432  qpair failed and we were unable to recover it.
00:26:08.432  [2024-12-09 04:16:36.790060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.432  [2024-12-09 04:16:36.790124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.432  qpair failed and we were unable to recover it.
00:26:08.432  [2024-12-09 04:16:36.790371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.432  [2024-12-09 04:16:36.790436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.432  qpair failed and we were unable to recover it.
00:26:08.432  [2024-12-09 04:16:36.790661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.432  [2024-12-09 04:16:36.790728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.432  qpair failed and we were unable to recover it.
00:26:08.432  [2024-12-09 04:16:36.790949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.432  [2024-12-09 04:16:36.791033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.432  qpair failed and we were unable to recover it.
00:26:08.432  [2024-12-09 04:16:36.791320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.432  [2024-12-09 04:16:36.791386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.432  qpair failed and we were unable to recover it.
00:26:08.432  [2024-12-09 04:16:36.791590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.432  [2024-12-09 04:16:36.791636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.432  qpair failed and we were unable to recover it.
00:26:08.432  [2024-12-09 04:16:36.791760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.432  [2024-12-09 04:16:36.791806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.432  qpair failed and we were unable to recover it.
00:26:08.432  [2024-12-09 04:16:36.791979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.432  [2024-12-09 04:16:36.792025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.432  qpair failed and we were unable to recover it.
00:26:08.432  [2024-12-09 04:16:36.792290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.432  [2024-12-09 04:16:36.792371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.432  qpair failed and we were unable to recover it.
00:26:08.432  [2024-12-09 04:16:36.792517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.432  [2024-12-09 04:16:36.792565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.432  qpair failed and we were unable to recover it.
00:26:08.432  [2024-12-09 04:16:36.792785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.432  [2024-12-09 04:16:36.792830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.432  qpair failed and we were unable to recover it.
00:26:08.432  [2024-12-09 04:16:36.793004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.432  [2024-12-09 04:16:36.793050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.432  qpair failed and we were unable to recover it.
00:26:08.432  [2024-12-09 04:16:36.793195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.432  [2024-12-09 04:16:36.793244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.432  qpair failed and we were unable to recover it.
00:26:08.432  [2024-12-09 04:16:36.793461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.432  [2024-12-09 04:16:36.793507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.432  qpair failed and we were unable to recover it.
00:26:08.432  [2024-12-09 04:16:36.793689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.432  [2024-12-09 04:16:36.793736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.432  qpair failed and we were unable to recover it.
00:26:08.432  [... the three-line sequence above (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fe5b4000b90 / qpair failed and we were unable to recover it) repeats continuously for addr=10.0.0.2, port=4420 from 04:16:36.793926 through 04:16:36.818544; errno 111 is ECONNREFUSED ...]
00:26:08.435  qpair failed and we were unable to recover it.
00:26:08.435  [2024-12-09 04:16:36.818647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.435  [2024-12-09 04:16:36.818682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.435  qpair failed and we were unable to recover it.
00:26:08.435  [2024-12-09 04:16:36.818916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.435  [2024-12-09 04:16:36.818975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.435  qpair failed and we were unable to recover it.
00:26:08.435  [2024-12-09 04:16:36.819247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.435  [2024-12-09 04:16:36.819337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.435  qpair failed and we were unable to recover it.
00:26:08.435  [2024-12-09 04:16:36.819444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.435  [2024-12-09 04:16:36.819478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.435  qpair failed and we were unable to recover it.
00:26:08.435  [2024-12-09 04:16:36.819644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.435  [2024-12-09 04:16:36.819692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.435  qpair failed and we were unable to recover it.
00:26:08.435  [2024-12-09 04:16:36.819873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.435  [2024-12-09 04:16:36.819941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.435  qpair failed and we were unable to recover it.
00:26:08.435  [2024-12-09 04:16:36.820118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.435  [2024-12-09 04:16:36.820164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.435  qpair failed and we were unable to recover it.
00:26:08.435  [2024-12-09 04:16:36.820391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.435  [2024-12-09 04:16:36.820426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.435  qpair failed and we were unable to recover it.
00:26:08.435  [2024-12-09 04:16:36.820548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.435  [2024-12-09 04:16:36.820582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.435  qpair failed and we were unable to recover it.
00:26:08.435  [2024-12-09 04:16:36.820721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.435  [2024-12-09 04:16:36.820755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.435  qpair failed and we were unable to recover it.
00:26:08.435  [2024-12-09 04:16:36.820951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.435  [2024-12-09 04:16:36.821009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.435  qpair failed and we were unable to recover it.
00:26:08.435  [2024-12-09 04:16:36.821159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.435  [2024-12-09 04:16:36.821207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.435  qpair failed and we were unable to recover it.
00:26:08.435  [2024-12-09 04:16:36.821362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.435  [2024-12-09 04:16:36.821397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.435  qpair failed and we were unable to recover it.
00:26:08.435  [2024-12-09 04:16:36.821534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.435  [2024-12-09 04:16:36.821568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.435  qpair failed and we were unable to recover it.
00:26:08.435  [2024-12-09 04:16:36.821758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.435  [2024-12-09 04:16:36.821806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.435  qpair failed and we were unable to recover it.
00:26:08.435  [2024-12-09 04:16:36.821991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.435  [2024-12-09 04:16:36.822046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.435  qpair failed and we were unable to recover it.
00:26:08.435  [2024-12-09 04:16:36.822298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.435  [2024-12-09 04:16:36.822351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.435  qpair failed and we were unable to recover it.
00:26:08.435  [2024-12-09 04:16:36.822468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.435  [2024-12-09 04:16:36.822502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.435  qpair failed and we were unable to recover it.
00:26:08.435  [2024-12-09 04:16:36.822644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.435  [2024-12-09 04:16:36.822678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.435  qpair failed and we were unable to recover it.
00:26:08.435  [2024-12-09 04:16:36.822842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.435  [2024-12-09 04:16:36.822888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.435  qpair failed and we were unable to recover it.
00:26:08.435  [2024-12-09 04:16:36.823116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.435  [2024-12-09 04:16:36.823162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.435  qpair failed and we were unable to recover it.
00:26:08.435  [2024-12-09 04:16:36.823397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.435  [2024-12-09 04:16:36.823431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.435  qpair failed and we were unable to recover it.
00:26:08.435  [2024-12-09 04:16:36.823594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.435  [2024-12-09 04:16:36.823642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.435  qpair failed and we were unable to recover it.
00:26:08.435  [2024-12-09 04:16:36.823822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.436  [2024-12-09 04:16:36.823870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.436  qpair failed and we were unable to recover it.
00:26:08.436  [2024-12-09 04:16:36.824090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.436  [2024-12-09 04:16:36.824136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.436  qpair failed and we were unable to recover it.
00:26:08.436  [2024-12-09 04:16:36.824264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.436  [2024-12-09 04:16:36.824332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.436  qpair failed and we were unable to recover it.
00:26:08.436  [2024-12-09 04:16:36.824474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.436  [2024-12-09 04:16:36.824508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.436  qpair failed and we were unable to recover it.
00:26:08.436  [2024-12-09 04:16:36.824618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.436  [2024-12-09 04:16:36.824652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.436  qpair failed and we were unable to recover it.
00:26:08.436  [2024-12-09 04:16:36.824825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.436  [2024-12-09 04:16:36.824860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.436  qpair failed and we were unable to recover it.
00:26:08.436  [2024-12-09 04:16:36.825046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.436  [2024-12-09 04:16:36.825093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.436  qpair failed and we were unable to recover it.
00:26:08.436  [2024-12-09 04:16:36.825269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.436  [2024-12-09 04:16:36.825363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.436  qpair failed and we were unable to recover it.
00:26:08.436  [2024-12-09 04:16:36.825508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.436  [2024-12-09 04:16:36.825542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.436  qpair failed and we were unable to recover it.
00:26:08.436  [2024-12-09 04:16:36.825686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.436  [2024-12-09 04:16:36.825720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.436  qpair failed and we were unable to recover it.
00:26:08.436  [2024-12-09 04:16:36.825822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.436  [2024-12-09 04:16:36.825856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.436  qpair failed and we were unable to recover it.
00:26:08.436  [2024-12-09 04:16:36.825974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.436  [2024-12-09 04:16:36.826019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.436  qpair failed and we were unable to recover it.
00:26:08.436  [2024-12-09 04:16:36.826207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.436  [2024-12-09 04:16:36.826253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.436  qpair failed and we were unable to recover it.
00:26:08.436  [2024-12-09 04:16:36.826433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.436  [2024-12-09 04:16:36.826467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.436  qpair failed and we were unable to recover it.
00:26:08.436  [2024-12-09 04:16:36.826615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.436  [2024-12-09 04:16:36.826648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.436  qpair failed and we were unable to recover it.
00:26:08.436  [2024-12-09 04:16:36.826829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.436  [2024-12-09 04:16:36.826876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.436  qpair failed and we were unable to recover it.
00:26:08.436  [2024-12-09 04:16:36.827058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.436  [2024-12-09 04:16:36.827103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.436  qpair failed and we were unable to recover it.
00:26:08.436  [2024-12-09 04:16:36.827245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.436  [2024-12-09 04:16:36.827305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.436  qpair failed and we were unable to recover it.
00:26:08.436  [2024-12-09 04:16:36.827448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.436  [2024-12-09 04:16:36.827482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.436  qpair failed and we were unable to recover it.
00:26:08.436  [2024-12-09 04:16:36.827631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.436  [2024-12-09 04:16:36.827665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.436  qpair failed and we were unable to recover it.
00:26:08.436  [2024-12-09 04:16:36.827853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.436  [2024-12-09 04:16:36.827898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.436  qpair failed and we were unable to recover it.
00:26:08.436  [2024-12-09 04:16:36.828082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.436  [2024-12-09 04:16:36.828129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.436  qpair failed and we were unable to recover it.
00:26:08.436  [2024-12-09 04:16:36.828297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.436  [2024-12-09 04:16:36.828355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.436  qpair failed and we were unable to recover it.
00:26:08.436  [2024-12-09 04:16:36.828491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.436  [2024-12-09 04:16:36.828524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.436  qpair failed and we were unable to recover it.
00:26:08.436  [2024-12-09 04:16:36.828746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.436  [2024-12-09 04:16:36.828791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.436  qpair failed and we were unable to recover it.
00:26:08.436  [2024-12-09 04:16:36.828942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.436  [2024-12-09 04:16:36.828988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.436  qpair failed and we were unable to recover it.
00:26:08.436  [2024-12-09 04:16:36.829131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.436  [2024-12-09 04:16:36.829186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.436  qpair failed and we were unable to recover it.
00:26:08.436  [2024-12-09 04:16:36.829385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.436  [2024-12-09 04:16:36.829420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.436  qpair failed and we were unable to recover it.
00:26:08.436  [2024-12-09 04:16:36.829520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.436  [2024-12-09 04:16:36.829556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.436  qpair failed and we were unable to recover it.
00:26:08.436  [2024-12-09 04:16:36.829728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.436  [2024-12-09 04:16:36.829762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.436  qpair failed and we were unable to recover it.
00:26:08.436  [2024-12-09 04:16:36.829985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.436  [2024-12-09 04:16:36.830019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.436  qpair failed and we were unable to recover it.
00:26:08.436  [2024-12-09 04:16:36.830139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.436  [2024-12-09 04:16:36.830173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.436  qpair failed and we were unable to recover it.
00:26:08.436  [2024-12-09 04:16:36.830283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.436  [2024-12-09 04:16:36.830318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.436  qpair failed and we were unable to recover it.
00:26:08.436  [2024-12-09 04:16:36.830460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.437  [2024-12-09 04:16:36.830495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.437  qpair failed and we were unable to recover it.
00:26:08.437  [2024-12-09 04:16:36.830723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.437  [2024-12-09 04:16:36.830769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.437  qpair failed and we were unable to recover it.
00:26:08.437  [2024-12-09 04:16:36.830953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.437  [2024-12-09 04:16:36.830987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.437  qpair failed and we were unable to recover it.
00:26:08.437  [2024-12-09 04:16:36.831168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.437  [2024-12-09 04:16:36.831220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.437  qpair failed and we were unable to recover it.
00:26:08.437  [2024-12-09 04:16:36.831400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.437  [2024-12-09 04:16:36.831447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.437  qpair failed and we were unable to recover it.
00:26:08.437  [2024-12-09 04:16:36.831595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.437  [2024-12-09 04:16:36.831660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.437  qpair failed and we were unable to recover it.
00:26:08.437  [2024-12-09 04:16:36.831812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.437  [2024-12-09 04:16:36.831846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.437  qpair failed and we were unable to recover it.
00:26:08.437  [2024-12-09 04:16:36.831997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.437  [2024-12-09 04:16:36.832042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.437  qpair failed and we were unable to recover it.
00:26:08.437  [2024-12-09 04:16:36.832235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.437  [2024-12-09 04:16:36.832293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.437  qpair failed and we were unable to recover it.
00:26:08.437  [2024-12-09 04:16:36.832511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.437  [2024-12-09 04:16:36.832558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.437  qpair failed and we were unable to recover it.
00:26:08.437  [2024-12-09 04:16:36.832734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.437  [2024-12-09 04:16:36.832779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.437  qpair failed and we were unable to recover it.
00:26:08.437  [2024-12-09 04:16:36.832957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.437  [2024-12-09 04:16:36.833010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.437  qpair failed and we were unable to recover it.
00:26:08.437  [2024-12-09 04:16:36.833126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.437  [2024-12-09 04:16:36.833161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.437  qpair failed and we were unable to recover it.
00:26:08.437  [2024-12-09 04:16:36.833335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.437  [2024-12-09 04:16:36.833383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.437  qpair failed and we were unable to recover it.
00:26:08.437  [2024-12-09 04:16:36.833526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.437  [2024-12-09 04:16:36.833581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.437  qpair failed and we were unable to recover it.
00:26:08.437  [2024-12-09 04:16:36.833698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.437  [2024-12-09 04:16:36.833733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.437  qpair failed and we were unable to recover it.
00:26:08.437  [2024-12-09 04:16:36.833891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.437  [2024-12-09 04:16:36.833939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.437  qpair failed and we were unable to recover it.
00:26:08.437  [2024-12-09 04:16:36.834127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.437  [2024-12-09 04:16:36.834174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.437  qpair failed and we were unable to recover it.
00:26:08.437  [2024-12-09 04:16:36.834354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.437  [2024-12-09 04:16:36.834418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.437  qpair failed and we were unable to recover it.
00:26:08.437  [2024-12-09 04:16:36.834554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.437  [2024-12-09 04:16:36.834588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.437  qpair failed and we were unable to recover it.
00:26:08.437  [2024-12-09 04:16:36.834786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.437  [2024-12-09 04:16:36.834833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.437  qpair failed and we were unable to recover it.
00:26:08.437  [2024-12-09 04:16:36.834988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.437  [2024-12-09 04:16:36.835046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.437  qpair failed and we were unable to recover it.
00:26:08.437  [2024-12-09 04:16:36.835213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.437  [2024-12-09 04:16:36.835247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.437  qpair failed and we were unable to recover it.
00:26:08.437  [2024-12-09 04:16:36.835399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.437  [2024-12-09 04:16:36.835454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.437  qpair failed and we were unable to recover it.
00:26:08.437  [2024-12-09 04:16:36.835684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.437  [2024-12-09 04:16:36.835740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.437  qpair failed and we were unable to recover it.
00:26:08.440  [... the preceding three-line error sequence (posix.c:1054 posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED) / nvme_tcp.c:2288 nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats 102 more times, timestamps 04:16:36.835990 through 04:16:36.860152, followed by one final truncated occurrence ...]
00:26:08.440  qpair failed and we were unable to recover it.
00:26:08.440  [2024-12-09 04:16:36.860356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.440  [2024-12-09 04:16:36.860404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.440  qpair failed and we were unable to recover it.
00:26:08.440  [2024-12-09 04:16:36.860554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.440  [2024-12-09 04:16:36.860601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.440  qpair failed and we were unable to recover it.
00:26:08.440  [2024-12-09 04:16:36.860819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.440  [2024-12-09 04:16:36.860864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.440  qpair failed and we were unable to recover it.
00:26:08.440  [2024-12-09 04:16:36.861058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.440  [2024-12-09 04:16:36.861110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.440  qpair failed and we were unable to recover it.
00:26:08.440  [2024-12-09 04:16:36.861319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.440  [2024-12-09 04:16:36.861369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.440  qpair failed and we were unable to recover it.
00:26:08.440  [2024-12-09 04:16:36.861517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.440  [2024-12-09 04:16:36.861563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.440  qpair failed and we were unable to recover it.
00:26:08.440  [2024-12-09 04:16:36.861761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.440  [2024-12-09 04:16:36.861815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.440  qpair failed and we were unable to recover it.
00:26:08.440  [2024-12-09 04:16:36.862073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.440  [2024-12-09 04:16:36.862126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.440  qpair failed and we were unable to recover it.
00:26:08.440  [2024-12-09 04:16:36.862354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.440  [2024-12-09 04:16:36.862410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.440  qpair failed and we were unable to recover it.
00:26:08.440  [2024-12-09 04:16:36.862570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.440  [2024-12-09 04:16:36.862635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.440  qpair failed and we were unable to recover it.
00:26:08.440  [2024-12-09 04:16:36.862855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.440  [2024-12-09 04:16:36.862907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.440  qpair failed and we were unable to recover it.
00:26:08.440  [2024-12-09 04:16:36.863166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.440  [2024-12-09 04:16:36.863217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.440  qpair failed and we were unable to recover it.
00:26:08.440  [2024-12-09 04:16:36.863503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.440  [2024-12-09 04:16:36.863575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.440  qpair failed and we were unable to recover it.
00:26:08.440  [2024-12-09 04:16:36.863812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.440  [2024-12-09 04:16:36.863886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.440  qpair failed and we were unable to recover it.
00:26:08.440  [2024-12-09 04:16:36.864140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.440  [2024-12-09 04:16:36.864207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.440  qpair failed and we were unable to recover it.
00:26:08.440  [2024-12-09 04:16:36.864473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.440  [2024-12-09 04:16:36.864545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.440  qpair failed and we were unable to recover it.
00:26:08.440  [2024-12-09 04:16:36.864826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.440  [2024-12-09 04:16:36.864898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.440  qpair failed and we were unable to recover it.
00:26:08.440  [2024-12-09 04:16:36.865097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.440  [2024-12-09 04:16:36.865166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.440  qpair failed and we were unable to recover it.
00:26:08.440  [2024-12-09 04:16:36.865390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.441  [2024-12-09 04:16:36.865468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.441  qpair failed and we were unable to recover it.
00:26:08.441  [2024-12-09 04:16:36.865738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.441  [2024-12-09 04:16:36.865808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.441  qpair failed and we were unable to recover it.
00:26:08.441  [2024-12-09 04:16:36.866069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.441  [2024-12-09 04:16:36.866123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.441  qpair failed and we were unable to recover it.
00:26:08.441  [2024-12-09 04:16:36.866312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.441  [2024-12-09 04:16:36.866366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.441  qpair failed and we were unable to recover it.
00:26:08.441  [2024-12-09 04:16:36.866546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.441  [2024-12-09 04:16:36.866593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.441  qpair failed and we were unable to recover it.
00:26:08.441  [2024-12-09 04:16:36.866769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.441  [2024-12-09 04:16:36.866816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.441  qpair failed and we were unable to recover it.
00:26:08.441  [2024-12-09 04:16:36.867034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.441  [2024-12-09 04:16:36.867082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.441  qpair failed and we were unable to recover it.
00:26:08.441  [2024-12-09 04:16:36.867259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.441  [2024-12-09 04:16:36.867315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.441  qpair failed and we were unable to recover it.
00:26:08.441  [2024-12-09 04:16:36.867496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.441  [2024-12-09 04:16:36.867544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.441  qpair failed and we were unable to recover it.
00:26:08.441  [2024-12-09 04:16:36.867729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.441  [2024-12-09 04:16:36.867775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.441  qpair failed and we were unable to recover it.
00:26:08.441  [2024-12-09 04:16:36.867957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.441  [2024-12-09 04:16:36.868003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.441  qpair failed and we were unable to recover it.
00:26:08.441  [2024-12-09 04:16:36.868130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.441  [2024-12-09 04:16:36.868175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.441  qpair failed and we were unable to recover it.
00:26:08.441  [2024-12-09 04:16:36.868310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.441  [2024-12-09 04:16:36.868358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.441  qpair failed and we were unable to recover it.
00:26:08.441  [2024-12-09 04:16:36.868575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.441  [2024-12-09 04:16:36.868621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.441  qpair failed and we were unable to recover it.
00:26:08.441  [2024-12-09 04:16:36.868844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.441  [2024-12-09 04:16:36.868890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.441  qpair failed and we were unable to recover it.
00:26:08.441  [2024-12-09 04:16:36.869107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.441  [2024-12-09 04:16:36.869153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.441  qpair failed and we were unable to recover it.
00:26:08.441  [2024-12-09 04:16:36.869351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.441  [2024-12-09 04:16:36.869398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.441  qpair failed and we were unable to recover it.
00:26:08.441  [2024-12-09 04:16:36.869594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.441  [2024-12-09 04:16:36.869640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.441  qpair failed and we were unable to recover it.
00:26:08.441  [2024-12-09 04:16:36.869852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.441  [2024-12-09 04:16:36.869898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.441  qpair failed and we were unable to recover it.
00:26:08.441  [2024-12-09 04:16:36.870098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.441  [2024-12-09 04:16:36.870162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.441  qpair failed and we were unable to recover it.
00:26:08.441  [2024-12-09 04:16:36.870348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.441  [2024-12-09 04:16:36.870396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.441  qpair failed and we were unable to recover it.
00:26:08.441  [2024-12-09 04:16:36.870573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.441  [2024-12-09 04:16:36.870620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.441  qpair failed and we were unable to recover it.
00:26:08.441  [2024-12-09 04:16:36.870795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.441  [2024-12-09 04:16:36.870842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.441  qpair failed and we were unable to recover it.
00:26:08.441  [2024-12-09 04:16:36.871064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.441  [2024-12-09 04:16:36.871110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.441  qpair failed and we were unable to recover it.
00:26:08.441  [2024-12-09 04:16:36.871291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.441  [2024-12-09 04:16:36.871337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.441  qpair failed and we were unable to recover it.
00:26:08.441  [2024-12-09 04:16:36.871458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.441  [2024-12-09 04:16:36.871504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.441  qpair failed and we were unable to recover it.
00:26:08.441  [2024-12-09 04:16:36.871690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.441  [2024-12-09 04:16:36.871736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.441  qpair failed and we were unable to recover it.
00:26:08.441  [2024-12-09 04:16:36.871915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.441  [2024-12-09 04:16:36.871962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.441  qpair failed and we were unable to recover it.
00:26:08.441  [2024-12-09 04:16:36.872195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.441  [2024-12-09 04:16:36.872241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.441  qpair failed and we were unable to recover it.
00:26:08.441  [2024-12-09 04:16:36.872443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.441  [2024-12-09 04:16:36.872490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.441  qpair failed and we were unable to recover it.
00:26:08.441  [2024-12-09 04:16:36.872666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.441  [2024-12-09 04:16:36.872713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.441  qpair failed and we were unable to recover it.
00:26:08.441  [2024-12-09 04:16:36.872939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.441  [2024-12-09 04:16:36.872985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.441  qpair failed and we were unable to recover it.
00:26:08.441  [2024-12-09 04:16:36.873152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.441  [2024-12-09 04:16:36.873199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.441  qpair failed and we were unable to recover it.
00:26:08.441  [2024-12-09 04:16:36.873381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.441  [2024-12-09 04:16:36.873428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.441  qpair failed and we were unable to recover it.
00:26:08.441  [2024-12-09 04:16:36.873574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.441  [2024-12-09 04:16:36.873620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.441  qpair failed and we were unable to recover it.
00:26:08.441  [2024-12-09 04:16:36.873801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.441  [2024-12-09 04:16:36.873849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.441  qpair failed and we were unable to recover it.
00:26:08.441  [2024-12-09 04:16:36.874034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.441  [2024-12-09 04:16:36.874079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.441  qpair failed and we were unable to recover it.
00:26:08.441  [2024-12-09 04:16:36.874316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.441  [2024-12-09 04:16:36.874364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.441  qpair failed and we were unable to recover it.
00:26:08.441  [2024-12-09 04:16:36.874508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.442  [2024-12-09 04:16:36.874556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.442  qpair failed and we were unable to recover it.
00:26:08.442  [2024-12-09 04:16:36.874740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.442  [2024-12-09 04:16:36.874786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.442  qpair failed and we were unable to recover it.
00:26:08.442  [2024-12-09 04:16:36.874998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.442  [2024-12-09 04:16:36.875044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.442  qpair failed and we were unable to recover it.
00:26:08.442  [2024-12-09 04:16:36.875197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.442  [2024-12-09 04:16:36.875242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.442  qpair failed and we were unable to recover it.
00:26:08.442  [2024-12-09 04:16:36.875443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.442  [2024-12-09 04:16:36.875490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.442  qpair failed and we were unable to recover it.
00:26:08.442  [2024-12-09 04:16:36.875673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.442  [2024-12-09 04:16:36.875727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.442  qpair failed and we were unable to recover it.
00:26:08.442  [2024-12-09 04:16:36.875946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.442  [2024-12-09 04:16:36.875991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.442  qpair failed and we were unable to recover it.
00:26:08.442  [2024-12-09 04:16:36.876181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.442  [2024-12-09 04:16:36.876226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.442  qpair failed and we were unable to recover it.
00:26:08.442  [2024-12-09 04:16:36.876410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.442  [2024-12-09 04:16:36.876458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.442  qpair failed and we were unable to recover it.
00:26:08.442  [2024-12-09 04:16:36.876611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.442  [2024-12-09 04:16:36.876659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.442  qpair failed and we were unable to recover it.
00:26:08.442  [2024-12-09 04:16:36.876839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.442  [2024-12-09 04:16:36.876885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.442  qpair failed and we were unable to recover it.
00:26:08.442  [2024-12-09 04:16:36.877025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.442  [2024-12-09 04:16:36.877071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.442  qpair failed and we were unable to recover it.
00:26:08.442  [2024-12-09 04:16:36.877262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.442  [2024-12-09 04:16:36.877320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.442  qpair failed and we were unable to recover it.
00:26:08.442  [2024-12-09 04:16:36.877501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.442  [2024-12-09 04:16:36.877546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.442  qpair failed and we were unable to recover it.
00:26:08.442  [2024-12-09 04:16:36.877759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.442  [2024-12-09 04:16:36.877804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.442  qpair failed and we were unable to recover it.
00:26:08.442  [2024-12-09 04:16:36.877925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.442  [2024-12-09 04:16:36.877969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.442  qpair failed and we were unable to recover it.
00:26:08.442  [2024-12-09 04:16:36.878140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.442  [2024-12-09 04:16:36.878186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.442  qpair failed and we were unable to recover it.
00:26:08.442  [2024-12-09 04:16:36.878329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.442  [2024-12-09 04:16:36.878377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.442  qpair failed and we were unable to recover it.
00:26:08.442  [2024-12-09 04:16:36.878564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.442  [2024-12-09 04:16:36.878613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.442  qpair failed and we were unable to recover it.
00:26:08.442  [2024-12-09 04:16:36.878808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.442  [2024-12-09 04:16:36.878853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.442  qpair failed and we were unable to recover it.
00:26:08.442  [2024-12-09 04:16:36.879037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.442  [2024-12-09 04:16:36.879083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.442  qpair failed and we were unable to recover it.
00:26:08.442  [2024-12-09 04:16:36.879262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.442  [2024-12-09 04:16:36.879320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.442  qpair failed and we were unable to recover it.
00:26:08.442  [2024-12-09 04:16:36.879533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.442  [2024-12-09 04:16:36.879578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.442  qpair failed and we were unable to recover it.
00:26:08.442  [2024-12-09 04:16:36.879722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.442  [2024-12-09 04:16:36.879770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.442  qpair failed and we were unable to recover it.
00:26:08.442  [2024-12-09 04:16:36.879993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.442  [2024-12-09 04:16:36.880040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.442  qpair failed and we were unable to recover it.
00:26:08.442  [2024-12-09 04:16:36.880216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.442  [2024-12-09 04:16:36.880263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.442  qpair failed and we were unable to recover it.
00:26:08.442  [2024-12-09 04:16:36.880457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.442  [2024-12-09 04:16:36.880503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.442  qpair failed and we were unable to recover it.
00:26:08.442  [2024-12-09 04:16:36.880690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.442  [2024-12-09 04:16:36.880736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.442  qpair failed and we were unable to recover it.
00:26:08.442  [2024-12-09 04:16:36.880956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.442  [2024-12-09 04:16:36.881001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.442  qpair failed and we were unable to recover it.
00:26:08.442  [2024-12-09 04:16:36.881140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.442  [2024-12-09 04:16:36.881186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.442  qpair failed and we were unable to recover it.
00:26:08.442  [2024-12-09 04:16:36.881391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.442  [2024-12-09 04:16:36.881438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.442  qpair failed and we were unable to recover it.
00:26:08.442  [2024-12-09 04:16:36.881655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.442  [2024-12-09 04:16:36.881700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.442  qpair failed and we were unable to recover it.
00:26:08.442  [2024-12-09 04:16:36.881885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.442  [2024-12-09 04:16:36.881932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.442  qpair failed and we were unable to recover it.
00:26:08.442  [2024-12-09 04:16:36.882094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.442  [2024-12-09 04:16:36.882141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.442  qpair failed and we were unable to recover it.
00:26:08.442  [2024-12-09 04:16:36.882331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.442  [2024-12-09 04:16:36.882378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.442  qpair failed and we were unable to recover it.
00:26:08.442  [2024-12-09 04:16:36.882590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.442  [2024-12-09 04:16:36.882636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.442  qpair failed and we were unable to recover it.
00:26:08.442  [2024-12-09 04:16:36.882825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.442  [2024-12-09 04:16:36.882872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.442  qpair failed and we were unable to recover it.
00:26:08.442  [2024-12-09 04:16:36.883048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.442  [2024-12-09 04:16:36.883093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.442  qpair failed and we were unable to recover it.
00:26:08.443  [2024-12-09 04:16:36.883307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.443  [2024-12-09 04:16:36.883355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.443  qpair failed and we were unable to recover it.
00:26:08.443  [2024-12-09 04:16:36.883539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.443  [2024-12-09 04:16:36.883587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.443  qpair failed and we were unable to recover it.
00:26:08.443  [2024-12-09 04:16:36.883756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.443  [2024-12-09 04:16:36.883802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.443  qpair failed and we were unable to recover it.
00:26:08.443  [2024-12-09 04:16:36.883974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.443  [2024-12-09 04:16:36.884019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.443  qpair failed and we were unable to recover it.
00:26:08.443  [2024-12-09 04:16:36.884168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.443  [2024-12-09 04:16:36.884214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.443  qpair failed and we were unable to recover it.
00:26:08.443  [2024-12-09 04:16:36.884352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.443  [2024-12-09 04:16:36.884398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.443  qpair failed and we were unable to recover it.
00:26:08.443  [2024-12-09 04:16:36.884543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.443  [2024-12-09 04:16:36.884589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.443  qpair failed and we were unable to recover it.
00:26:08.443  [2024-12-09 04:16:36.884766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.443  [2024-12-09 04:16:36.884820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.443  qpair failed and we were unable to recover it.
00:26:08.443  [2024-12-09 04:16:36.884977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.443  [2024-12-09 04:16:36.885023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.443  qpair failed and we were unable to recover it.
00:26:08.443  [2024-12-09 04:16:36.885241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.443  [2024-12-09 04:16:36.885297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.443  qpair failed and we were unable to recover it.
00:26:08.443  [2024-12-09 04:16:36.885515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.443  [2024-12-09 04:16:36.885561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.443  qpair failed and we were unable to recover it.
00:26:08.443  [2024-12-09 04:16:36.885737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.443  [2024-12-09 04:16:36.885784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.443  qpair failed and we were unable to recover it.
00:26:08.443  [2024-12-09 04:16:36.886002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.443  [2024-12-09 04:16:36.886048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.443  qpair failed and we were unable to recover it.
00:26:08.443  [2024-12-09 04:16:36.886218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.443  [2024-12-09 04:16:36.886263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.443  qpair failed and we were unable to recover it.
00:26:08.443  [2024-12-09 04:16:36.886449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.443  [2024-12-09 04:16:36.886496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.443  qpair failed and we were unable to recover it.
00:26:08.443  [2024-12-09 04:16:36.886685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.443  [2024-12-09 04:16:36.886730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.443  qpair failed and we were unable to recover it.
00:26:08.443  [2024-12-09 04:16:36.886909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.443  [2024-12-09 04:16:36.886954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.443  qpair failed and we were unable to recover it.
00:26:08.443  [2024-12-09 04:16:36.887089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.443  [2024-12-09 04:16:36.887134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.443  qpair failed and we were unable to recover it.
00:26:08.443  [2024-12-09 04:16:36.887350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.443  [2024-12-09 04:16:36.887397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.443  qpair failed and we were unable to recover it.
00:26:08.443  [2024-12-09 04:16:36.887538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.443  [2024-12-09 04:16:36.887584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.443  qpair failed and we were unable to recover it.
00:26:08.443  [2024-12-09 04:16:36.887805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.443  [2024-12-09 04:16:36.887851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.443  qpair failed and we were unable to recover it.
00:26:08.443  [2024-12-09 04:16:36.888008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.443  [2024-12-09 04:16:36.888053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.443  qpair failed and we were unable to recover it.
00:26:08.443  [2024-12-09 04:16:36.888236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.443  [2024-12-09 04:16:36.888292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.443  qpair failed and we were unable to recover it.
00:26:08.443  [2024-12-09 04:16:36.888509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.443  [2024-12-09 04:16:36.888554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.443  qpair failed and we were unable to recover it.
00:26:08.443  [2024-12-09 04:16:36.888742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.443  [2024-12-09 04:16:36.888788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.443  qpair failed and we were unable to recover it.
00:26:08.443  [2024-12-09 04:16:36.888973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.443  [2024-12-09 04:16:36.889018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.443  qpair failed and we were unable to recover it.
00:26:08.443  [2024-12-09 04:16:36.889190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.443  [2024-12-09 04:16:36.889235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.443  qpair failed and we were unable to recover it.
00:26:08.443  [2024-12-09 04:16:36.889467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.443  [2024-12-09 04:16:36.889514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.443  qpair failed and we were unable to recover it.
00:26:08.443  [2024-12-09 04:16:36.889701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.443  [2024-12-09 04:16:36.889747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.443  qpair failed and we were unable to recover it.
00:26:08.443  [2024-12-09 04:16:36.889899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.443  [2024-12-09 04:16:36.889944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.443  qpair failed and we were unable to recover it.
00:26:08.443  [2024-12-09 04:16:36.890140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.443  [2024-12-09 04:16:36.890188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.443  qpair failed and we were unable to recover it.
00:26:08.443  [2024-12-09 04:16:36.890373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.443  [2024-12-09 04:16:36.890423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.443  qpair failed and we were unable to recover it.
00:26:08.443  [2024-12-09 04:16:36.890644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.443  [2024-12-09 04:16:36.890689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.443  qpair failed and we were unable to recover it.
00:26:08.443  [2024-12-09 04:16:36.890868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.443  [2024-12-09 04:16:36.890914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.443  qpair failed and we were unable to recover it.
00:26:08.443  [2024-12-09 04:16:36.891110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.443  [2024-12-09 04:16:36.891156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.443  qpair failed and we were unable to recover it.
00:26:08.443  [2024-12-09 04:16:36.891372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.443  [2024-12-09 04:16:36.891419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.443  qpair failed and we were unable to recover it.
00:26:08.443  [2024-12-09 04:16:36.891575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.443  [2024-12-09 04:16:36.891633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.443  qpair failed and we were unable to recover it.
00:26:08.443  [2024-12-09 04:16:36.891805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.443  [2024-12-09 04:16:36.891862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.444  qpair failed and we were unable to recover it.
00:26:08.444  [2024-12-09 04:16:36.892110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.444  [2024-12-09 04:16:36.892162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.444  qpair failed and we were unable to recover it.
00:26:08.444  [2024-12-09 04:16:36.892371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.444  [2024-12-09 04:16:36.892425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.444  qpair failed and we were unable to recover it.
00:26:08.444  [2024-12-09 04:16:36.892699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.444  [2024-12-09 04:16:36.892770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.444  qpair failed and we were unable to recover it.
00:26:08.444  [2024-12-09 04:16:36.893046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.444  [2024-12-09 04:16:36.893098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.444  qpair failed and we were unable to recover it.
00:26:08.444  [2024-12-09 04:16:36.893303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.444  [2024-12-09 04:16:36.893351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.444  qpair failed and we were unable to recover it.
00:26:08.444  [2024-12-09 04:16:36.893483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.444  [2024-12-09 04:16:36.893529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.444  qpair failed and we were unable to recover it.
00:26:08.444  [2024-12-09 04:16:36.893697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.444  [2024-12-09 04:16:36.893743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.444  qpair failed and we were unable to recover it.
00:26:08.444  [2024-12-09 04:16:36.893921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.444  [2024-12-09 04:16:36.893969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.444  qpair failed and we were unable to recover it.
00:26:08.444  [2024-12-09 04:16:36.894126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.444  [2024-12-09 04:16:36.894172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.444  qpair failed and we were unable to recover it.
00:26:08.444  [2024-12-09 04:16:36.894350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.444  [2024-12-09 04:16:36.894405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.444  qpair failed and we were unable to recover it.
00:26:08.444  [2024-12-09 04:16:36.894593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.444  [2024-12-09 04:16:36.894639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.444  qpair failed and we were unable to recover it.
00:26:08.444  [2024-12-09 04:16:36.894795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.444  [2024-12-09 04:16:36.894842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.444  qpair failed and we were unable to recover it.
00:26:08.444  [2024-12-09 04:16:36.895036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.444  [2024-12-09 04:16:36.895083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.444  qpair failed and we were unable to recover it.
00:26:08.444  [2024-12-09 04:16:36.895298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.444  [2024-12-09 04:16:36.895350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.444  qpair failed and we were unable to recover it.
00:26:08.444  [2024-12-09 04:16:36.895522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.444  [2024-12-09 04:16:36.895572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.444  qpair failed and we were unable to recover it.
00:26:08.444  [2024-12-09 04:16:36.895791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.444  [2024-12-09 04:16:36.895836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.444  qpair failed and we were unable to recover it.
00:26:08.444  [2024-12-09 04:16:36.896053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.444  [2024-12-09 04:16:36.896100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.444  qpair failed and we were unable to recover it.
00:26:08.444  [2024-12-09 04:16:36.896293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.444  [2024-12-09 04:16:36.896340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.444  qpair failed and we were unable to recover it.
00:26:08.444  [2024-12-09 04:16:36.896516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.444  [2024-12-09 04:16:36.896562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.444  qpair failed and we were unable to recover it.
00:26:08.444  [2024-12-09 04:16:36.896709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.444  [2024-12-09 04:16:36.896755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.444  qpair failed and we were unable to recover it.
00:26:08.444  [2024-12-09 04:16:36.896914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.444  [2024-12-09 04:16:36.896962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.444  qpair failed and we were unable to recover it.
00:26:08.444  [2024-12-09 04:16:36.897170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.444  [2024-12-09 04:16:36.897215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.444  qpair failed and we were unable to recover it.
00:26:08.444  [2024-12-09 04:16:36.897365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.444  [2024-12-09 04:16:36.897413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.444  qpair failed and we were unable to recover it.
00:26:08.444  [2024-12-09 04:16:36.897562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.444  [2024-12-09 04:16:36.897608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.444  qpair failed and we were unable to recover it.
00:26:08.444  [2024-12-09 04:16:36.897786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.444  [2024-12-09 04:16:36.897833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.444  qpair failed and we were unable to recover it.
00:26:08.444  [2024-12-09 04:16:36.898044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.444  [2024-12-09 04:16:36.898090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.444  qpair failed and we were unable to recover it.
00:26:08.444  [2024-12-09 04:16:36.898292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.444  [2024-12-09 04:16:36.898348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.444  qpair failed and we were unable to recover it.
00:26:08.444  [2024-12-09 04:16:36.898524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.444  [2024-12-09 04:16:36.898569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.444  qpair failed and we were unable to recover it.
00:26:08.444  [2024-12-09 04:16:36.898749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.444  [2024-12-09 04:16:36.898795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.444  qpair failed and we were unable to recover it.
00:26:08.444  [2024-12-09 04:16:36.899022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.444  [2024-12-09 04:16:36.899069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.444  qpair failed and we were unable to recover it.
00:26:08.444  [2024-12-09 04:16:36.899251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.444  [2024-12-09 04:16:36.899313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.444  qpair failed and we were unable to recover it.
00:26:08.444  [2024-12-09 04:16:36.899465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.444  [2024-12-09 04:16:36.899511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.444  qpair failed and we were unable to recover it.
00:26:08.444  [2024-12-09 04:16:36.899730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.444  [2024-12-09 04:16:36.899777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.444  qpair failed and we were unable to recover it.
00:26:08.444  [2024-12-09 04:16:36.899955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.444  [2024-12-09 04:16:36.900002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.444  qpair failed and we were unable to recover it.
00:26:08.444  [2024-12-09 04:16:36.900141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.444  [2024-12-09 04:16:36.900189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.444  qpair failed and we were unable to recover it.
00:26:08.444  [2024-12-09 04:16:36.900405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.444  [2024-12-09 04:16:36.900454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.444  qpair failed and we were unable to recover it.
00:26:08.444  [2024-12-09 04:16:36.900601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.444  [2024-12-09 04:16:36.900648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.444  qpair failed and we were unable to recover it.
00:26:08.444  [2024-12-09 04:16:36.900824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.445  [2024-12-09 04:16:36.900869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.445  qpair failed and we were unable to recover it.
00:26:08.445  [2024-12-09 04:16:36.901047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.445  [2024-12-09 04:16:36.901093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.445  qpair failed and we were unable to recover it.
00:26:08.445  [2024-12-09 04:16:36.901242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.445  [2024-12-09 04:16:36.901298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.445  qpair failed and we were unable to recover it.
00:26:08.445  [2024-12-09 04:16:36.901481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.445  [2024-12-09 04:16:36.901528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.445  qpair failed and we were unable to recover it.
00:26:08.445  [... the three-line error sequence above (posix_sock_create connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7fe5b4000b90, 10.0.0.2:4420 / qpair failed and we were unable to recover it) repeats ~102 more times between 04:16:36.901 and 04:16:36.926 as the connection is retried ...]
00:26:08.447  [2024-12-09 04:16:36.927021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.447  [2024-12-09 04:16:36.927071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.447  qpair failed and we were unable to recover it.
00:26:08.447  [2024-12-09 04:16:36.927255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.447  [2024-12-09 04:16:36.927331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.447  qpair failed and we were unable to recover it.
00:26:08.447  [2024-12-09 04:16:36.927560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.447  [2024-12-09 04:16:36.927609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.448  qpair failed and we were unable to recover it.
00:26:08.448  [2024-12-09 04:16:36.927749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.448  [2024-12-09 04:16:36.927798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.448  qpair failed and we were unable to recover it.
00:26:08.448  [2024-12-09 04:16:36.927982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.448  [2024-12-09 04:16:36.928031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.448  qpair failed and we were unable to recover it.
00:26:08.448  [2024-12-09 04:16:36.928227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.448  [2024-12-09 04:16:36.928290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.448  qpair failed and we were unable to recover it.
00:26:08.448  [2024-12-09 04:16:36.928485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.448  [2024-12-09 04:16:36.928534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.448  qpair failed and we were unable to recover it.
00:26:08.448  [2024-12-09 04:16:36.928701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.448  [2024-12-09 04:16:36.928750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.448  qpair failed and we were unable to recover it.
00:26:08.448  [2024-12-09 04:16:36.928941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.448  [2024-12-09 04:16:36.928991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.448  qpair failed and we were unable to recover it.
00:26:08.448  [2024-12-09 04:16:36.929204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.448  [2024-12-09 04:16:36.929253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.448  qpair failed and we were unable to recover it.
00:26:08.448  [2024-12-09 04:16:36.929435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.448  [2024-12-09 04:16:36.929483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.448  qpair failed and we were unable to recover it.
00:26:08.448  [2024-12-09 04:16:36.929712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.448  [2024-12-09 04:16:36.929760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.448  qpair failed and we were unable to recover it.
00:26:08.448  [2024-12-09 04:16:36.929942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.448  [2024-12-09 04:16:36.929990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.448  qpair failed and we were unable to recover it.
00:26:08.448  [2024-12-09 04:16:36.930210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.448  [2024-12-09 04:16:36.930267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.448  qpair failed and we were unable to recover it.
00:26:08.448  [2024-12-09 04:16:36.930491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.448  [2024-12-09 04:16:36.930540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.448  qpair failed and we were unable to recover it.
00:26:08.448  [2024-12-09 04:16:36.930762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.448  [2024-12-09 04:16:36.930812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.448  qpair failed and we were unable to recover it.
00:26:08.448  [2024-12-09 04:16:36.931006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.448  [2024-12-09 04:16:36.931055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.448  qpair failed and we were unable to recover it.
00:26:08.448  [2024-12-09 04:16:36.931222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.448  [2024-12-09 04:16:36.931286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.448  qpair failed and we were unable to recover it.
00:26:08.448  [2024-12-09 04:16:36.931490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.448  [2024-12-09 04:16:36.931538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.448  qpair failed and we were unable to recover it.
00:26:08.448  [2024-12-09 04:16:36.931730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.448  [2024-12-09 04:16:36.931779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.448  qpair failed and we were unable to recover it.
00:26:08.448  [2024-12-09 04:16:36.932020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.448  [2024-12-09 04:16:36.932071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.448  qpair failed and we were unable to recover it.
00:26:08.448  [2024-12-09 04:16:36.932292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.448  [2024-12-09 04:16:36.932362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.448  qpair failed and we were unable to recover it.
00:26:08.448  [2024-12-09 04:16:36.932594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.448  [2024-12-09 04:16:36.932645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.448  qpair failed and we were unable to recover it.
00:26:08.448  [2024-12-09 04:16:36.932820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.448  [2024-12-09 04:16:36.932868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.448  qpair failed and we were unable to recover it.
00:26:08.448  [2024-12-09 04:16:36.933085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.448  [2024-12-09 04:16:36.933137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.448  qpair failed and we were unable to recover it.
00:26:08.448  [2024-12-09 04:16:36.933421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.448  [2024-12-09 04:16:36.933496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.448  qpair failed and we were unable to recover it.
00:26:08.448  [2024-12-09 04:16:36.933706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.448  [2024-12-09 04:16:36.933774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.448  qpair failed and we were unable to recover it.
00:26:08.448  [2024-12-09 04:16:36.934048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.448  [2024-12-09 04:16:36.934100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.448  qpair failed and we were unable to recover it.
00:26:08.448  [2024-12-09 04:16:36.934310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.448  [2024-12-09 04:16:36.934360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.448  qpair failed and we were unable to recover it.
00:26:08.448  [2024-12-09 04:16:36.934559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.448  [2024-12-09 04:16:36.934609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.448  qpair failed and we were unable to recover it.
00:26:08.448  [2024-12-09 04:16:36.934803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.448  [2024-12-09 04:16:36.934852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.448  qpair failed and we were unable to recover it.
00:26:08.448  [2024-12-09 04:16:36.935043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.448  [2024-12-09 04:16:36.935090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.448  qpair failed and we were unable to recover it.
00:26:08.448  [2024-12-09 04:16:36.935294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.448  [2024-12-09 04:16:36.935349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.448  qpair failed and we were unable to recover it.
00:26:08.448  [2024-12-09 04:16:36.935531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.448  [2024-12-09 04:16:36.935581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.448  qpair failed and we were unable to recover it.
00:26:08.448  [2024-12-09 04:16:36.935807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.448  [2024-12-09 04:16:36.935855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.448  qpair failed and we were unable to recover it.
00:26:08.448  [2024-12-09 04:16:36.935993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.448  [2024-12-09 04:16:36.936041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.448  qpair failed and we were unable to recover it.
00:26:08.448  [2024-12-09 04:16:36.936198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.448  [2024-12-09 04:16:36.936247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.448  qpair failed and we were unable to recover it.
00:26:08.448  [2024-12-09 04:16:36.936443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.448  [2024-12-09 04:16:36.936506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.448  qpair failed and we were unable to recover it.
00:26:08.448  [2024-12-09 04:16:36.936690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.448  [2024-12-09 04:16:36.936735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.448  qpair failed and we were unable to recover it.
00:26:08.448  [2024-12-09 04:16:36.936917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.448  [2024-12-09 04:16:36.936963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.448  qpair failed and we were unable to recover it.
00:26:08.448  [2024-12-09 04:16:36.937163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.449  [2024-12-09 04:16:36.937208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.449  qpair failed and we were unable to recover it.
00:26:08.449  [2024-12-09 04:16:36.937398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.449  [2024-12-09 04:16:36.937446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.449  qpair failed and we were unable to recover it.
00:26:08.449  [2024-12-09 04:16:36.937655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.449  [2024-12-09 04:16:36.937701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.449  qpair failed and we were unable to recover it.
00:26:08.449  [2024-12-09 04:16:36.937885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.449  [2024-12-09 04:16:36.937937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.449  qpair failed and we were unable to recover it.
00:26:08.449  [2024-12-09 04:16:36.938180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.449  [2024-12-09 04:16:36.938232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.449  qpair failed and we were unable to recover it.
00:26:08.449  [2024-12-09 04:16:36.938503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.449  [2024-12-09 04:16:36.938558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.449  qpair failed and we were unable to recover it.
00:26:08.449  [2024-12-09 04:16:36.938723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.449  [2024-12-09 04:16:36.938793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.449  qpair failed and we were unable to recover it.
00:26:08.449  [2024-12-09 04:16:36.939048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.449  [2024-12-09 04:16:36.939100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.449  qpair failed and we were unable to recover it.
00:26:08.449  [2024-12-09 04:16:36.939339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.449  [2024-12-09 04:16:36.939386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.449  qpair failed and we were unable to recover it.
00:26:08.449  [2024-12-09 04:16:36.939544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.449  [2024-12-09 04:16:36.939591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.449  qpair failed and we were unable to recover it.
00:26:08.449  [2024-12-09 04:16:36.939764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.449  [2024-12-09 04:16:36.939809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.449  qpair failed and we were unable to recover it.
00:26:08.449  [2024-12-09 04:16:36.939985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.449  [2024-12-09 04:16:36.940030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.449  qpair failed and we were unable to recover it.
00:26:08.449  [2024-12-09 04:16:36.940209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.449  [2024-12-09 04:16:36.940255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.449  qpair failed and we were unable to recover it.
00:26:08.449  [2024-12-09 04:16:36.940467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.449  [2024-12-09 04:16:36.940523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.449  qpair failed and we were unable to recover it.
00:26:08.449  [2024-12-09 04:16:36.940747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.449  [2024-12-09 04:16:36.940792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.449  qpair failed and we were unable to recover it.
00:26:08.449  [2024-12-09 04:16:36.940911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.449  [2024-12-09 04:16:36.940956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.449  qpair failed and we were unable to recover it.
00:26:08.449  [2024-12-09 04:16:36.941112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.449  [2024-12-09 04:16:36.941158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.449  qpair failed and we were unable to recover it.
00:26:08.449  [2024-12-09 04:16:36.941351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.449  [2024-12-09 04:16:36.941399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.449  qpair failed and we were unable to recover it.
00:26:08.449  [2024-12-09 04:16:36.941575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.449  [2024-12-09 04:16:36.941621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.449  qpair failed and we were unable to recover it.
00:26:08.449  [2024-12-09 04:16:36.941771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.449  [2024-12-09 04:16:36.941818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.449  qpair failed and we were unable to recover it.
00:26:08.449  [2024-12-09 04:16:36.942009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.449  [2024-12-09 04:16:36.942054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.449  qpair failed and we were unable to recover it.
00:26:08.449  [2024-12-09 04:16:36.942203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.449  [2024-12-09 04:16:36.942249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.449  qpair failed and we were unable to recover it.
00:26:08.449  [2024-12-09 04:16:36.942485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.449  [2024-12-09 04:16:36.942531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.449  qpair failed and we were unable to recover it.
00:26:08.449  [2024-12-09 04:16:36.942721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.449  [2024-12-09 04:16:36.942766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.449  qpair failed and we were unable to recover it.
00:26:08.449  [2024-12-09 04:16:36.942950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.449  [2024-12-09 04:16:36.942997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.449  qpair failed and we were unable to recover it.
00:26:08.449  [2024-12-09 04:16:36.943243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.449  [2024-12-09 04:16:36.943326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.449  qpair failed and we were unable to recover it.
00:26:08.449  [2024-12-09 04:16:36.943555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.449  [2024-12-09 04:16:36.943601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.449  qpair failed and we were unable to recover it.
00:26:08.449  [2024-12-09 04:16:36.943810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.449  [2024-12-09 04:16:36.943855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.449  qpair failed and we were unable to recover it.
00:26:08.449  [2024-12-09 04:16:36.944102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.449  [2024-12-09 04:16:36.944153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.449  qpair failed and we were unable to recover it.
00:26:08.449  [2024-12-09 04:16:36.944362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.449  [2024-12-09 04:16:36.944411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.449  qpair failed and we were unable to recover it.
00:26:08.449  [2024-12-09 04:16:36.944627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.449  [2024-12-09 04:16:36.944672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.449  qpair failed and we were unable to recover it.
00:26:08.449  [2024-12-09 04:16:36.944854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.449  [2024-12-09 04:16:36.944901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.449  qpair failed and we were unable to recover it.
00:26:08.449  [2024-12-09 04:16:36.945101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.449  [2024-12-09 04:16:36.945156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.449  qpair failed and we were unable to recover it.
00:26:08.449  [2024-12-09 04:16:36.945366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.449  [2024-12-09 04:16:36.945413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.449  qpair failed and we were unable to recover it.
00:26:08.450  [2024-12-09 04:16:36.945559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.450  [2024-12-09 04:16:36.945604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.450  qpair failed and we were unable to recover it.
00:26:08.450  [2024-12-09 04:16:36.945771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.450  [2024-12-09 04:16:36.945816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.450  qpair failed and we were unable to recover it.
00:26:08.450  [2024-12-09 04:16:36.946006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.450  [2024-12-09 04:16:36.946051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.450  qpair failed and we were unable to recover it.
00:26:08.450  [2024-12-09 04:16:36.946243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.450  [2024-12-09 04:16:36.946316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.450  qpair failed and we were unable to recover it.
00:26:08.450  [2024-12-09 04:16:36.946539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.450  [2024-12-09 04:16:36.946586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.450  qpair failed and we were unable to recover it.
00:26:08.450  [2024-12-09 04:16:36.946775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.450  [2024-12-09 04:16:36.946820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.450  qpair failed and we were unable to recover it.
00:26:08.450  [2024-12-09 04:16:36.947040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.450  [2024-12-09 04:16:36.947086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.450  qpair failed and we were unable to recover it.
00:26:08.450  [2024-12-09 04:16:36.947294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.450  [2024-12-09 04:16:36.947341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.450  qpair failed and we were unable to recover it.
00:26:08.452  [... the three-line error pattern above (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats ~102 more times between 04:16:36.947561 and 04:16:36.971792 ...]
00:26:08.452  qpair failed and we were unable to recover it.
00:26:08.452  [2024-12-09 04:16:36.971973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.452  [2024-12-09 04:16:36.972029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.452  qpair failed and we were unable to recover it.
00:26:08.452  [2024-12-09 04:16:36.972196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.452  [2024-12-09 04:16:36.972242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.452  qpair failed and we were unable to recover it.
00:26:08.453  [2024-12-09 04:16:36.972454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.453  [2024-12-09 04:16:36.972507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.453  qpair failed and we were unable to recover it.
00:26:08.740  [2024-12-09 04:16:36.972698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.740  [2024-12-09 04:16:36.972745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.740  qpair failed and we were unable to recover it.
00:26:08.740  [2024-12-09 04:16:36.972927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.740  [2024-12-09 04:16:36.972974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.740  qpair failed and we were unable to recover it.
00:26:08.740  [2024-12-09 04:16:36.973171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.740  [2024-12-09 04:16:36.973217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.740  qpair failed and we were unable to recover it.
00:26:08.740  [2024-12-09 04:16:36.973436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.740  [2024-12-09 04:16:36.973483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.740  qpair failed and we were unable to recover it.
00:26:08.740  [2024-12-09 04:16:36.973700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.740  [2024-12-09 04:16:36.973747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.740  qpair failed and we were unable to recover it.
00:26:08.740  [2024-12-09 04:16:36.973935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.740  [2024-12-09 04:16:36.973991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.740  qpair failed and we were unable to recover it.
00:26:08.740  [2024-12-09 04:16:36.974185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.740  [2024-12-09 04:16:36.974232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.740  qpair failed and we were unable to recover it.
00:26:08.740  [2024-12-09 04:16:36.974383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.740  [2024-12-09 04:16:36.974430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.740  qpair failed and we were unable to recover it.
00:26:08.740  [2024-12-09 04:16:36.974580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.740  [2024-12-09 04:16:36.974625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.740  qpair failed and we were unable to recover it.
00:26:08.740  [2024-12-09 04:16:36.974819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.740  [2024-12-09 04:16:36.974866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.740  qpair failed and we were unable to recover it.
00:26:08.740  [2024-12-09 04:16:36.975054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.740  [2024-12-09 04:16:36.975138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.740  qpair failed and we were unable to recover it.
00:26:08.740  [2024-12-09 04:16:36.975302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.740  [2024-12-09 04:16:36.975379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.740  qpair failed and we were unable to recover it.
00:26:08.740  [2024-12-09 04:16:36.975604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.740  [2024-12-09 04:16:36.975650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.740  qpair failed and we were unable to recover it.
00:26:08.740  [2024-12-09 04:16:36.975801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.740  [2024-12-09 04:16:36.975848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.740  qpair failed and we were unable to recover it.
00:26:08.740  [2024-12-09 04:16:36.976000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.740  [2024-12-09 04:16:36.976048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.740  qpair failed and we were unable to recover it.
00:26:08.740  [2024-12-09 04:16:36.976242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.740  [2024-12-09 04:16:36.976312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.740  qpair failed and we were unable to recover it.
00:26:08.740  [2024-12-09 04:16:36.976502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.740  [2024-12-09 04:16:36.976547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.740  qpair failed and we were unable to recover it.
00:26:08.740  [2024-12-09 04:16:36.976695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.740  [2024-12-09 04:16:36.976763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.740  qpair failed and we were unable to recover it.
00:26:08.740  [2024-12-09 04:16:36.977000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.740  [2024-12-09 04:16:36.977046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.740  qpair failed and we were unable to recover it.
00:26:08.740  [2024-12-09 04:16:36.977224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.740  [2024-12-09 04:16:36.977288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.740  qpair failed and we were unable to recover it.
00:26:08.740  [2024-12-09 04:16:36.977462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.740  [2024-12-09 04:16:36.977528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.740  qpair failed and we were unable to recover it.
00:26:08.740  [2024-12-09 04:16:36.977716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.740  [2024-12-09 04:16:36.977765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.740  qpair failed and we were unable to recover it.
00:26:08.741  [2024-12-09 04:16:36.977956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.741  [2024-12-09 04:16:36.978003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.741  qpair failed and we were unable to recover it.
00:26:08.741  [2024-12-09 04:16:36.978195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.741  [2024-12-09 04:16:36.978245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.741  qpair failed and we were unable to recover it.
00:26:08.741  [2024-12-09 04:16:36.978444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.741  [2024-12-09 04:16:36.978505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.741  qpair failed and we were unable to recover it.
00:26:08.741  [2024-12-09 04:16:36.978674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.741  [2024-12-09 04:16:36.978728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.741  qpair failed and we were unable to recover it.
00:26:08.741  [2024-12-09 04:16:36.978909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.741  [2024-12-09 04:16:36.978955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.741  qpair failed and we were unable to recover it.
00:26:08.741  [2024-12-09 04:16:36.979103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.741  [2024-12-09 04:16:36.979152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.741  qpair failed and we were unable to recover it.
00:26:08.741  [2024-12-09 04:16:36.979340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.741  [2024-12-09 04:16:36.979389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.741  qpair failed and we were unable to recover it.
00:26:08.741  [2024-12-09 04:16:36.979561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.741  [2024-12-09 04:16:36.979614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.741  qpair failed and we were unable to recover it.
00:26:08.741  [2024-12-09 04:16:36.979835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.741  [2024-12-09 04:16:36.979887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.741  qpair failed and we were unable to recover it.
00:26:08.741  [2024-12-09 04:16:36.980024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.741  [2024-12-09 04:16:36.980077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.741  qpair failed and we were unable to recover it.
00:26:08.741  [2024-12-09 04:16:36.980243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.741  [2024-12-09 04:16:36.980304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.741  qpair failed and we were unable to recover it.
00:26:08.741  [2024-12-09 04:16:36.980489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.741  [2024-12-09 04:16:36.980536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.741  qpair failed and we were unable to recover it.
00:26:08.741  [2024-12-09 04:16:36.980716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.741  [2024-12-09 04:16:36.980765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.741  qpair failed and we were unable to recover it.
00:26:08.741  [2024-12-09 04:16:36.980943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.741  [2024-12-09 04:16:36.980995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.741  qpair failed and we were unable to recover it.
00:26:08.741  [2024-12-09 04:16:36.981187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.741  [2024-12-09 04:16:36.981232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.741  qpair failed and we were unable to recover it.
00:26:08.741  [2024-12-09 04:16:36.981408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.741  [2024-12-09 04:16:36.981454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.741  qpair failed and we were unable to recover it.
00:26:08.741  [2024-12-09 04:16:36.981667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.741  [2024-12-09 04:16:36.981712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.741  qpair failed and we were unable to recover it.
00:26:08.741  [2024-12-09 04:16:36.981861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.741  [2024-12-09 04:16:36.981914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.741  qpair failed and we were unable to recover it.
00:26:08.741  [2024-12-09 04:16:36.982103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.741  [2024-12-09 04:16:36.982150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.741  qpair failed and we were unable to recover it.
00:26:08.741  [2024-12-09 04:16:36.982327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.741  [2024-12-09 04:16:36.982376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.741  qpair failed and we were unable to recover it.
00:26:08.741  [2024-12-09 04:16:36.982593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.741  [2024-12-09 04:16:36.982640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.741  qpair failed and we were unable to recover it.
00:26:08.741  [2024-12-09 04:16:36.982883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.741  [2024-12-09 04:16:36.982934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.741  qpair failed and we were unable to recover it.
00:26:08.741  [2024-12-09 04:16:36.983151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.741  [2024-12-09 04:16:36.983203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.741  qpair failed and we were unable to recover it.
00:26:08.741  [2024-12-09 04:16:36.983446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.741  [2024-12-09 04:16:36.983500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.741  qpair failed and we were unable to recover it.
00:26:08.741  [2024-12-09 04:16:36.983660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.741  [2024-12-09 04:16:36.983706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.741  qpair failed and we were unable to recover it.
00:26:08.741  [2024-12-09 04:16:36.983886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.741  [2024-12-09 04:16:36.983931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.741  qpair failed and we were unable to recover it.
00:26:08.741  [2024-12-09 04:16:36.984085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.741  [2024-12-09 04:16:36.984152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.741  qpair failed and we were unable to recover it.
00:26:08.741  [2024-12-09 04:16:36.984356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.741  [2024-12-09 04:16:36.984404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.741  qpair failed and we were unable to recover it.
00:26:08.741  [2024-12-09 04:16:36.984565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.741  [2024-12-09 04:16:36.984611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.741  qpair failed and we were unable to recover it.
00:26:08.741  [2024-12-09 04:16:36.984829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.741  [2024-12-09 04:16:36.984876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.741  qpair failed and we were unable to recover it.
00:26:08.741  [2024-12-09 04:16:36.985036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.741  [2024-12-09 04:16:36.985082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.741  qpair failed and we were unable to recover it.
00:26:08.741  [2024-12-09 04:16:36.985244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.741  [2024-12-09 04:16:36.985303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.741  qpair failed and we were unable to recover it.
00:26:08.741  [2024-12-09 04:16:36.985437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.741  [2024-12-09 04:16:36.985509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.741  qpair failed and we were unable to recover it.
00:26:08.741  [2024-12-09 04:16:36.985715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.741  [2024-12-09 04:16:36.985759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.741  qpair failed and we were unable to recover it.
00:26:08.741  [2024-12-09 04:16:36.985946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.741  [2024-12-09 04:16:36.985990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.741  qpair failed and we were unable to recover it.
00:26:08.741  [2024-12-09 04:16:36.986149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.741  [2024-12-09 04:16:36.986218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.741  qpair failed and we were unable to recover it.
00:26:08.741  [2024-12-09 04:16:36.986408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.741  [2024-12-09 04:16:36.986452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.741  qpair failed and we were unable to recover it.
00:26:08.741  [2024-12-09 04:16:36.986667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.741  [2024-12-09 04:16:36.986718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.742  qpair failed and we were unable to recover it.
00:26:08.742  [2024-12-09 04:16:36.986904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.742  [2024-12-09 04:16:36.986948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.742  qpair failed and we were unable to recover it.
00:26:08.742  [2024-12-09 04:16:36.987157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.742  [2024-12-09 04:16:36.987202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.742  qpair failed and we were unable to recover it.
00:26:08.742  [2024-12-09 04:16:36.987416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.742  [2024-12-09 04:16:36.987464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.742  qpair failed and we were unable to recover it.
00:26:08.742  [2024-12-09 04:16:36.987611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.742  [2024-12-09 04:16:36.987656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.742  qpair failed and we were unable to recover it.
00:26:08.742  [2024-12-09 04:16:36.987888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.742  [2024-12-09 04:16:36.987942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.742  qpair failed and we were unable to recover it.
00:26:08.742  [2024-12-09 04:16:36.988148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.742  [2024-12-09 04:16:36.988226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.742  qpair failed and we were unable to recover it.
00:26:08.742  [2024-12-09 04:16:36.988469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.742  [2024-12-09 04:16:36.988545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.742  qpair failed and we were unable to recover it.
00:26:08.742  [2024-12-09 04:16:36.988703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.742  [2024-12-09 04:16:36.988774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.742  qpair failed and we were unable to recover it.
00:26:08.742  [2024-12-09 04:16:36.988948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.742  [2024-12-09 04:16:36.989017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.742  qpair failed and we were unable to recover it.
00:26:08.742  [2024-12-09 04:16:36.989246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.742  [2024-12-09 04:16:36.989334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.742  qpair failed and we were unable to recover it.
00:26:08.742  [2024-12-09 04:16:36.989528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.742  [2024-12-09 04:16:36.989573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.742  qpair failed and we were unable to recover it.
00:26:08.742  [2024-12-09 04:16:36.989704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.742  [2024-12-09 04:16:36.989747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.742  qpair failed and we were unable to recover it.
00:26:08.742  [2024-12-09 04:16:36.989915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.742  [2024-12-09 04:16:36.989958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.742  qpair failed and we were unable to recover it.
00:26:08.742  [2024-12-09 04:16:36.990184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.742  [2024-12-09 04:16:36.990234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.742  qpair failed and we were unable to recover it.
00:26:08.742  [2024-12-09 04:16:36.990406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.742  [2024-12-09 04:16:36.990450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.742  qpair failed and we were unable to recover it.
00:26:08.742  [2024-12-09 04:16:36.990609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.742  [2024-12-09 04:16:36.990653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.742  qpair failed and we were unable to recover it.
00:26:08.742  [2024-12-09 04:16:36.990832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.742  [2024-12-09 04:16:36.990876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.742  qpair failed and we were unable to recover it.
00:26:08.742  [2024-12-09 04:16:36.991047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.742  [2024-12-09 04:16:36.991092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.742  qpair failed and we were unable to recover it.
00:26:08.742  [2024-12-09 04:16:36.994421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.742  [2024-12-09 04:16:36.994487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.742  qpair failed and we were unable to recover it.
00:26:08.743  [2024-12-09 04:16:37.001167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.743  [2024-12-09 04:16:37.001234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.743  qpair failed and we were unable to recover it.
00:26:08.743  [2024-12-09 04:16:37.002906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.743  [2024-12-09 04:16:37.002953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.743  qpair failed and we were unable to recover it.
00:26:08.744  [2024-12-09 04:16:37.005246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.744  [2024-12-09 04:16:37.005305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.744  qpair failed and we were unable to recover it.
00:26:08.744  [2024-12-09 04:16:37.005533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.744  [2024-12-09 04:16:37.005598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.744  qpair failed and we were unable to recover it.
00:26:08.745  [2024-12-09 04:16:37.013726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.745  [2024-12-09 04:16:37.013771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.745  qpair failed and we were unable to recover it.
00:26:08.745  [2024-12-09 04:16:37.013942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.745  [2024-12-09 04:16:37.013987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.745  qpair failed and we were unable to recover it.
00:26:08.745  [2024-12-09 04:16:37.014160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.745  [2024-12-09 04:16:37.014204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.745  qpair failed and we were unable to recover it.
00:26:08.745  [2024-12-09 04:16:37.014356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.745  [2024-12-09 04:16:37.014401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.745  qpair failed and we were unable to recover it.
00:26:08.745  [2024-12-09 04:16:37.014558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.745  [2024-12-09 04:16:37.014602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.745  qpair failed and we were unable to recover it.
00:26:08.745  [2024-12-09 04:16:37.014806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.745  [2024-12-09 04:16:37.014856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.745  qpair failed and we were unable to recover it.
00:26:08.745  [2024-12-09 04:16:37.014988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.745  [2024-12-09 04:16:37.015031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.745  qpair failed and we were unable to recover it.
00:26:08.745  [2024-12-09 04:16:37.015235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.745  [2024-12-09 04:16:37.015298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.745  qpair failed and we were unable to recover it.
00:26:08.745  [2024-12-09 04:16:37.015450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.745  [2024-12-09 04:16:37.015503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.745  qpair failed and we were unable to recover it.
00:26:08.745  [2024-12-09 04:16:37.015679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.745  [2024-12-09 04:16:37.015726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.745  qpair failed and we were unable to recover it.
00:26:08.745  [2024-12-09 04:16:37.015899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.745  [2024-12-09 04:16:37.015942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.745  qpair failed and we were unable to recover it.
00:26:08.745  [2024-12-09 04:16:37.016116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.745  [2024-12-09 04:16:37.016159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.745  qpair failed and we were unable to recover it.
00:26:08.745  [2024-12-09 04:16:37.016346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.745  [2024-12-09 04:16:37.016391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.745  qpair failed and we were unable to recover it.
00:26:08.745  [2024-12-09 04:16:37.016528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.745  [2024-12-09 04:16:37.016570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.745  qpair failed and we were unable to recover it.
00:26:08.745  [2024-12-09 04:16:37.016733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.745  [2024-12-09 04:16:37.016777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.745  qpair failed and we were unable to recover it.
00:26:08.745  [2024-12-09 04:16:37.016984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.745  [2024-12-09 04:16:37.017028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.745  qpair failed and we were unable to recover it.
00:26:08.745  [2024-12-09 04:16:37.017200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.745  [2024-12-09 04:16:37.017244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.745  qpair failed and we were unable to recover it.
00:26:08.745  [2024-12-09 04:16:37.017424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.745  [2024-12-09 04:16:37.017468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.745  qpair failed and we were unable to recover it.
00:26:08.745  [2024-12-09 04:16:37.017610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.745  [2024-12-09 04:16:37.017653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.745  qpair failed and we were unable to recover it.
00:26:08.745  [2024-12-09 04:16:37.017834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.745  [2024-12-09 04:16:37.017883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.745  qpair failed and we were unable to recover it.
00:26:08.745  [2024-12-09 04:16:37.018092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.745  [2024-12-09 04:16:37.018138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.745  qpair failed and we were unable to recover it.
00:26:08.745  [2024-12-09 04:16:37.018313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.745  [2024-12-09 04:16:37.018358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.745  qpair failed and we were unable to recover it.
00:26:08.745  [2024-12-09 04:16:37.018501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.745  [2024-12-09 04:16:37.018544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.745  qpair failed and we were unable to recover it.
00:26:08.745  [2024-12-09 04:16:37.018744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.745  [2024-12-09 04:16:37.018789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.745  qpair failed and we were unable to recover it.
00:26:08.745  [2024-12-09 04:16:37.019002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.746  [2024-12-09 04:16:37.019047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.746  qpair failed and we were unable to recover it.
00:26:08.746  [2024-12-09 04:16:37.019218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.746  [2024-12-09 04:16:37.019262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.746  qpair failed and we were unable to recover it.
00:26:08.746  [2024-12-09 04:16:37.019477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.746  [2024-12-09 04:16:37.019522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.746  qpair failed and we were unable to recover it.
00:26:08.746  [2024-12-09 04:16:37.019694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.746  [2024-12-09 04:16:37.019738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.746  qpair failed and we were unable to recover it.
00:26:08.746  [2024-12-09 04:16:37.019952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.746  [2024-12-09 04:16:37.019997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.746  qpair failed and we were unable to recover it.
00:26:08.746  [2024-12-09 04:16:37.020177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.746  [2024-12-09 04:16:37.020221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.746  qpair failed and we were unable to recover it.
00:26:08.746  [2024-12-09 04:16:37.020413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.746  [2024-12-09 04:16:37.020459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.746  qpair failed and we were unable to recover it.
00:26:08.746  [2024-12-09 04:16:37.020632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.746  [2024-12-09 04:16:37.020676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.746  qpair failed and we were unable to recover it.
00:26:08.746  [2024-12-09 04:16:37.020869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.746  [2024-12-09 04:16:37.020916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.746  qpair failed and we were unable to recover it.
00:26:08.746  [2024-12-09 04:16:37.020979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1826f30 (9): Bad file descriptor
00:26:08.746  [2024-12-09 04:16:37.021260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.746  [2024-12-09 04:16:37.021345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.746  qpair failed and we were unable to recover it.
00:26:08.746  [2024-12-09 04:16:37.021497 - 04:16:37.024931] last 3 messages repeated 17 times (connect() failed, errno = 111; sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it)
00:26:08.746  [2024-12-09 04:16:37.025127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.746  [2024-12-09 04:16:37.025175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.746  qpair failed and we were unable to recover it.
00:26:08.746  [2024-12-09 04:16:37.025368 - 04:16:37.033807] last 3 messages repeated 39 times (connect() failed, errno = 111; sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it)
00:26:08.747  [2024-12-09 04:16:37.034004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.747  [2024-12-09 04:16:37.034051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.747  qpair failed and we were unable to recover it.
00:26:08.747  [2024-12-09 04:16:37.034268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.747  [2024-12-09 04:16:37.034346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.747  qpair failed and we were unable to recover it.
00:26:08.747  [2024-12-09 04:16:37.034525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.747  [2024-12-09 04:16:37.034572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.747  qpair failed and we were unable to recover it.
00:26:08.747  [2024-12-09 04:16:37.034767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.747  [2024-12-09 04:16:37.034816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.747  qpair failed and we were unable to recover it.
00:26:08.747  [2024-12-09 04:16:37.035031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.747  [2024-12-09 04:16:37.035078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.747  qpair failed and we were unable to recover it.
00:26:08.747  [2024-12-09 04:16:37.035306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.747  [2024-12-09 04:16:37.035353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.747  qpair failed and we were unable to recover it.
00:26:08.747  [2024-12-09 04:16:37.035570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.747  [2024-12-09 04:16:37.035618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.747  qpair failed and we were unable to recover it.
00:26:08.748  [2024-12-09 04:16:37.035836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.748  [2024-12-09 04:16:37.035889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.748  qpair failed and we were unable to recover it.
00:26:08.748  [2024-12-09 04:16:37.036099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.748  [2024-12-09 04:16:37.036148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.748  qpair failed and we were unable to recover it.
00:26:08.748  [2024-12-09 04:16:37.036395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.748  [2024-12-09 04:16:37.036443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.748  qpair failed and we were unable to recover it.
00:26:08.748  [2024-12-09 04:16:37.036631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.748  [2024-12-09 04:16:37.036677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.748  qpair failed and we were unable to recover it.
00:26:08.748  [2024-12-09 04:16:37.036891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.748  [2024-12-09 04:16:37.036937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.748  qpair failed and we were unable to recover it.
00:26:08.748  [2024-12-09 04:16:37.037186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.748  [2024-12-09 04:16:37.037236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.748  qpair failed and we were unable to recover it.
00:26:08.748  [2024-12-09 04:16:37.037476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.748  [2024-12-09 04:16:37.037527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.748  qpair failed and we were unable to recover it.
00:26:08.748  [2024-12-09 04:16:37.037700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.748  [2024-12-09 04:16:37.037746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.748  qpair failed and we were unable to recover it.
00:26:08.748  [2024-12-09 04:16:37.037939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.748  [2024-12-09 04:16:37.037986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.748  qpair failed and we were unable to recover it.
00:26:08.748  [2024-12-09 04:16:37.038197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.748  [2024-12-09 04:16:37.038247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.748  qpair failed and we were unable to recover it.
00:26:08.748  [2024-12-09 04:16:37.038468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.748  [2024-12-09 04:16:37.038514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.748  qpair failed and we were unable to recover it.
00:26:08.748  [2024-12-09 04:16:37.038720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.748  [2024-12-09 04:16:37.038795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.748  qpair failed and we were unable to recover it.
00:26:08.748  [2024-12-09 04:16:37.038971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.748  [2024-12-09 04:16:37.039021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.748  qpair failed and we were unable to recover it.
00:26:08.748  [2024-12-09 04:16:37.039233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.748  [2024-12-09 04:16:37.039321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.748  qpair failed and we were unable to recover it.
00:26:08.748  [2024-12-09 04:16:37.039525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.748  [2024-12-09 04:16:37.039572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.748  qpair failed and we were unable to recover it.
00:26:08.748  [2024-12-09 04:16:37.039753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.748  [2024-12-09 04:16:37.039801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.748  qpair failed and we were unable to recover it.
00:26:08.748  [2024-12-09 04:16:37.039969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.748  [2024-12-09 04:16:37.040015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.748  qpair failed and we were unable to recover it.
00:26:08.748  [2024-12-09 04:16:37.040188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.748  [2024-12-09 04:16:37.040235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.748  qpair failed and we were unable to recover it.
00:26:08.748  [2024-12-09 04:16:37.040439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.748  [2024-12-09 04:16:37.040487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.748  qpair failed and we were unable to recover it.
00:26:08.748  [2024-12-09 04:16:37.040643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.748  [2024-12-09 04:16:37.040695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.748  qpair failed and we were unable to recover it.
00:26:08.748  [2024-12-09 04:16:37.040881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.748  [2024-12-09 04:16:37.040928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.748  qpair failed and we were unable to recover it.
00:26:08.748  [2024-12-09 04:16:37.041154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.748  [2024-12-09 04:16:37.041197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.748  qpair failed and we were unable to recover it.
00:26:08.748  [2024-12-09 04:16:37.041350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.748  [2024-12-09 04:16:37.041396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.748  qpair failed and we were unable to recover it.
00:26:08.748  [2024-12-09 04:16:37.041543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.748  [2024-12-09 04:16:37.041588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.748  qpair failed and we were unable to recover it.
00:26:08.748  [2024-12-09 04:16:37.041764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.748  [2024-12-09 04:16:37.041809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.748  qpair failed and we were unable to recover it.
00:26:08.748  [2024-12-09 04:16:37.041969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.748  [2024-12-09 04:16:37.042013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.748  qpair failed and we were unable to recover it.
00:26:08.748  [2024-12-09 04:16:37.042226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.748  [2024-12-09 04:16:37.042285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.748  qpair failed and we were unable to recover it.
00:26:08.748  [2024-12-09 04:16:37.042444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.748  [2024-12-09 04:16:37.042509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.748  qpair failed and we were unable to recover it.
00:26:08.748  [2024-12-09 04:16:37.042668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.748  [2024-12-09 04:16:37.042714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.748  qpair failed and we were unable to recover it.
00:26:08.748  [2024-12-09 04:16:37.042883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.748  [2024-12-09 04:16:37.042929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.748  qpair failed and we were unable to recover it.
00:26:08.748  [2024-12-09 04:16:37.043098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.748  [2024-12-09 04:16:37.043143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.748  qpair failed and we were unable to recover it.
00:26:08.748  [2024-12-09 04:16:37.043316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.748  [2024-12-09 04:16:37.043361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.748  qpair failed and we were unable to recover it.
00:26:08.748  [2024-12-09 04:16:37.043548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.748  [2024-12-09 04:16:37.043592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.748  qpair failed and we were unable to recover it.
00:26:08.748  [2024-12-09 04:16:37.043779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.748  [2024-12-09 04:16:37.043822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.748  qpair failed and we were unable to recover it.
00:26:08.748  [2024-12-09 04:16:37.043996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.748  [2024-12-09 04:16:37.044038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.748  qpair failed and we were unable to recover it.
00:26:08.748  [2024-12-09 04:16:37.044187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.748  [2024-12-09 04:16:37.044231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.748  qpair failed and we were unable to recover it.
00:26:08.748  [2024-12-09 04:16:37.044428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.748  [2024-12-09 04:16:37.044482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.748  qpair failed and we were unable to recover it.
00:26:08.748  [2024-12-09 04:16:37.044682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.748  [2024-12-09 04:16:37.044725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.749  qpair failed and we were unable to recover it.
00:26:08.749  [2024-12-09 04:16:37.044894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.749  [2024-12-09 04:16:37.044938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.749  qpair failed and we were unable to recover it.
00:26:08.749  [2024-12-09 04:16:37.045073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.749  [2024-12-09 04:16:37.045117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.749  qpair failed and we were unable to recover it.
00:26:08.749  [2024-12-09 04:16:37.045309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.749  [2024-12-09 04:16:37.045362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.749  qpair failed and we were unable to recover it.
00:26:08.749  [2024-12-09 04:16:37.045548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.749  [2024-12-09 04:16:37.045592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.749  qpair failed and we were unable to recover it.
00:26:08.749  [2024-12-09 04:16:37.045750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.749  [2024-12-09 04:16:37.045800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.749  qpair failed and we were unable to recover it.
00:26:08.749  [2024-12-09 04:16:37.045985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.749  [2024-12-09 04:16:37.046030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.749  qpair failed and we were unable to recover it.
00:26:08.749  [2024-12-09 04:16:37.046209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.749  [2024-12-09 04:16:37.046254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.749  qpair failed and we were unable to recover it.
00:26:08.749  [2024-12-09 04:16:37.046409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.749  [2024-12-09 04:16:37.046455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.749  qpair failed and we were unable to recover it.
00:26:08.749  [2024-12-09 04:16:37.046665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.749  [2024-12-09 04:16:37.046709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.749  qpair failed and we were unable to recover it.
00:26:08.749  [2024-12-09 04:16:37.046872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.749  [2024-12-09 04:16:37.046916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.749  qpair failed and we were unable to recover it.
00:26:08.749  [2024-12-09 04:16:37.047119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.749  [2024-12-09 04:16:37.047162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.749  qpair failed and we were unable to recover it.
00:26:08.749  [2024-12-09 04:16:37.047329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.749  [2024-12-09 04:16:37.047374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.749  qpair failed and we were unable to recover it.
00:26:08.749  [2024-12-09 04:16:37.047542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.749  [2024-12-09 04:16:37.047585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.749  qpair failed and we were unable to recover it.
00:26:08.749  [2024-12-09 04:16:37.047718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.749  [2024-12-09 04:16:37.047762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.749  qpair failed and we were unable to recover it.
00:26:08.749  [2024-12-09 04:16:37.047906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.749  [2024-12-09 04:16:37.047949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.749  qpair failed and we were unable to recover it.
00:26:08.749  [2024-12-09 04:16:37.048131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.749  [2024-12-09 04:16:37.048174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.749  qpair failed and we were unable to recover it.
00:26:08.749  [2024-12-09 04:16:37.048359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.749  [2024-12-09 04:16:37.048405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.749  qpair failed and we were unable to recover it.
00:26:08.749  [2024-12-09 04:16:37.048555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.749  [2024-12-09 04:16:37.048605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.749  qpair failed and we were unable to recover it.
00:26:08.749  [2024-12-09 04:16:37.048773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.749  [2024-12-09 04:16:37.048825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.749  qpair failed and we were unable to recover it.
00:26:08.749  [2024-12-09 04:16:37.048956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.749  [2024-12-09 04:16:37.049006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.749  qpair failed and we were unable to recover it.
00:26:08.749  [2024-12-09 04:16:37.049173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.749  [2024-12-09 04:16:37.049239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.749  qpair failed and we were unable to recover it.
00:26:08.749  [2024-12-09 04:16:37.049449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.749  [2024-12-09 04:16:37.049494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.749  qpair failed and we were unable to recover it.
00:26:08.749  [2024-12-09 04:16:37.049695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.749  [2024-12-09 04:16:37.049739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.749  qpair failed and we were unable to recover it.
00:26:08.749  [2024-12-09 04:16:37.049916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.749  [2024-12-09 04:16:37.049965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.749  qpair failed and we were unable to recover it.
00:26:08.749  [2024-12-09 04:16:37.050130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.749  [2024-12-09 04:16:37.050176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.749  qpair failed and we were unable to recover it.
00:26:08.749  [2024-12-09 04:16:37.050358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.749  [2024-12-09 04:16:37.050405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.749  qpair failed and we were unable to recover it.
00:26:08.749  [2024-12-09 04:16:37.050615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.749  [2024-12-09 04:16:37.050659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.749  qpair failed and we were unable to recover it.
00:26:08.749  [2024-12-09 04:16:37.050867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.749  [2024-12-09 04:16:37.050910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.749  qpair failed and we were unable to recover it.
00:26:08.749  [2024-12-09 04:16:37.051070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.749  [2024-12-09 04:16:37.051112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.749  qpair failed and we were unable to recover it.
00:26:08.749  [2024-12-09 04:16:37.051301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.749  [2024-12-09 04:16:37.051345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.749  qpair failed and we were unable to recover it.
00:26:08.749  [2024-12-09 04:16:37.051487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.749  [2024-12-09 04:16:37.051531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.749  qpair failed and we were unable to recover it.
00:26:08.749  [2024-12-09 04:16:37.051677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.749  [2024-12-09 04:16:37.051724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.749  qpair failed and we were unable to recover it.
00:26:08.750  [2024-12-09 04:16:37.051938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.750  [2024-12-09 04:16:37.051981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.750  qpair failed and we were unable to recover it.
00:26:08.750  [2024-12-09 04:16:37.052132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.750  [2024-12-09 04:16:37.052185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.750  qpair failed and we were unable to recover it.
00:26:08.750  [2024-12-09 04:16:37.052391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.750  [2024-12-09 04:16:37.052437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.750  qpair failed and we were unable to recover it.
00:26:08.750  [2024-12-09 04:16:37.052612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.750  [2024-12-09 04:16:37.052656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.750  qpair failed and we were unable to recover it.
00:26:08.750  [2024-12-09 04:16:37.052790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.750  [2024-12-09 04:16:37.052834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.750  qpair failed and we were unable to recover it.
00:26:08.750  [2024-12-09 04:16:37.053035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.750  [2024-12-09 04:16:37.053078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.750  qpair failed and we were unable to recover it.
00:26:08.750  [2024-12-09 04:16:37.053204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.750  [2024-12-09 04:16:37.053248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.750  qpair failed and we were unable to recover it.
00:26:08.750  [2024-12-09 04:16:37.053490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.750  [2024-12-09 04:16:37.053543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.750  qpair failed and we were unable to recover it.
00:26:08.750  [2024-12-09 04:16:37.053725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.750  [2024-12-09 04:16:37.053769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.750  qpair failed and we were unable to recover it.
00:26:08.750  [2024-12-09 04:16:37.053973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.750  [2024-12-09 04:16:37.054016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.750  qpair failed and we were unable to recover it.
00:26:08.750  [2024-12-09 04:16:37.054156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.750  [2024-12-09 04:16:37.054208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.750  qpair failed and we were unable to recover it.
00:26:08.750  [2024-12-09 04:16:37.054381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.750  [2024-12-09 04:16:37.054426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.750  qpair failed and we were unable to recover it.
00:26:08.750  [2024-12-09 04:16:37.054632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.750  [2024-12-09 04:16:37.054682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.750  qpair failed and we were unable to recover it.
00:26:08.750  [2024-12-09 04:16:37.054827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.750  [2024-12-09 04:16:37.054879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.750  qpair failed and we were unable to recover it.
00:26:08.750  [2024-12-09 04:16:37.055018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.750  [2024-12-09 04:16:37.055066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.750  qpair failed and we were unable to recover it.
00:26:08.750  [2024-12-09 04:16:37.055288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.750  [2024-12-09 04:16:37.055333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.750  qpair failed and we were unable to recover it.
00:26:08.750  [2024-12-09 04:16:37.055474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.750  [2024-12-09 04:16:37.055519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.750  qpair failed and we were unable to recover it.
00:26:08.750  [2024-12-09 04:16:37.055692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.750  [2024-12-09 04:16:37.055735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.750  qpair failed and we were unable to recover it.
00:26:08.750  [2024-12-09 04:16:37.055936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.750  [2024-12-09 04:16:37.055979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.750  qpair failed and we were unable to recover it.
00:26:08.750  [2024-12-09 04:16:37.056147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.750  [2024-12-09 04:16:37.056192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.750  qpair failed and we were unable to recover it.
00:26:08.750  [2024-12-09 04:16:37.056384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.750  [2024-12-09 04:16:37.056429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.750  qpair failed and we were unable to recover it.
00:26:08.750  [2024-12-09 04:16:37.056606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.750  [2024-12-09 04:16:37.056650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.750  qpair failed and we were unable to recover it.
00:26:08.750  [2024-12-09 04:16:37.056798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.750  [2024-12-09 04:16:37.056843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.750  qpair failed and we were unable to recover it.
00:26:08.750  [2024-12-09 04:16:37.057019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.750  [2024-12-09 04:16:37.057063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.750  qpair failed and we were unable to recover it.
00:26:08.750  [2024-12-09 04:16:37.057257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.750  [2024-12-09 04:16:37.057344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.750  qpair failed and we were unable to recover it.
00:26:08.750  [2024-12-09 04:16:37.057552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.750  [2024-12-09 04:16:37.057597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.750  qpair failed and we were unable to recover it.
00:26:08.750  [2024-12-09 04:16:37.057794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.750  [2024-12-09 04:16:37.057858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.750  qpair failed and we were unable to recover it.
00:26:08.750  [2024-12-09 04:16:37.058043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.750  [2024-12-09 04:16:37.058091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.750  qpair failed and we were unable to recover it.
00:26:08.750  [2024-12-09 04:16:37.058245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.750  [2024-12-09 04:16:37.058304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.750  qpair failed and we were unable to recover it.
00:26:08.750  [2024-12-09 04:16:37.058488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.750  [2024-12-09 04:16:37.058531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.750  qpair failed and we were unable to recover it.
00:26:08.750  [2024-12-09 04:16:37.058695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.750  [2024-12-09 04:16:37.058738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.750  qpair failed and we were unable to recover it.
00:26:08.750  [2024-12-09 04:16:37.058906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.750  [2024-12-09 04:16:37.058950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.750  qpair failed and we were unable to recover it.
00:26:08.750  [2024-12-09 04:16:37.059080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.750  [2024-12-09 04:16:37.059121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.750  qpair failed and we were unable to recover it.
00:26:08.750  [2024-12-09 04:16:37.059304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.750  [2024-12-09 04:16:37.059351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.750  qpair failed and we were unable to recover it.
00:26:08.750  [2024-12-09 04:16:37.059510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.750  [2024-12-09 04:16:37.059555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.750  qpair failed and we were unable to recover it.
00:26:08.750  [2024-12-09 04:16:37.059718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.750  [2024-12-09 04:16:37.059761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.750  qpair failed and we were unable to recover it.
00:26:08.750  [2024-12-09 04:16:37.059964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.750  [2024-12-09 04:16:37.060009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.750  qpair failed and we were unable to recover it.
00:26:08.750  [2024-12-09 04:16:37.060236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.750  [2024-12-09 04:16:37.060289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.750  qpair failed and we were unable to recover it.
00:26:08.751  [2024-12-09 04:16:37.060423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.751  [2024-12-09 04:16:37.060468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.751  qpair failed and we were unable to recover it.
00:26:08.751  [2024-12-09 04:16:37.060675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.751  [2024-12-09 04:16:37.060719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.751  qpair failed and we were unable to recover it.
00:26:08.751  [2024-12-09 04:16:37.060889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.751  [2024-12-09 04:16:37.060932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.751  qpair failed and we were unable to recover it.
00:26:08.751  [2024-12-09 04:16:37.061108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.751  [2024-12-09 04:16:37.061151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.751  qpair failed and we were unable to recover it.
00:26:08.751  [2024-12-09 04:16:37.061307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.751  [2024-12-09 04:16:37.061351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.751  qpair failed and we were unable to recover it.
00:26:08.751  [2024-12-09 04:16:37.061516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.751  [2024-12-09 04:16:37.061561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.751  qpair failed and we were unable to recover it.
00:26:08.751  [2024-12-09 04:16:37.061748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.751  [2024-12-09 04:16:37.061792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.751  qpair failed and we were unable to recover it.
00:26:08.751  [2024-12-09 04:16:37.061956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.751  [2024-12-09 04:16:37.061996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.751  qpair failed and we were unable to recover it.
00:26:08.751  [2024-12-09 04:16:37.062131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.751  [2024-12-09 04:16:37.062175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.751  qpair failed and we were unable to recover it.
00:26:08.751  [2024-12-09 04:16:37.062331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.751  [2024-12-09 04:16:37.062363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.751  qpair failed and we were unable to recover it.
00:26:08.751  [2024-12-09 04:16:37.062464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.751  [2024-12-09 04:16:37.062493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.751  qpair failed and we were unable to recover it.
00:26:08.751  [2024-12-09 04:16:37.062618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.751  [2024-12-09 04:16:37.062690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.751  qpair failed and we were unable to recover it.
00:26:08.751  [2024-12-09 04:16:37.062989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.751  [2024-12-09 04:16:37.063038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.751  qpair failed and we were unable to recover it.
00:26:08.751  [2024-12-09 04:16:37.063178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.751  [2024-12-09 04:16:37.063219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.751  qpair failed and we were unable to recover it.
00:26:08.751  [2024-12-09 04:16:37.063384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.751  [2024-12-09 04:16:37.063415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.751  qpair failed and we were unable to recover it.
00:26:08.751  [2024-12-09 04:16:37.063627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.751  [2024-12-09 04:16:37.063692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.751  qpair failed and we were unable to recover it.
00:26:08.751  [2024-12-09 04:16:37.063878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.751  [2024-12-09 04:16:37.063942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.751  qpair failed and we were unable to recover it.
00:26:08.751  [2024-12-09 04:16:37.064174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.751  [2024-12-09 04:16:37.064240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.751  qpair failed and we were unable to recover it.
00:26:08.751  [2024-12-09 04:16:37.064402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.751  [2024-12-09 04:16:37.064433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.751  qpair failed and we were unable to recover it.
00:26:08.751  [2024-12-09 04:16:37.064553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.751  [2024-12-09 04:16:37.064603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.751  qpair failed and we were unable to recover it.
00:26:08.751  [2024-12-09 04:16:37.064755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.751  [2024-12-09 04:16:37.064802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.751  qpair failed and we were unable to recover it.
00:26:08.751  [2024-12-09 04:16:37.064987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.751  [2024-12-09 04:16:37.065036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.751  qpair failed and we were unable to recover it.
00:26:08.751  [2024-12-09 04:16:37.065188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.751  [2024-12-09 04:16:37.065217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.751  qpair failed and we were unable to recover it.
00:26:08.751  [2024-12-09 04:16:37.065379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.751  [2024-12-09 04:16:37.065409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.751  qpair failed and we were unable to recover it.
00:26:08.751  [2024-12-09 04:16:37.065546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.751  [2024-12-09 04:16:37.065616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.751  qpair failed and we were unable to recover it.
00:26:08.751  [2024-12-09 04:16:37.065816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.751  [2024-12-09 04:16:37.065857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.751  qpair failed and we were unable to recover it.
00:26:08.751  [2024-12-09 04:16:37.066027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.751  [2024-12-09 04:16:37.066057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.751  qpair failed and we were unable to recover it.
00:26:08.751  [2024-12-09 04:16:37.066193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.751  [2024-12-09 04:16:37.066223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.751  qpair failed and we were unable to recover it.
00:26:08.751  [2024-12-09 04:16:37.066327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.751  [2024-12-09 04:16:37.066356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.751  qpair failed and we were unable to recover it.
00:26:08.751  [2024-12-09 04:16:37.066455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.751  [2024-12-09 04:16:37.066485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.751  qpair failed and we were unable to recover it.
00:26:08.751  [2024-12-09 04:16:37.066650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.751  [2024-12-09 04:16:37.066691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.751  qpair failed and we were unable to recover it.
00:26:08.751  [2024-12-09 04:16:37.066833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.751  [2024-12-09 04:16:37.066874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.751  qpair failed and we were unable to recover it.
00:26:08.751  [2024-12-09 04:16:37.067071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.751  [2024-12-09 04:16:37.067112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.751  qpair failed and we were unable to recover it.
00:26:08.751  [2024-12-09 04:16:37.067265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.751  [2024-12-09 04:16:37.067301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.751  qpair failed and we were unable to recover it.
00:26:08.751  [2024-12-09 04:16:37.067413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.751  [2024-12-09 04:16:37.067441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.751  qpair failed and we were unable to recover it.
00:26:08.751  [2024-12-09 04:16:37.067570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.751  [2024-12-09 04:16:37.067599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.751  qpair failed and we were unable to recover it.
00:26:08.751  [2024-12-09 04:16:37.067724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.751  [2024-12-09 04:16:37.067753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.751  qpair failed and we were unable to recover it.
00:26:08.751  [2024-12-09 04:16:37.067876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.752  [2024-12-09 04:16:37.067905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.752  qpair failed and we were unable to recover it.
00:26:08.752  [2024-12-09 04:16:37.068031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.752  [2024-12-09 04:16:37.068061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.752  qpair failed and we were unable to recover it.
00:26:08.752  [2024-12-09 04:16:37.068230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.752  [2024-12-09 04:16:37.068288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.752  qpair failed and we were unable to recover it.
00:26:08.752  [2024-12-09 04:16:37.068412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.752  [2024-12-09 04:16:37.068454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.752  qpair failed and we were unable to recover it.
00:26:08.752  [2024-12-09 04:16:37.068619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.752  [2024-12-09 04:16:37.068660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.752  qpair failed and we were unable to recover it.
00:26:08.752  [2024-12-09 04:16:37.068826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.752  [2024-12-09 04:16:37.068867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.752  qpair failed and we were unable to recover it.
00:26:08.752  [2024-12-09 04:16:37.069023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.752  [2024-12-09 04:16:37.069065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.752  qpair failed and we were unable to recover it.
00:26:08.752  [2024-12-09 04:16:37.069241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.752  [2024-12-09 04:16:37.069296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.752  qpair failed and we were unable to recover it.
00:26:08.752  [2024-12-09 04:16:37.069470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.752  [2024-12-09 04:16:37.069511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.752  qpair failed and we were unable to recover it.
00:26:08.752  [2024-12-09 04:16:37.069671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.752  [2024-12-09 04:16:37.069711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.752  qpair failed and we were unable to recover it.
00:26:08.752  [2024-12-09 04:16:37.069866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.752  [2024-12-09 04:16:37.069908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.752  qpair failed and we were unable to recover it.
00:26:08.752  [2024-12-09 04:16:37.070058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.752  [2024-12-09 04:16:37.070098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.752  qpair failed and we were unable to recover it.
00:26:08.752  [2024-12-09 04:16:37.070291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.752  [2024-12-09 04:16:37.070333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.752  qpair failed and we were unable to recover it.
00:26:08.752  [2024-12-09 04:16:37.070502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.752  [2024-12-09 04:16:37.070543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.752  qpair failed and we were unable to recover it.
00:26:08.752  [2024-12-09 04:16:37.070697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.752  [2024-12-09 04:16:37.070736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.752  qpair failed and we were unable to recover it.
00:26:08.752  [2024-12-09 04:16:37.070849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.752  [2024-12-09 04:16:37.070897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.752  qpair failed and we were unable to recover it.
00:26:08.752  [2024-12-09 04:16:37.071019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.752  [2024-12-09 04:16:37.071060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.752  qpair failed and we were unable to recover it.
00:26:08.752  [2024-12-09 04:16:37.071224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.752  [2024-12-09 04:16:37.071264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.752  qpair failed and we were unable to recover it.
00:26:08.752  [2024-12-09 04:16:37.071409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.752  [2024-12-09 04:16:37.071449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.752  qpair failed and we were unable to recover it.
00:26:08.752  [2024-12-09 04:16:37.071660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.752  [2024-12-09 04:16:37.071701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.752  qpair failed and we were unable to recover it.
00:26:08.752  [2024-12-09 04:16:37.071827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.752  [2024-12-09 04:16:37.071868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.752  qpair failed and we were unable to recover it.
00:26:08.752  [2024-12-09 04:16:37.072005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.752  [2024-12-09 04:16:37.072045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.752  qpair failed and we were unable to recover it.
00:26:08.752  [2024-12-09 04:16:37.072211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.752  [2024-12-09 04:16:37.072253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.752  qpair failed and we were unable to recover it.
00:26:08.752  [2024-12-09 04:16:37.072394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.752  [2024-12-09 04:16:37.072434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.752  qpair failed and we were unable to recover it.
00:26:08.752  [2024-12-09 04:16:37.072601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.752  [2024-12-09 04:16:37.072639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.752  qpair failed and we were unable to recover it.
00:26:08.752  [2024-12-09 04:16:37.072801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.752  [2024-12-09 04:16:37.072840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.752  qpair failed and we were unable to recover it.
00:26:08.752  [2024-12-09 04:16:37.072996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.752  [2024-12-09 04:16:37.073034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.752  qpair failed and we were unable to recover it.
00:26:08.752  [2024-12-09 04:16:37.073187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.752  [2024-12-09 04:16:37.073227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.752  qpair failed and we were unable to recover it.
00:26:08.752  [2024-12-09 04:16:37.073426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.752  [2024-12-09 04:16:37.073465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.752  qpair failed and we were unable to recover it.
00:26:08.752  [2024-12-09 04:16:37.073623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.752  [2024-12-09 04:16:37.073662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.752  qpair failed and we were unable to recover it.
00:26:08.752  [2024-12-09 04:16:37.073829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.752  [2024-12-09 04:16:37.073869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.752  qpair failed and we were unable to recover it.
00:26:08.752  [2024-12-09 04:16:37.073996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.752  [2024-12-09 04:16:37.074035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.752  qpair failed and we were unable to recover it.
00:26:08.752  [2024-12-09 04:16:37.074196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.752  [2024-12-09 04:16:37.074234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.752  qpair failed and we were unable to recover it.
00:26:08.752  [2024-12-09 04:16:37.074401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.752  [2024-12-09 04:16:37.074441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.752  qpair failed and we were unable to recover it.
00:26:08.752  [2024-12-09 04:16:37.074629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.752  [2024-12-09 04:16:37.074668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.752  qpair failed and we were unable to recover it.
00:26:08.752  [2024-12-09 04:16:37.074770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.752  [2024-12-09 04:16:37.074806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.752  qpair failed and we were unable to recover it.
00:26:08.752  [2024-12-09 04:16:37.074959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.752  [2024-12-09 04:16:37.074998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.752  qpair failed and we were unable to recover it.
00:26:08.752  [2024-12-09 04:16:37.075155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.752  [2024-12-09 04:16:37.075193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.752  qpair failed and we were unable to recover it.
00:26:08.752  [2024-12-09 04:16:37.075327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.753  [2024-12-09 04:16:37.075367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.753  qpair failed and we were unable to recover it.
00:26:08.753  [2024-12-09 04:16:37.075521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.753  [2024-12-09 04:16:37.075560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.753  qpair failed and we were unable to recover it.
00:26:08.753  [2024-12-09 04:16:37.075687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.753  [2024-12-09 04:16:37.075727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.753  qpair failed and we were unable to recover it.
00:26:08.753  [2024-12-09 04:16:37.075908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.753  [2024-12-09 04:16:37.075939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.753  qpair failed and we were unable to recover it.
00:26:08.753  [2024-12-09 04:16:37.076026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.753  [2024-12-09 04:16:37.076055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.753  qpair failed and we were unable to recover it.
00:26:08.753  [2024-12-09 04:16:37.076175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.753  [2024-12-09 04:16:37.076205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.753  qpair failed and we were unable to recover it.
00:26:08.753  [2024-12-09 04:16:37.076313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.753  [2024-12-09 04:16:37.076344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.753  qpair failed and we were unable to recover it.
00:26:08.753  [2024-12-09 04:16:37.076435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.753  [2024-12-09 04:16:37.076467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.753  qpair failed and we were unable to recover it.
00:26:08.753  [2024-12-09 04:16:37.076587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.753  [2024-12-09 04:16:37.076616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.753  qpair failed and we were unable to recover it.
00:26:08.753  [2024-12-09 04:16:37.076746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.753  [2024-12-09 04:16:37.076776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.753  qpair failed and we were unable to recover it.
00:26:08.753  [2024-12-09 04:16:37.076869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.753  [2024-12-09 04:16:37.076899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.753  qpair failed and we were unable to recover it.
00:26:08.753  [2024-12-09 04:16:37.076985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.753  [2024-12-09 04:16:37.077015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.753  qpair failed and we were unable to recover it.
00:26:08.753  [2024-12-09 04:16:37.077124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.753  [2024-12-09 04:16:37.077154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.753  qpair failed and we were unable to recover it.
00:26:08.753  [2024-12-09 04:16:37.077247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.753  [2024-12-09 04:16:37.077292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.753  qpair failed and we were unable to recover it.
00:26:08.753  [2024-12-09 04:16:37.077398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.753  [2024-12-09 04:16:37.077427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.753  qpair failed and we were unable to recover it.
00:26:08.753  [2024-12-09 04:16:37.077562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.753  [2024-12-09 04:16:37.077592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.753  qpair failed and we were unable to recover it.
00:26:08.753  [2024-12-09 04:16:37.077683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.753  [2024-12-09 04:16:37.077712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.753  qpair failed and we were unable to recover it.
00:26:08.753  [2024-12-09 04:16:37.077839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.753  [2024-12-09 04:16:37.077868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.753  qpair failed and we were unable to recover it.
00:26:08.753  [2024-12-09 04:16:37.077978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.753  [2024-12-09 04:16:37.078008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.753  qpair failed and we were unable to recover it.
00:26:08.753  [2024-12-09 04:16:37.078099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.753  [2024-12-09 04:16:37.078130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.753  qpair failed and we were unable to recover it.
00:26:08.753  [2024-12-09 04:16:37.078262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.753  [2024-12-09 04:16:37.078299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.753  qpair failed and we were unable to recover it.
00:26:08.753  [2024-12-09 04:16:37.078425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.753  [2024-12-09 04:16:37.078456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.753  qpair failed and we were unable to recover it.
00:26:08.753  [2024-12-09 04:16:37.078626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.753  [2024-12-09 04:16:37.078655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.753  qpair failed and we were unable to recover it.
00:26:08.753  [2024-12-09 04:16:37.078776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.753  [2024-12-09 04:16:37.078805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.753  qpair failed and we were unable to recover it.
00:26:08.753  [2024-12-09 04:16:37.078926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.753  [2024-12-09 04:16:37.078956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.753  qpair failed and we were unable to recover it.
00:26:08.753  [2024-12-09 04:16:37.079035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.753  [2024-12-09 04:16:37.079065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.753  qpair failed and we were unable to recover it.
00:26:08.753  [2024-12-09 04:16:37.079181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.753  [2024-12-09 04:16:37.079211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.753  qpair failed and we were unable to recover it.
00:26:08.753  [2024-12-09 04:16:37.079320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.753  [2024-12-09 04:16:37.079352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.753  qpair failed and we were unable to recover it.
00:26:08.753  [2024-12-09 04:16:37.079447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.753  [2024-12-09 04:16:37.079477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.753  qpair failed and we were unable to recover it.
00:26:08.753  [2024-12-09 04:16:37.079577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.753  [2024-12-09 04:16:37.079607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.753  qpair failed and we were unable to recover it.
00:26:08.753  [2024-12-09 04:16:37.079738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.753  [2024-12-09 04:16:37.079769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.753  qpair failed and we were unable to recover it.
00:26:08.753  [2024-12-09 04:16:37.079904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.753  [2024-12-09 04:16:37.079934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.753  qpair failed and we were unable to recover it.
00:26:08.753  [2024-12-09 04:16:37.080025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.753  [2024-12-09 04:16:37.080056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.753  qpair failed and we were unable to recover it.
00:26:08.753  [2024-12-09 04:16:37.080213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.753  [2024-12-09 04:16:37.080244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.753  qpair failed and we were unable to recover it.
00:26:08.753  [2024-12-09 04:16:37.080388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.753  [2024-12-09 04:16:37.080419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.753  qpair failed and we were unable to recover it.
00:26:08.753  [2024-12-09 04:16:37.080547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.753  [2024-12-09 04:16:37.080577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.753  qpair failed and we were unable to recover it.
00:26:08.753  [2024-12-09 04:16:37.080706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.753  [2024-12-09 04:16:37.080736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.753  qpair failed and we were unable to recover it.
00:26:08.753  [2024-12-09 04:16:37.080833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.753  [2024-12-09 04:16:37.080863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.753  qpair failed and we were unable to recover it.
00:26:08.753  [2024-12-09 04:16:37.080991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.754  [2024-12-09 04:16:37.081021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.754  qpair failed and we were unable to recover it.
00:26:08.754  [2024-12-09 04:16:37.081144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.754  [2024-12-09 04:16:37.081174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.754  qpair failed and we were unable to recover it.
00:26:08.754  [2024-12-09 04:16:37.081258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.754  [2024-12-09 04:16:37.081296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.754  qpair failed and we were unable to recover it.
00:26:08.754  [2024-12-09 04:16:37.081388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.754  [2024-12-09 04:16:37.081419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.754  qpair failed and we were unable to recover it.
00:26:08.754  [2024-12-09 04:16:37.081522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.754  [2024-12-09 04:16:37.081553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.754  qpair failed and we were unable to recover it.
00:26:08.754  [2024-12-09 04:16:37.081705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.754  [2024-12-09 04:16:37.081735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.754  qpair failed and we were unable to recover it.
00:26:08.754  [2024-12-09 04:16:37.081839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.754  [2024-12-09 04:16:37.081874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.754  qpair failed and we were unable to recover it.
00:26:08.754  [2024-12-09 04:16:37.082003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.754  [2024-12-09 04:16:37.082033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.754  qpair failed and we were unable to recover it.
00:26:08.754  [2024-12-09 04:16:37.082124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.754  [2024-12-09 04:16:37.082154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.754  qpair failed and we were unable to recover it.
00:26:08.754  [2024-12-09 04:16:37.082257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.754  [2024-12-09 04:16:37.082297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.754  qpair failed and we were unable to recover it.
00:26:08.754  [2024-12-09 04:16:37.082434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.754  [2024-12-09 04:16:37.082464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.754  qpair failed and we were unable to recover it.
00:26:08.754  [2024-12-09 04:16:37.082557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.754  [2024-12-09 04:16:37.082587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.754  qpair failed and we were unable to recover it.
00:26:08.754  [2024-12-09 04:16:37.082709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.754  [2024-12-09 04:16:37.082739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.754  qpair failed and we were unable to recover it.
00:26:08.754  [2024-12-09 04:16:37.082861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.754  [2024-12-09 04:16:37.082890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.754  qpair failed and we were unable to recover it.
00:26:08.754  [2024-12-09 04:16:37.083007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.754  [2024-12-09 04:16:37.083037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.754  qpair failed and we were unable to recover it.
00:26:08.754  [2024-12-09 04:16:37.083118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.754  [2024-12-09 04:16:37.083147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.754  qpair failed and we were unable to recover it.
00:26:08.754  [2024-12-09 04:16:37.083286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.754  [2024-12-09 04:16:37.083317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.754  qpair failed and we were unable to recover it.
00:26:08.754  [2024-12-09 04:16:37.083406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.754  [2024-12-09 04:16:37.083436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.754  qpair failed and we were unable to recover it.
00:26:08.754  [2024-12-09 04:16:37.083542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.754  [2024-12-09 04:16:37.083572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.754  qpair failed and we were unable to recover it.
00:26:08.754  [2024-12-09 04:16:37.083679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.754  [2024-12-09 04:16:37.083710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.754  qpair failed and we were unable to recover it.
00:26:08.754  [2024-12-09 04:16:37.083847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.754  [2024-12-09 04:16:37.083877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.754  qpair failed and we were unable to recover it.
00:26:08.754  [2024-12-09 04:16:37.084001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.754  [2024-12-09 04:16:37.084030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.754  qpair failed and we were unable to recover it.
00:26:08.754  [2024-12-09 04:16:37.084135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.754  [2024-12-09 04:16:37.084165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.754  qpair failed and we were unable to recover it.
00:26:08.754  [2024-12-09 04:16:37.084253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.754  [2024-12-09 04:16:37.084294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.754  qpair failed and we were unable to recover it.
00:26:08.754  [2024-12-09 04:16:37.084429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.754  [2024-12-09 04:16:37.084458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.754  qpair failed and we were unable to recover it.
00:26:08.754  [2024-12-09 04:16:37.084590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.754  [2024-12-09 04:16:37.084620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.754  qpair failed and we were unable to recover it.
00:26:08.754  [2024-12-09 04:16:37.084737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.754  [2024-12-09 04:16:37.084767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.754  qpair failed and we were unable to recover it.
00:26:08.754  [2024-12-09 04:16:37.084893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.754  [2024-12-09 04:16:37.084924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.754  qpair failed and we were unable to recover it.
00:26:08.754  [2024-12-09 04:16:37.085082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.754  [2024-12-09 04:16:37.085112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.754  qpair failed and we were unable to recover it.
00:26:08.754  [2024-12-09 04:16:37.085238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.754  [2024-12-09 04:16:37.085269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.754  qpair failed and we were unable to recover it.
00:26:08.754  [2024-12-09 04:16:37.085421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.754  [2024-12-09 04:16:37.085450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.754  qpair failed and we were unable to recover it.
00:26:08.754  [2024-12-09 04:16:37.085579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.754  [2024-12-09 04:16:37.085609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.754  qpair failed and we were unable to recover it.
00:26:08.754  [2024-12-09 04:16:37.085697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.754  [2024-12-09 04:16:37.085729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.754  qpair failed and we were unable to recover it.
00:26:08.754  [2024-12-09 04:16:37.085854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.755  [2024-12-09 04:16:37.085884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.755  qpair failed and we were unable to recover it.
00:26:08.755  [2024-12-09 04:16:37.085989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.755  [2024-12-09 04:16:37.086021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.755  qpair failed and we were unable to recover it.
00:26:08.755  [2024-12-09 04:16:37.086139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.755  [2024-12-09 04:16:37.086169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.755  qpair failed and we were unable to recover it.
00:26:08.755  [2024-12-09 04:16:37.086292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.755  [2024-12-09 04:16:37.086323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.755  qpair failed and we were unable to recover it.
00:26:08.755  [2024-12-09 04:16:37.086420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.755  [2024-12-09 04:16:37.086451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.755  qpair failed and we were unable to recover it.
00:26:08.757  [2024-12-09 04:16:37.097399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.757  [2024-12-09 04:16:37.097443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.757  qpair failed and we were unable to recover it.
00:26:08.758  qpair failed and we were unable to recover it.
00:26:08.758  [2024-12-09 04:16:37.102043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.758  [2024-12-09 04:16:37.102072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.758  qpair failed and we were unable to recover it.
00:26:08.758  [2024-12-09 04:16:37.102163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.758  [2024-12-09 04:16:37.102191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.758  qpair failed and we were unable to recover it.
00:26:08.758  [2024-12-09 04:16:37.102281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.758  [2024-12-09 04:16:37.102308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.758  qpair failed and we were unable to recover it.
00:26:08.758  [2024-12-09 04:16:37.102433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.758  [2024-12-09 04:16:37.102464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.758  qpair failed and we were unable to recover it.
00:26:08.758  [2024-12-09 04:16:37.102602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.758  [2024-12-09 04:16:37.102631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.758  qpair failed and we were unable to recover it.
00:26:08.758  [2024-12-09 04:16:37.102775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.758  [2024-12-09 04:16:37.102816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.758  qpair failed and we were unable to recover it.
00:26:08.758  [2024-12-09 04:16:37.102911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.758  [2024-12-09 04:16:37.102940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.758  qpair failed and we were unable to recover it.
00:26:08.758  [2024-12-09 04:16:37.103039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.758  [2024-12-09 04:16:37.103067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.758  qpair failed and we were unable to recover it.
00:26:08.758  [2024-12-09 04:16:37.103186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.758  [2024-12-09 04:16:37.103214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.758  qpair failed and we were unable to recover it.
00:26:08.758  [2024-12-09 04:16:37.103338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.758  [2024-12-09 04:16:37.103367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.758  qpair failed and we were unable to recover it.
00:26:08.758  [2024-12-09 04:16:37.103490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.758  [2024-12-09 04:16:37.103518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.758  qpair failed and we were unable to recover it.
00:26:08.758  [2024-12-09 04:16:37.103601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.758  [2024-12-09 04:16:37.103628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.758  qpair failed and we were unable to recover it.
00:26:08.758  [2024-12-09 04:16:37.103750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.758  [2024-12-09 04:16:37.103779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.758  qpair failed and we were unable to recover it.
00:26:08.758  [2024-12-09 04:16:37.103907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.758  [2024-12-09 04:16:37.103935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.758  qpair failed and we were unable to recover it.
00:26:08.758  [2024-12-09 04:16:37.104014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.758  [2024-12-09 04:16:37.104040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.758  qpair failed and we were unable to recover it.
00:26:08.758  [2024-12-09 04:16:37.104154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.758  [2024-12-09 04:16:37.104182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.758  qpair failed and we were unable to recover it.
00:26:08.758  [2024-12-09 04:16:37.104296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.758  [2024-12-09 04:16:37.104324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.758  qpair failed and we were unable to recover it.
00:26:08.758  [2024-12-09 04:16:37.104407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.758  [2024-12-09 04:16:37.104434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.758  qpair failed and we were unable to recover it.
00:26:08.758  [2024-12-09 04:16:37.104525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.758  [2024-12-09 04:16:37.104554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.758  qpair failed and we were unable to recover it.
00:26:08.758  [2024-12-09 04:16:37.104655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.758  [2024-12-09 04:16:37.104684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.758  qpair failed and we were unable to recover it.
00:26:08.758  [2024-12-09 04:16:37.104797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.758  [2024-12-09 04:16:37.104825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.758  qpair failed and we were unable to recover it.
00:26:08.758  [2024-12-09 04:16:37.104922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.758  [2024-12-09 04:16:37.104950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.758  qpair failed and we were unable to recover it.
00:26:08.758  [2024-12-09 04:16:37.105073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.758  [2024-12-09 04:16:37.105102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.758  qpair failed and we were unable to recover it.
00:26:08.758  [2024-12-09 04:16:37.105251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.758  [2024-12-09 04:16:37.105288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.758  qpair failed and we were unable to recover it.
00:26:08.758  [2024-12-09 04:16:37.105401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.758  [2024-12-09 04:16:37.105428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.758  qpair failed and we were unable to recover it.
00:26:08.758  [2024-12-09 04:16:37.105553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.758  [2024-12-09 04:16:37.105581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.758  qpair failed and we were unable to recover it.
00:26:08.758  [2024-12-09 04:16:37.105673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.758  [2024-12-09 04:16:37.105700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.758  qpair failed and we were unable to recover it.
00:26:08.758  [2024-12-09 04:16:37.105824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.758  [2024-12-09 04:16:37.105854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.758  qpair failed and we were unable to recover it.
00:26:08.758  [2024-12-09 04:16:37.105978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.758  [2024-12-09 04:16:37.106005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.758  qpair failed and we were unable to recover it.
00:26:08.759  [2024-12-09 04:16:37.106120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.759  [2024-12-09 04:16:37.106146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.759  qpair failed and we were unable to recover it.
00:26:08.759  [2024-12-09 04:16:37.106240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.759  [2024-12-09 04:16:37.106270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.759  qpair failed and we were unable to recover it.
00:26:08.759  [2024-12-09 04:16:37.106436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.759  [2024-12-09 04:16:37.106464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.759  qpair failed and we were unable to recover it.
00:26:08.759  [2024-12-09 04:16:37.106553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.759  [2024-12-09 04:16:37.106582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.759  qpair failed and we were unable to recover it.
00:26:08.759  [2024-12-09 04:16:37.106697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.759  [2024-12-09 04:16:37.106726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.759  qpair failed and we were unable to recover it.
00:26:08.759  [2024-12-09 04:16:37.106843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.759  [2024-12-09 04:16:37.106871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.759  qpair failed and we were unable to recover it.
00:26:08.759  [2024-12-09 04:16:37.106989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.759  [2024-12-09 04:16:37.107017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.759  qpair failed and we were unable to recover it.
00:26:08.759  [2024-12-09 04:16:37.107135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.759  [2024-12-09 04:16:37.107162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.759  qpair failed and we were unable to recover it.
00:26:08.759  [2024-12-09 04:16:37.107284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.759  [2024-12-09 04:16:37.107313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.759  qpair failed and we were unable to recover it.
00:26:08.759  [2024-12-09 04:16:37.107433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.759  [2024-12-09 04:16:37.107461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.759  qpair failed and we were unable to recover it.
00:26:08.759  [2024-12-09 04:16:37.107567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.759  [2024-12-09 04:16:37.107596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.759  qpair failed and we were unable to recover it.
00:26:08.759  [2024-12-09 04:16:37.107721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.759  [2024-12-09 04:16:37.107748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.759  qpair failed and we were unable to recover it.
00:26:08.759  [2024-12-09 04:16:37.107896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.759  [2024-12-09 04:16:37.107923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.759  qpair failed and we were unable to recover it.
00:26:08.759  [2024-12-09 04:16:37.108009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.759  [2024-12-09 04:16:37.108035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.759  qpair failed and we were unable to recover it.
00:26:08.759  [2024-12-09 04:16:37.108124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.759  [2024-12-09 04:16:37.108152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.759  qpair failed and we were unable to recover it.
00:26:08.759  [2024-12-09 04:16:37.108279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.759  [2024-12-09 04:16:37.108306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.759  qpair failed and we were unable to recover it.
00:26:08.759  [2024-12-09 04:16:37.108394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.759  [2024-12-09 04:16:37.108420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.759  qpair failed and we were unable to recover it.
00:26:08.759  [2024-12-09 04:16:37.108545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.759  [2024-12-09 04:16:37.108571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.759  qpair failed and we were unable to recover it.
00:26:08.759  [2024-12-09 04:16:37.108677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.759  [2024-12-09 04:16:37.108720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.759  qpair failed and we were unable to recover it.
00:26:08.759  [2024-12-09 04:16:37.108819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.759  [2024-12-09 04:16:37.108847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.759  qpair failed and we were unable to recover it.
00:26:08.759  [2024-12-09 04:16:37.108939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.759  [2024-12-09 04:16:37.108967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.759  qpair failed and we were unable to recover it.
00:26:08.759  [2024-12-09 04:16:37.109090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.759  [2024-12-09 04:16:37.109118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.759  qpair failed and we were unable to recover it.
00:26:08.759  [2024-12-09 04:16:37.109229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.759  [2024-12-09 04:16:37.109256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.759  qpair failed and we were unable to recover it.
00:26:08.759  [2024-12-09 04:16:37.109378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.759  [2024-12-09 04:16:37.109411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.759  qpair failed and we were unable to recover it.
00:26:08.759  [2024-12-09 04:16:37.109525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.759  [2024-12-09 04:16:37.109553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.759  qpair failed and we were unable to recover it.
00:26:08.759  [2024-12-09 04:16:37.109669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.759  [2024-12-09 04:16:37.109697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.759  qpair failed and we were unable to recover it.
00:26:08.760  [2024-12-09 04:16:37.109821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.760  [2024-12-09 04:16:37.109849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.760  qpair failed and we were unable to recover it.
00:26:08.760  [2024-12-09 04:16:37.109966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.760  [2024-12-09 04:16:37.109996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.760  qpair failed and we were unable to recover it.
00:26:08.760  [2024-12-09 04:16:37.110122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.760  [2024-12-09 04:16:37.110150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.760  qpair failed and we were unable to recover it.
00:26:08.760  [2024-12-09 04:16:37.110242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.760  [2024-12-09 04:16:37.110278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.760  qpair failed and we were unable to recover it.
00:26:08.760  [2024-12-09 04:16:37.110369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.760  [2024-12-09 04:16:37.110396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.760  qpair failed and we were unable to recover it.
00:26:08.760  [2024-12-09 04:16:37.110477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.760  [2024-12-09 04:16:37.110506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.760  qpair failed and we were unable to recover it.
00:26:08.760  [2024-12-09 04:16:37.110625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.760  [2024-12-09 04:16:37.110653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.760  qpair failed and we were unable to recover it.
00:26:08.760  [2024-12-09 04:16:37.110772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.760  [2024-12-09 04:16:37.110800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.760  qpair failed and we were unable to recover it.
00:26:08.760  [2024-12-09 04:16:37.110918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.760  [2024-12-09 04:16:37.110946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.760  qpair failed and we were unable to recover it.
00:26:08.760  [2024-12-09 04:16:37.111059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.760  [2024-12-09 04:16:37.111087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.760  qpair failed and we were unable to recover it.
00:26:08.760  [2024-12-09 04:16:37.111201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.760  [2024-12-09 04:16:37.111228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.760  qpair failed and we were unable to recover it.
00:26:08.760  [2024-12-09 04:16:37.111364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.760  [2024-12-09 04:16:37.111392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.760  qpair failed and we were unable to recover it.
00:26:08.760  [2024-12-09 04:16:37.111513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.760  [2024-12-09 04:16:37.111541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.760  qpair failed and we were unable to recover it.
00:26:08.760  [2024-12-09 04:16:37.111627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.760  [2024-12-09 04:16:37.111654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.760  qpair failed and we were unable to recover it.
00:26:08.760  [2024-12-09 04:16:37.111743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.760  [2024-12-09 04:16:37.111770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.760  qpair failed and we were unable to recover it.
00:26:08.760  [2024-12-09 04:16:37.111865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.760  [2024-12-09 04:16:37.111893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.760  qpair failed and we were unable to recover it.
00:26:08.760  [2024-12-09 04:16:37.112010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.760  [2024-12-09 04:16:37.112037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.760  qpair failed and we were unable to recover it.
00:26:08.760  [2024-12-09 04:16:37.112179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.760  [2024-12-09 04:16:37.112206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.760  qpair failed and we were unable to recover it.
00:26:08.760  [2024-12-09 04:16:37.112338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.760  [2024-12-09 04:16:37.112365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.760  qpair failed and we were unable to recover it.
00:26:08.760  [2024-12-09 04:16:37.112455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.760  [2024-12-09 04:16:37.112482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.760  qpair failed and we were unable to recover it.
00:26:08.760  [2024-12-09 04:16:37.112595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.760  [2024-12-09 04:16:37.112621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.760  qpair failed and we were unable to recover it.
00:26:08.760  [2024-12-09 04:16:37.112735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.760  [2024-12-09 04:16:37.112762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.760  qpair failed and we were unable to recover it.
00:26:08.760  [2024-12-09 04:16:37.112875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.760  [2024-12-09 04:16:37.112901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.760  qpair failed and we were unable to recover it.
00:26:08.760  [2024-12-09 04:16:37.112986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.760  [2024-12-09 04:16:37.113018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.760  qpair failed and we were unable to recover it.
00:26:08.760  [2024-12-09 04:16:37.113101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.760  [2024-12-09 04:16:37.113136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.760  qpair failed and we were unable to recover it.
00:26:08.760  [2024-12-09 04:16:37.113255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.760  [2024-12-09 04:16:37.113292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.760  qpair failed and we were unable to recover it.
00:26:08.760  [2024-12-09 04:16:37.113407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.760  [2024-12-09 04:16:37.113434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.760  qpair failed and we were unable to recover it.
00:26:08.760  [2024-12-09 04:16:37.113586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.760  [2024-12-09 04:16:37.113613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.760  qpair failed and we were unable to recover it.
00:26:08.760  [2024-12-09 04:16:37.113720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.760  [2024-12-09 04:16:37.113746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.760  qpair failed and we were unable to recover it.
00:26:08.760  [2024-12-09 04:16:37.113861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.760  [2024-12-09 04:16:37.113889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.760  qpair failed and we were unable to recover it.
00:26:08.760  [2024-12-09 04:16:37.114002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.760  [2024-12-09 04:16:37.114029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.760  qpair failed and we were unable to recover it.
00:26:08.760  [2024-12-09 04:16:37.114169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.760  [2024-12-09 04:16:37.114195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.760  qpair failed and we were unable to recover it.
00:26:08.760  [2024-12-09 04:16:37.114296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.760  [2024-12-09 04:16:37.114323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.760  qpair failed and we were unable to recover it.
00:26:08.760  [2024-12-09 04:16:37.114445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.760  [2024-12-09 04:16:37.114472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.760  qpair failed and we were unable to recover it.
00:26:08.760  [2024-12-09 04:16:37.114560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.760  [2024-12-09 04:16:37.114586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.760  qpair failed and we were unable to recover it.
00:26:08.760  [2024-12-09 04:16:37.114701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.760  [2024-12-09 04:16:37.114727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.760  qpair failed and we were unable to recover it.
00:26:08.760  [2024-12-09 04:16:37.114837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.760  [2024-12-09 04:16:37.114864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.760  qpair failed and we were unable to recover it.
00:26:08.760  [2024-12-09 04:16:37.114972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.760  [2024-12-09 04:16:37.114999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.760  qpair failed and we were unable to recover it.
00:26:08.761  [2024-12-09 04:16:37.115095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.761  [2024-12-09 04:16:37.115122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.761  qpair failed and we were unable to recover it.
00:26:08.761  [2024-12-09 04:16:37.115240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.761  [2024-12-09 04:16:37.115267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.761  qpair failed and we were unable to recover it.
00:26:08.761  [2024-12-09 04:16:37.115375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.761  [2024-12-09 04:16:37.115401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.761  qpair failed and we were unable to recover it.
00:26:08.761  [2024-12-09 04:16:37.115479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.761  [2024-12-09 04:16:37.115506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.761  qpair failed and we were unable to recover it.
00:26:08.761  [2024-12-09 04:16:37.115588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.761  [2024-12-09 04:16:37.115615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.761  qpair failed and we were unable to recover it.
00:26:08.761  [2024-12-09 04:16:37.115699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.761  [2024-12-09 04:16:37.115725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.761  qpair failed and we were unable to recover it.
00:26:08.761  [2024-12-09 04:16:37.115813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.761  [2024-12-09 04:16:37.115839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.761  qpair failed and we were unable to recover it.
00:26:08.761  [2024-12-09 04:16:37.115924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.761  [2024-12-09 04:16:37.115951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.761  qpair failed and we were unable to recover it.
00:26:08.761  [2024-12-09 04:16:37.116095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.761  [2024-12-09 04:16:37.116120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.761  qpair failed and we were unable to recover it.
00:26:08.761  [2024-12-09 04:16:37.116197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.761  [2024-12-09 04:16:37.116224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.761  qpair failed and we were unable to recover it.
00:26:08.761  [2024-12-09 04:16:37.116350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.761  [2024-12-09 04:16:37.116378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.761  qpair failed and we were unable to recover it.
00:26:08.761  [2024-12-09 04:16:37.116461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.761  [2024-12-09 04:16:37.116487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.761  qpair failed and we were unable to recover it.
00:26:08.761  [2024-12-09 04:16:37.116628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.761  [2024-12-09 04:16:37.116654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.761  qpair failed and we were unable to recover it.
00:26:08.761  [2024-12-09 04:16:37.116764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.761  [2024-12-09 04:16:37.116798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.761  qpair failed and we were unable to recover it.
00:26:08.761  [2024-12-09 04:16:37.116882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.761  [2024-12-09 04:16:37.116908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.761  qpair failed and we were unable to recover it.
00:26:08.761  [2024-12-09 04:16:37.117018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.761  [2024-12-09 04:16:37.117044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.761  qpair failed and we were unable to recover it.
00:26:08.761  [2024-12-09 04:16:37.117189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.761  [2024-12-09 04:16:37.117216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.761  qpair failed and we were unable to recover it.
00:26:08.761  [2024-12-09 04:16:37.117297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.761  [2024-12-09 04:16:37.117323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.761  qpair failed and we were unable to recover it.
00:26:08.761  [2024-12-09 04:16:37.117408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.761  [2024-12-09 04:16:37.117434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.761  qpair failed and we were unable to recover it.
00:26:08.761  [2024-12-09 04:16:37.117520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.761  [2024-12-09 04:16:37.117547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.761  qpair failed and we were unable to recover it.
00:26:08.761  [2024-12-09 04:16:37.117625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.761  [2024-12-09 04:16:37.117652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.761  qpair failed and we were unable to recover it.
00:26:08.761  [2024-12-09 04:16:37.117769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.761  [2024-12-09 04:16:37.117795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.761  qpair failed and we were unable to recover it.
00:26:08.761  [2024-12-09 04:16:37.117908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.761  [2024-12-09 04:16:37.117935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.761  qpair failed and we were unable to recover it.
00:26:08.761  [2024-12-09 04:16:37.118046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.761  [2024-12-09 04:16:37.118073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.761  qpair failed and we were unable to recover it.
00:26:08.761  [2024-12-09 04:16:37.118188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.761  [2024-12-09 04:16:37.118214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.761  qpair failed and we were unable to recover it.
00:26:08.761  [2024-12-09 04:16:37.118334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.761  [2024-12-09 04:16:37.118361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.761  qpair failed and we were unable to recover it.
00:26:08.761  [2024-12-09 04:16:37.118437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.761  [2024-12-09 04:16:37.118464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.761  qpair failed and we were unable to recover it.
00:26:08.761  [2024-12-09 04:16:37.118565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.761  [2024-12-09 04:16:37.118605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.761  qpair failed and we were unable to recover it.
00:26:08.761  [2024-12-09 04:16:37.118759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.761  [2024-12-09 04:16:37.118788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.761  qpair failed and we were unable to recover it.
00:26:08.761  [2024-12-09 04:16:37.118905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.762  [2024-12-09 04:16:37.118932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.762  qpair failed and we were unable to recover it.
00:26:08.762  [2024-12-09 04:16:37.119045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.762  [2024-12-09 04:16:37.119072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.762  qpair failed and we were unable to recover it.
00:26:08.762  [2024-12-09 04:16:37.119183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.762  [2024-12-09 04:16:37.119210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.762  qpair failed and we were unable to recover it.
00:26:08.762  [2024-12-09 04:16:37.119323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.762  [2024-12-09 04:16:37.119350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.762  qpair failed and we were unable to recover it.
00:26:08.762  [2024-12-09 04:16:37.119444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.762  [2024-12-09 04:16:37.119471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.762  qpair failed and we were unable to recover it.
00:26:08.762  [2024-12-09 04:16:37.119569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.762  [2024-12-09 04:16:37.119596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.762  qpair failed and we were unable to recover it.
00:26:08.762  [2024-12-09 04:16:37.119679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.762  [2024-12-09 04:16:37.119706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.762  qpair failed and we were unable to recover it.
00:26:08.762  [2024-12-09 04:16:37.119825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.762  [2024-12-09 04:16:37.119851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.762  qpair failed and we were unable to recover it.
00:26:08.762  [2024-12-09 04:16:37.120004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.762  [2024-12-09 04:16:37.120031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.762  qpair failed and we were unable to recover it.
00:26:08.762  [2024-12-09 04:16:37.120171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.762  [2024-12-09 04:16:37.120198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.762  qpair failed and we were unable to recover it.
00:26:08.762  [2024-12-09 04:16:37.120292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.762  [2024-12-09 04:16:37.120318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.762  qpair failed and we were unable to recover it.
00:26:08.762  [2024-12-09 04:16:37.120438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.762  [2024-12-09 04:16:37.120470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.762  qpair failed and we were unable to recover it.
00:26:08.762  [2024-12-09 04:16:37.120593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.762  [2024-12-09 04:16:37.120619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.762  qpair failed and we were unable to recover it.
00:26:08.762  [2024-12-09 04:16:37.120728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.762  [2024-12-09 04:16:37.120757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.762  qpair failed and we were unable to recover it.
00:26:08.762  [2024-12-09 04:16:37.120853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.762  [2024-12-09 04:16:37.120881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.762  qpair failed and we were unable to recover it.
00:26:08.762  [2024-12-09 04:16:37.120992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.762  [2024-12-09 04:16:37.121032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.762  qpair failed and we were unable to recover it.
00:26:08.762  [2024-12-09 04:16:37.121185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.762  [2024-12-09 04:16:37.121214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.762  qpair failed and we were unable to recover it.
00:26:08.762  [2024-12-09 04:16:37.121312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.762  [2024-12-09 04:16:37.121340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.762  qpair failed and we were unable to recover it.
00:26:08.762  [2024-12-09 04:16:37.121424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.762  [2024-12-09 04:16:37.121450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.762  qpair failed and we were unable to recover it.
00:26:08.762  [2024-12-09 04:16:37.121568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.762  [2024-12-09 04:16:37.121594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.762  qpair failed and we were unable to recover it.
00:26:08.762  [2024-12-09 04:16:37.121715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.762  [2024-12-09 04:16:37.121761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.762  qpair failed and we were unable to recover it.
00:26:08.762  [2024-12-09 04:16:37.121875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.762  [2024-12-09 04:16:37.121903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.762  qpair failed and we were unable to recover it.
00:26:08.762  [2024-12-09 04:16:37.122000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.762  [2024-12-09 04:16:37.122028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.762  qpair failed and we were unable to recover it.
00:26:08.762  [2024-12-09 04:16:37.122147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.762  [2024-12-09 04:16:37.122174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.762  qpair failed and we were unable to recover it.
00:26:08.762  [2024-12-09 04:16:37.122289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.762  [2024-12-09 04:16:37.122317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.762  qpair failed and we were unable to recover it.
00:26:08.762  [2024-12-09 04:16:37.122459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.762  [2024-12-09 04:16:37.122486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.762  qpair failed and we were unable to recover it.
00:26:08.762  [2024-12-09 04:16:37.122635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.762  [2024-12-09 04:16:37.122661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.762  qpair failed and we were unable to recover it.
00:26:08.762  [2024-12-09 04:16:37.122770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.762  [2024-12-09 04:16:37.122796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.762  qpair failed and we were unable to recover it.
00:26:08.762  [2024-12-09 04:16:37.122938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.762  [2024-12-09 04:16:37.122966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.762  qpair failed and we were unable to recover it.
00:26:08.762  [2024-12-09 04:16:37.123101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.762  [2024-12-09 04:16:37.123129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.762  qpair failed and we were unable to recover it.
00:26:08.762  [2024-12-09 04:16:37.123251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.762  [2024-12-09 04:16:37.123289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.762  qpair failed and we were unable to recover it.
00:26:08.762  [2024-12-09 04:16:37.123404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.762  [2024-12-09 04:16:37.123431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.762  qpair failed and we were unable to recover it.
00:26:08.762  [2024-12-09 04:16:37.123525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.762  [2024-12-09 04:16:37.123555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.762  qpair failed and we were unable to recover it.
00:26:08.762  [2024-12-09 04:16:37.123643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.762  [2024-12-09 04:16:37.123671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.762  qpair failed and we were unable to recover it.
00:26:08.762  [2024-12-09 04:16:37.123793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.762  [2024-12-09 04:16:37.123823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.762  qpair failed and we were unable to recover it.
00:26:08.762  [2024-12-09 04:16:37.123950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.762  [2024-12-09 04:16:37.123978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.762  qpair failed and we were unable to recover it.
00:26:08.762  [2024-12-09 04:16:37.124087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.763  [2024-12-09 04:16:37.124114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.763  qpair failed and we were unable to recover it.
00:26:08.763  [2024-12-09 04:16:37.124236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.763  [2024-12-09 04:16:37.124263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.763  qpair failed and we were unable to recover it.
00:26:08.763  [2024-12-09 04:16:37.124357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.763  [2024-12-09 04:16:37.124390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.763  qpair failed and we were unable to recover it.
00:26:08.763  [2024-12-09 04:16:37.124515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.763  [2024-12-09 04:16:37.124541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.763  qpair failed and we were unable to recover it.
00:26:08.763  [2024-12-09 04:16:37.124684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.763  [2024-12-09 04:16:37.124712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.763  qpair failed and we were unable to recover it.
00:26:08.763  [2024-12-09 04:16:37.124834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.763  [2024-12-09 04:16:37.124861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.763  qpair failed and we were unable to recover it.
00:26:08.763  [2024-12-09 04:16:37.124982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.763  [2024-12-09 04:16:37.125009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.763  qpair failed and we were unable to recover it.
00:26:08.763  [2024-12-09 04:16:37.125102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.763  [2024-12-09 04:16:37.125130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.763  qpair failed and we were unable to recover it.
00:26:08.763  [2024-12-09 04:16:37.125253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.763  [2024-12-09 04:16:37.125287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.763  qpair failed and we were unable to recover it.
00:26:08.763  [2024-12-09 04:16:37.125383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.763  [2024-12-09 04:16:37.125410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.763  qpair failed and we were unable to recover it.
00:26:08.763  [2024-12-09 04:16:37.125533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.763  [2024-12-09 04:16:37.125560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.763  qpair failed and we were unable to recover it.
00:26:08.763  [2024-12-09 04:16:37.125657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.763  [2024-12-09 04:16:37.125685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.763  qpair failed and we were unable to recover it.
00:26:08.763  [2024-12-09 04:16:37.125768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.763  [2024-12-09 04:16:37.125794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.763  qpair failed and we were unable to recover it.
00:26:08.763  [2024-12-09 04:16:37.125945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.763  [2024-12-09 04:16:37.125972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.763  qpair failed and we were unable to recover it.
00:26:08.763  [2024-12-09 04:16:37.126097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.763  [2024-12-09 04:16:37.126124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.763  qpair failed and we were unable to recover it.
00:26:08.763  [2024-12-09 04:16:37.126307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.763  [2024-12-09 04:16:37.126341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.763  qpair failed and we were unable to recover it.
00:26:08.763  [2024-12-09 04:16:37.126487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.763  [2024-12-09 04:16:37.126519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.763  qpair failed and we were unable to recover it.
00:26:08.763  [2024-12-09 04:16:37.126625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.763  [2024-12-09 04:16:37.126657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.763  qpair failed and we were unable to recover it.
00:26:08.763  [2024-12-09 04:16:37.126812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.763  [2024-12-09 04:16:37.126858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.763  qpair failed and we were unable to recover it.
00:26:08.763  [2024-12-09 04:16:37.127010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.763  [2024-12-09 04:16:37.127048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.763  qpair failed and we were unable to recover it.
00:26:08.763  [2024-12-09 04:16:37.127198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.763  [2024-12-09 04:16:37.127233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.763  qpair failed and we were unable to recover it.
00:26:08.763  [2024-12-09 04:16:37.127379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.763  [2024-12-09 04:16:37.127413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.763  qpair failed and we were unable to recover it.
00:26:08.763  [2024-12-09 04:16:37.127563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.763  [2024-12-09 04:16:37.127597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.763  qpair failed and we were unable to recover it.
00:26:08.763  [2024-12-09 04:16:37.127745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.763  [2024-12-09 04:16:37.127776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.763  qpair failed and we were unable to recover it.
00:26:08.763  [2024-12-09 04:16:37.127880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.763  [2024-12-09 04:16:37.127912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.763  qpair failed and we were unable to recover it.
00:26:08.763  [2024-12-09 04:16:37.128053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.763  [2024-12-09 04:16:37.128084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.763  qpair failed and we were unable to recover it.
00:26:08.763  [2024-12-09 04:16:37.128193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.763  [2024-12-09 04:16:37.128332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.763  qpair failed and we were unable to recover it.
00:26:08.763  [2024-12-09 04:16:37.128505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.763  [2024-12-09 04:16:37.128580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.763  qpair failed and we were unable to recover it.
00:26:08.763  [2024-12-09 04:16:37.128718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.763  [2024-12-09 04:16:37.128752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.763  qpair failed and we were unable to recover it.
00:26:08.763  [2024-12-09 04:16:37.128887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.763  [2024-12-09 04:16:37.128929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.763  qpair failed and we were unable to recover it.
00:26:08.763  [2024-12-09 04:16:37.129038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.763  [2024-12-09 04:16:37.129071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.763  qpair failed and we were unable to recover it.
00:26:08.763  [2024-12-09 04:16:37.129202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.763  [2024-12-09 04:16:37.129235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.763  qpair failed and we were unable to recover it.
00:26:08.763  [2024-12-09 04:16:37.129401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.763  [2024-12-09 04:16:37.129434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.763  qpair failed and we were unable to recover it.
00:26:08.763  [2024-12-09 04:16:37.129530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.763  [2024-12-09 04:16:37.129561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.763  qpair failed and we were unable to recover it.
00:26:08.763  [2024-12-09 04:16:37.129730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.763  [2024-12-09 04:16:37.129764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.763  qpair failed and we were unable to recover it.
00:26:08.763  [2024-12-09 04:16:37.129903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.763  [2024-12-09 04:16:37.129935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.763  qpair failed and we were unable to recover it.
00:26:08.764  [2024-12-09 04:16:37.130062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.764  [2024-12-09 04:16:37.130094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.764  qpair failed and we were unable to recover it.
00:26:08.764  [2024-12-09 04:16:37.130257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.764  [2024-12-09 04:16:37.130297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.764  qpair failed and we were unable to recover it.
00:26:08.764  [2024-12-09 04:16:37.130457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.764  [2024-12-09 04:16:37.130488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.764  qpair failed and we were unable to recover it.
00:26:08.764  [2024-12-09 04:16:37.130638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.764  [2024-12-09 04:16:37.130703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.764  qpair failed and we were unable to recover it.
00:26:08.764  [2024-12-09 04:16:37.130863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.764  [2024-12-09 04:16:37.130895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.764  qpair failed and we were unable to recover it.
00:26:08.764  [2024-12-09 04:16:37.131030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.764  [2024-12-09 04:16:37.131060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.764  qpair failed and we were unable to recover it.
00:26:08.764  [2024-12-09 04:16:37.131195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.764  [2024-12-09 04:16:37.131227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.764  qpair failed and we were unable to recover it.
00:26:08.764  [2024-12-09 04:16:37.131374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.764  [2024-12-09 04:16:37.131444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.764  qpair failed and we were unable to recover it.
00:26:08.764  [2024-12-09 04:16:37.131602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.764  [2024-12-09 04:16:37.131653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.764  qpair failed and we were unable to recover it.
00:26:08.764  [2024-12-09 04:16:37.131848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.764  [2024-12-09 04:16:37.131879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.764  qpair failed and we were unable to recover it.
00:26:08.764  [2024-12-09 04:16:37.132021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.764  [2024-12-09 04:16:37.132054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.764  qpair failed and we were unable to recover it.
00:26:08.764  [2024-12-09 04:16:37.132182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.764  [2024-12-09 04:16:37.132213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.764  qpair failed and we were unable to recover it.
00:26:08.764  [2024-12-09 04:16:37.132356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.764  [2024-12-09 04:16:37.132391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.764  qpair failed and we were unable to recover it.
00:26:08.764  [2024-12-09 04:16:37.132494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.764  [2024-12-09 04:16:37.132525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.764  qpair failed and we were unable to recover it.
00:26:08.764  [2024-12-09 04:16:37.132710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.764  [2024-12-09 04:16:37.132741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.764  qpair failed and we were unable to recover it.
00:26:08.764  [2024-12-09 04:16:37.132839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.764  [2024-12-09 04:16:37.132871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.764  qpair failed and we were unable to recover it.
00:26:08.764  [2024-12-09 04:16:37.133005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.764  [2024-12-09 04:16:37.133036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.764  qpair failed and we were unable to recover it.
00:26:08.764  [2024-12-09 04:16:37.133167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.764  [2024-12-09 04:16:37.133198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.764  qpair failed and we were unable to recover it.
00:26:08.764  [2024-12-09 04:16:37.133328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.764  [2024-12-09 04:16:37.133360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.764  qpair failed and we were unable to recover it.
00:26:08.764  [2024-12-09 04:16:37.133481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.765  [2024-12-09 04:16:37.133511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.765  qpair failed and we were unable to recover it.
00:26:08.765  [2024-12-09 04:16:37.133678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.765  [2024-12-09 04:16:37.133724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.765  qpair failed and we were unable to recover it.
00:26:08.765  [2024-12-09 04:16:37.133889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.765  [2024-12-09 04:16:37.133925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.765  qpair failed and we were unable to recover it.
00:26:08.765  [2024-12-09 04:16:37.134074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.765  [2024-12-09 04:16:37.134106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.765  qpair failed and we were unable to recover it.
00:26:08.765  [2024-12-09 04:16:37.134211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.765  [2024-12-09 04:16:37.134248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.765  qpair failed and we were unable to recover it.
00:26:08.765  [2024-12-09 04:16:37.134355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.765  [2024-12-09 04:16:37.134386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.765  qpair failed and we were unable to recover it.
00:26:08.765  [2024-12-09 04:16:37.134582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.765  [2024-12-09 04:16:37.134639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.765  qpair failed and we were unable to recover it.
00:26:08.765  [2024-12-09 04:16:37.134763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.765  [2024-12-09 04:16:37.134795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.765  qpair failed and we were unable to recover it.
00:26:08.765  [2024-12-09 04:16:37.134931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.765  [2024-12-09 04:16:37.134962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.765  qpair failed and we were unable to recover it.
00:26:08.765  [2024-12-09 04:16:37.135053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.765  [2024-12-09 04:16:37.135086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.765  qpair failed and we were unable to recover it.
00:26:08.765  [2024-12-09 04:16:37.135216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.765  [2024-12-09 04:16:37.135247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.765  qpair failed and we were unable to recover it.
00:26:08.765  [2024-12-09 04:16:37.135401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.765  [2024-12-09 04:16:37.135484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.765  qpair failed and we were unable to recover it.
00:26:08.765  [2024-12-09 04:16:37.135669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.765  [2024-12-09 04:16:37.135739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.765  qpair failed and we were unable to recover it.
00:26:08.765  [2024-12-09 04:16:37.135890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.765  [2024-12-09 04:16:37.135921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.765  qpair failed and we were unable to recover it.
00:26:08.765  [2024-12-09 04:16:37.136078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.765  [2024-12-09 04:16:37.136116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.765  qpair failed and we were unable to recover it.
00:26:08.765  [2024-12-09 04:16:37.136213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.765  [2024-12-09 04:16:37.136245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.765  qpair failed and we were unable to recover it.
00:26:08.765  [2024-12-09 04:16:37.136406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.765  [2024-12-09 04:16:37.136490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.765  qpair failed and we were unable to recover it.
00:26:08.765  [2024-12-09 04:16:37.136673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.765  [2024-12-09 04:16:37.136731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.765  qpair failed and we were unable to recover it.
00:26:08.765  [2024-12-09 04:16:37.136892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.765  [2024-12-09 04:16:37.136924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.765  qpair failed and we were unable to recover it.
00:26:08.765  [2024-12-09 04:16:37.137083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.765  [2024-12-09 04:16:37.137114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.765  qpair failed and we were unable to recover it.
00:26:08.765  [2024-12-09 04:16:37.137242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.765  [2024-12-09 04:16:37.137290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.765  qpair failed and we were unable to recover it.
00:26:08.765  [2024-12-09 04:16:37.137466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.765  [2024-12-09 04:16:37.137524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.765  qpair failed and we were unable to recover it.
00:26:08.765  [2024-12-09 04:16:37.137686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.765  [2024-12-09 04:16:37.137754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.765  qpair failed and we were unable to recover it.
00:26:08.765  [2024-12-09 04:16:37.137912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.765  [2024-12-09 04:16:37.137943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.765  qpair failed and we were unable to recover it.
00:26:08.765  [2024-12-09 04:16:37.138077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.765  [2024-12-09 04:16:37.138109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.765  qpair failed and we were unable to recover it.
00:26:08.765  [2024-12-09 04:16:37.138239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.765  [2024-12-09 04:16:37.138269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.765  qpair failed and we were unable to recover it.
00:26:08.765  [2024-12-09 04:16:37.138378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.765  [2024-12-09 04:16:37.138410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.765  qpair failed and we were unable to recover it.
00:26:08.765  [2024-12-09 04:16:37.138544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.765  [2024-12-09 04:16:37.138576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.765  qpair failed and we were unable to recover it.
00:26:08.765  [2024-12-09 04:16:37.138695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.765  [2024-12-09 04:16:37.138726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.765  qpair failed and we were unable to recover it.
00:26:08.765  [2024-12-09 04:16:37.138857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.765  [2024-12-09 04:16:37.138887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.765  qpair failed and we were unable to recover it.
00:26:08.765  [2024-12-09 04:16:37.139015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.765  [2024-12-09 04:16:37.139046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.765  qpair failed and we were unable to recover it.
00:26:08.765  [2024-12-09 04:16:37.139134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.765  [2024-12-09 04:16:37.139165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.765  qpair failed and we were unable to recover it.
00:26:08.765  [2024-12-09 04:16:37.139297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.765  [2024-12-09 04:16:37.139328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.765  qpair failed and we were unable to recover it.
00:26:08.765  [2024-12-09 04:16:37.139460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.765  [2024-12-09 04:16:37.139491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.765  qpair failed and we were unable to recover it.
00:26:08.765  [2024-12-09 04:16:37.139576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.765  [2024-12-09 04:16:37.139606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.765  qpair failed and we were unable to recover it.
00:26:08.765  [2024-12-09 04:16:37.139732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.765  [2024-12-09 04:16:37.139764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.765  qpair failed and we were unable to recover it.
00:26:08.765  [2024-12-09 04:16:37.139931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.765  [2024-12-09 04:16:37.139962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.765  qpair failed and we were unable to recover it.
00:26:08.765  [2024-12-09 04:16:37.140069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.766  [2024-12-09 04:16:37.140100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.766  qpair failed and we were unable to recover it.
00:26:08.766  [2024-12-09 04:16:37.140260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.766  [2024-12-09 04:16:37.140301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.766  qpair failed and we were unable to recover it.
00:26:08.766  [2024-12-09 04:16:37.140442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.766  [2024-12-09 04:16:37.140474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.766  qpair failed and we were unable to recover it.
00:26:08.766  [2024-12-09 04:16:37.140577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.766  [2024-12-09 04:16:37.140607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.766  qpair failed and we were unable to recover it.
00:26:08.766  [2024-12-09 04:16:37.140720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.766  [2024-12-09 04:16:37.140767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.766  qpair failed and we were unable to recover it.
00:26:08.766  [2024-12-09 04:16:37.140910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.766  [2024-12-09 04:16:37.140945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.766  qpair failed and we were unable to recover it.
00:26:08.766  [2024-12-09 04:16:37.141052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.766  [2024-12-09 04:16:37.141083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.766  qpair failed and we were unable to recover it.
00:26:08.766  [2024-12-09 04:16:37.141255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.766  [2024-12-09 04:16:37.141295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.766  qpair failed and we were unable to recover it.
00:26:08.766  [2024-12-09 04:16:37.141435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.766  [2024-12-09 04:16:37.141468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.766  qpair failed and we were unable to recover it.
00:26:08.766  [2024-12-09 04:16:37.141625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.766  [2024-12-09 04:16:37.141657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.766  qpair failed and we were unable to recover it.
00:26:08.766  [2024-12-09 04:16:37.141788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.766  [2024-12-09 04:16:37.141819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.766  qpair failed and we were unable to recover it.
00:26:08.766  [2024-12-09 04:16:37.141945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.766  [2024-12-09 04:16:37.141977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.766  qpair failed and we were unable to recover it.
00:26:08.766  [2024-12-09 04:16:37.142139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.766  [2024-12-09 04:16:37.142171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.766  qpair failed and we were unable to recover it.
00:26:08.766  [2024-12-09 04:16:37.142308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.766  [2024-12-09 04:16:37.142342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.766  qpair failed and we were unable to recover it.
00:26:08.766  [2024-12-09 04:16:37.142480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.766  [2024-12-09 04:16:37.142519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.766  qpair failed and we were unable to recover it.
00:26:08.766  [2024-12-09 04:16:37.142610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.766  [2024-12-09 04:16:37.142642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.766  qpair failed and we were unable to recover it.
00:26:08.766  [2024-12-09 04:16:37.142773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.766  [2024-12-09 04:16:37.142807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.766  qpair failed and we were unable to recover it.
00:26:08.766  [2024-12-09 04:16:37.142946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.766  [2024-12-09 04:16:37.142984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.766  qpair failed and we were unable to recover it.
00:26:08.766  [2024-12-09 04:16:37.143085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.766  [2024-12-09 04:16:37.143116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.766  qpair failed and we were unable to recover it.
00:26:08.766  [2024-12-09 04:16:37.143231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.766  [2024-12-09 04:16:37.143265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.766  qpair failed and we were unable to recover it.
00:26:08.766  [2024-12-09 04:16:37.143387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.766  [2024-12-09 04:16:37.143419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.766  qpair failed and we were unable to recover it.
00:26:08.766  [2024-12-09 04:16:37.143513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.766  [2024-12-09 04:16:37.143543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.766  qpair failed and we were unable to recover it.
00:26:08.766  [2024-12-09 04:16:37.143696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.766  [2024-12-09 04:16:37.143727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.766  qpair failed and we were unable to recover it.
00:26:08.766  [2024-12-09 04:16:37.143860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.766  [2024-12-09 04:16:37.143890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.766  qpair failed and we were unable to recover it.
00:26:08.766  [2024-12-09 04:16:37.144018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.766  [2024-12-09 04:16:37.144049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.766  qpair failed and we were unable to recover it.
00:26:08.766  [2024-12-09 04:16:37.144144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.766  [2024-12-09 04:16:37.144179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.766  qpair failed and we were unable to recover it.
00:26:08.766  [2024-12-09 04:16:37.144318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.766  [2024-12-09 04:16:37.144350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.766  qpair failed and we were unable to recover it.
00:26:08.766  [2024-12-09 04:16:37.144444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.766  [2024-12-09 04:16:37.144477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.766  qpair failed and we were unable to recover it.
00:26:08.766  [2024-12-09 04:16:37.144611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.766  [2024-12-09 04:16:37.144644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.766  qpair failed and we were unable to recover it.
00:26:08.766  [2024-12-09 04:16:37.144775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.766  [2024-12-09 04:16:37.144847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.766  qpair failed and we were unable to recover it.
00:26:08.766  [2024-12-09 04:16:37.145044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.766  [2024-12-09 04:16:37.145109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.766  qpair failed and we were unable to recover it.
00:26:08.766  [2024-12-09 04:16:37.145303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.766  [2024-12-09 04:16:37.145336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.766  qpair failed and we were unable to recover it.
00:26:08.766  [2024-12-09 04:16:37.145502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.766  [2024-12-09 04:16:37.145535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.766  qpair failed and we were unable to recover it.
00:26:08.766  [2024-12-09 04:16:37.145633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.766  [2024-12-09 04:16:37.145665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.766  qpair failed and we were unable to recover it.
00:26:08.766  [2024-12-09 04:16:37.145762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.766  [2024-12-09 04:16:37.145794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.766  qpair failed and we were unable to recover it.
00:26:08.766  [2024-12-09 04:16:37.145958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.766  [2024-12-09 04:16:37.145991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.766  qpair failed and we were unable to recover it.
00:26:08.766  [2024-12-09 04:16:37.146126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.766  [2024-12-09 04:16:37.146158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.766  qpair failed and we were unable to recover it.
00:26:08.767  [2024-12-09 04:16:37.146263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.767  [2024-12-09 04:16:37.146303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.767  qpair failed and we were unable to recover it.
00:26:08.767  [2024-12-09 04:16:37.146442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.767  [2024-12-09 04:16:37.146474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.767  qpair failed and we were unable to recover it.
00:26:08.767  [2024-12-09 04:16:37.146614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.767  [2024-12-09 04:16:37.146648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.767  qpair failed and we were unable to recover it.
00:26:08.767  [2024-12-09 04:16:37.146760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.767  [2024-12-09 04:16:37.146791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.767  qpair failed and we were unable to recover it.
00:26:08.767  [2024-12-09 04:16:37.146930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.767  [2024-12-09 04:16:37.146962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.767  qpair failed and we were unable to recover it.
00:26:08.767  [2024-12-09 04:16:37.147094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.767  [2024-12-09 04:16:37.147155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.767  qpair failed and we were unable to recover it.
00:26:08.767  [2024-12-09 04:16:37.147286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.767  [2024-12-09 04:16:37.147319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.767  qpair failed and we were unable to recover it.
00:26:08.767  [2024-12-09 04:16:37.147428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.767  [2024-12-09 04:16:37.147465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.767  qpair failed and we were unable to recover it.
00:26:08.767  [2024-12-09 04:16:37.147562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.767  [2024-12-09 04:16:37.147595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.767  qpair failed and we were unable to recover it.
00:26:08.767  [2024-12-09 04:16:37.147728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.767  [2024-12-09 04:16:37.147765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.767  qpair failed and we were unable to recover it.
00:26:08.767  [2024-12-09 04:16:37.147936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.767  [2024-12-09 04:16:37.147971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.767  qpair failed and we were unable to recover it.
00:26:08.767  [2024-12-09 04:16:37.148112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.767  [2024-12-09 04:16:37.148144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.767  qpair failed and we were unable to recover it.
00:26:08.767  [2024-12-09 04:16:37.148284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.767  [2024-12-09 04:16:37.148320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.767  qpair failed and we were unable to recover it.
00:26:08.767  [2024-12-09 04:16:37.148462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.767  [2024-12-09 04:16:37.148518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.767  qpair failed and we were unable to recover it.
00:26:08.767  [2024-12-09 04:16:37.148752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.767  [2024-12-09 04:16:37.148825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.767  qpair failed and we were unable to recover it.
00:26:08.767  [2024-12-09 04:16:37.148992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.767  [2024-12-09 04:16:37.149049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.767  qpair failed and we were unable to recover it.
00:26:08.767  [2024-12-09 04:16:37.149218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.767  [2024-12-09 04:16:37.149308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.767  qpair failed and we were unable to recover it.
00:26:08.767  [2024-12-09 04:16:37.149440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.767  [2024-12-09 04:16:37.149470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.767  qpair failed and we were unable to recover it.
00:26:08.767  [2024-12-09 04:16:37.149600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.767  [2024-12-09 04:16:37.149630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.767  qpair failed and we were unable to recover it.
00:26:08.767  [2024-12-09 04:16:37.149805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.767  [2024-12-09 04:16:37.149836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.767  qpair failed and we were unable to recover it.
00:26:08.767  [2024-12-09 04:16:37.149983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.767  [2024-12-09 04:16:37.150013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.767  qpair failed and we were unable to recover it.
00:26:08.767  [2024-12-09 04:16:37.150156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.767  [2024-12-09 04:16:37.150191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.767  qpair failed and we were unable to recover it.
00:26:08.767  [2024-12-09 04:16:37.150332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.767  [2024-12-09 04:16:37.150366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.767  qpair failed and we were unable to recover it.
00:26:08.767  [2024-12-09 04:16:37.150499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.767  [2024-12-09 04:16:37.150533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.767  qpair failed and we were unable to recover it.
00:26:08.767  [2024-12-09 04:16:37.150655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.767  [2024-12-09 04:16:37.150687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.767  qpair failed and we were unable to recover it.
00:26:08.768  [2024-12-09 04:16:37.150827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.768  [2024-12-09 04:16:37.150859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.768  qpair failed and we were unable to recover it.
00:26:08.768  [2024-12-09 04:16:37.151001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.768  [2024-12-09 04:16:37.151032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.768  qpair failed and we were unable to recover it.
00:26:08.768  [2024-12-09 04:16:37.151192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.768  [2024-12-09 04:16:37.151222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.768  qpair failed and we were unable to recover it.
00:26:08.768  [2024-12-09 04:16:37.151363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.768  [2024-12-09 04:16:37.151395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.768  qpair failed and we were unable to recover it.
00:26:08.768  [2024-12-09 04:16:37.151529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.768  [2024-12-09 04:16:37.151562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.768  qpair failed and we were unable to recover it.
00:26:08.768  [2024-12-09 04:16:37.151730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.768  [2024-12-09 04:16:37.151765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.768  qpair failed and we were unable to recover it.
00:26:08.768  [2024-12-09 04:16:37.151863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.768  [2024-12-09 04:16:37.151897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.768  qpair failed and we were unable to recover it.
00:26:08.768  [2024-12-09 04:16:37.152028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.768  [2024-12-09 04:16:37.152061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.768  qpair failed and we were unable to recover it.
00:26:08.768  [2024-12-09 04:16:37.152156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.768  [2024-12-09 04:16:37.152185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.768  qpair failed and we were unable to recover it.
00:26:08.768  [2024-12-09 04:16:37.152324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.768  [2024-12-09 04:16:37.152361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.768  qpair failed and we were unable to recover it.
00:26:08.768  [2024-12-09 04:16:37.152499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.768  [2024-12-09 04:16:37.152531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.768  qpair failed and we were unable to recover it.
00:26:08.768  [2024-12-09 04:16:37.152665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.768  [2024-12-09 04:16:37.152697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.768  qpair failed and we were unable to recover it.
00:26:08.768  [2024-12-09 04:16:37.152830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.768  [2024-12-09 04:16:37.152861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.768  qpair failed and we were unable to recover it.
00:26:08.768  [2024-12-09 04:16:37.152958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.768  [2024-12-09 04:16:37.152990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.768  qpair failed and we were unable to recover it.
00:26:08.768  [2024-12-09 04:16:37.153104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.768  [2024-12-09 04:16:37.153138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.768  qpair failed and we were unable to recover it.
00:26:08.768  [2024-12-09 04:16:37.153286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.768  [2024-12-09 04:16:37.153321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.768  qpair failed and we were unable to recover it.
00:26:08.768  [2024-12-09 04:16:37.153460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.768  [2024-12-09 04:16:37.153495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.768  qpair failed and we were unable to recover it.
00:26:08.768  [2024-12-09 04:16:37.153644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.768  [2024-12-09 04:16:37.153678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.768  qpair failed and we were unable to recover it.
00:26:08.768  [2024-12-09 04:16:37.153820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.768  [2024-12-09 04:16:37.153856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.768  qpair failed and we were unable to recover it.
00:26:08.768  [2024-12-09 04:16:37.153992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.768  [2024-12-09 04:16:37.154026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.768  qpair failed and we were unable to recover it.
00:26:08.768  [2024-12-09 04:16:37.154160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.768  [2024-12-09 04:16:37.154194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.768  qpair failed and we were unable to recover it.
00:26:08.768  [2024-12-09 04:16:37.154302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.768  [2024-12-09 04:16:37.154337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.768  qpair failed and we were unable to recover it.
00:26:08.768  [2024-12-09 04:16:37.154452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.768  [2024-12-09 04:16:37.154492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.768  qpair failed and we were unable to recover it.
00:26:08.768  [2024-12-09 04:16:37.154637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.768  [2024-12-09 04:16:37.154671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.768  qpair failed and we were unable to recover it.
00:26:08.768  [2024-12-09 04:16:37.154806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.768  [2024-12-09 04:16:37.154840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.768  qpair failed and we were unable to recover it.
00:26:08.768  [2024-12-09 04:16:37.155007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.768  [2024-12-09 04:16:37.155041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.768  qpair failed and we were unable to recover it.
00:26:08.768  [2024-12-09 04:16:37.155132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.768  [2024-12-09 04:16:37.155167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.768  qpair failed and we were unable to recover it.
00:26:08.768  [2024-12-09 04:16:37.155310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.768  [2024-12-09 04:16:37.155346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.768  qpair failed and we were unable to recover it.
00:26:08.768  [2024-12-09 04:16:37.155513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.768  [2024-12-09 04:16:37.155546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.768  qpair failed and we were unable to recover it.
00:26:08.768  [2024-12-09 04:16:37.155647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.768  [2024-12-09 04:16:37.155681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.768  qpair failed and we were unable to recover it.
00:26:08.768  [2024-12-09 04:16:37.155851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.768  [2024-12-09 04:16:37.155885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.768  qpair failed and we were unable to recover it.
00:26:08.768  [2024-12-09 04:16:37.156020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.768  [2024-12-09 04:16:37.156054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.768  qpair failed and we were unable to recover it.
00:26:08.768  [2024-12-09 04:16:37.156218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.768  [2024-12-09 04:16:37.156252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.768  qpair failed and we were unable to recover it.
00:26:08.768  [2024-12-09 04:16:37.156373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.768  [2024-12-09 04:16:37.156456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.768  qpair failed and we were unable to recover it.
00:26:08.768  [2024-12-09 04:16:37.156591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.768  [2024-12-09 04:16:37.156664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.768  qpair failed and we were unable to recover it.
00:26:08.768  [2024-12-09 04:16:37.156826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.768  [2024-12-09 04:16:37.156861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.768  qpair failed and we were unable to recover it.
00:26:08.768  [2024-12-09 04:16:37.156975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.768  [2024-12-09 04:16:37.157011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.768  qpair failed and we were unable to recover it.
00:26:08.768  [2024-12-09 04:16:37.157153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.769  [2024-12-09 04:16:37.157187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.769  qpair failed and we were unable to recover it.
00:26:08.769  [2024-12-09 04:16:37.157325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.769  [2024-12-09 04:16:37.157363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.769  qpair failed and we were unable to recover it.
00:26:08.769  [2024-12-09 04:16:37.157508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.769  [2024-12-09 04:16:37.157542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.769  qpair failed and we were unable to recover it.
00:26:08.769  [2024-12-09 04:16:37.157678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.769  [2024-12-09 04:16:37.157710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.769  qpair failed and we were unable to recover it.
00:26:08.769  [2024-12-09 04:16:37.157826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.769  [2024-12-09 04:16:37.157862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.769  qpair failed and we were unable to recover it.
00:26:08.769  [2024-12-09 04:16:37.157974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.769  [2024-12-09 04:16:37.158007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.769  qpair failed and we were unable to recover it.
00:26:08.769  [2024-12-09 04:16:37.158152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.769  [2024-12-09 04:16:37.158186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.769  qpair failed and we were unable to recover it.
00:26:08.769  [2024-12-09 04:16:37.158316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.769  [2024-12-09 04:16:37.158351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.769  qpair failed and we were unable to recover it.
00:26:08.769  [2024-12-09 04:16:37.158515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.769  [2024-12-09 04:16:37.158548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.769  qpair failed and we were unable to recover it.
00:26:08.769  [2024-12-09 04:16:37.158661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.769  [2024-12-09 04:16:37.158698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.769  qpair failed and we were unable to recover it.
00:26:08.769  [2024-12-09 04:16:37.158817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.769  [2024-12-09 04:16:37.158855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.769  qpair failed and we were unable to recover it.
00:26:08.772  [previous 3 lines repeated 101 more times through 04:16:37.177368, tqpair alternating between 0x7fe5b4000b90 and 0x7fe5c0000b90, addr=10.0.0.2, port=4420]
00:26:08.772  [2024-12-09 04:16:37.177520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.772  [2024-12-09 04:16:37.177560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.772  qpair failed and we were unable to recover it.
00:26:08.772  [2024-12-09 04:16:37.177711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.772  [2024-12-09 04:16:37.177749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.772  qpair failed and we were unable to recover it.
00:26:08.772  [2024-12-09 04:16:37.177896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.772  [2024-12-09 04:16:37.177934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.772  qpair failed and we were unable to recover it.
00:26:08.772  [2024-12-09 04:16:37.178112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.772  [2024-12-09 04:16:37.178150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.772  qpair failed and we were unable to recover it.
00:26:08.772  [2024-12-09 04:16:37.178262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.772  [2024-12-09 04:16:37.178338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.772  qpair failed and we were unable to recover it.
00:26:08.772  [2024-12-09 04:16:37.178494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.772  [2024-12-09 04:16:37.178534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.772  qpair failed and we were unable to recover it.
00:26:08.772  [2024-12-09 04:16:37.178691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.772  [2024-12-09 04:16:37.178732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.772  qpair failed and we were unable to recover it.
00:26:08.772  [2024-12-09 04:16:37.178906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.772  [2024-12-09 04:16:37.178945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.772  qpair failed and we were unable to recover it.
00:26:08.772  [2024-12-09 04:16:37.179060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.772  [2024-12-09 04:16:37.179099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.772  qpair failed and we were unable to recover it.
00:26:08.772  [2024-12-09 04:16:37.179256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.772  [2024-12-09 04:16:37.179310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.772  qpair failed and we were unable to recover it.
00:26:08.772  [2024-12-09 04:16:37.179427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.772  [2024-12-09 04:16:37.179465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.772  qpair failed and we were unable to recover it.
00:26:08.772  [2024-12-09 04:16:37.179627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.772  [2024-12-09 04:16:37.179664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.772  qpair failed and we were unable to recover it.
00:26:08.772  [2024-12-09 04:16:37.179862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.772  [2024-12-09 04:16:37.179897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.772  qpair failed and we were unable to recover it.
00:26:08.772  [2024-12-09 04:16:37.179993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.772  [2024-12-09 04:16:37.180028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.772  qpair failed and we were unable to recover it.
00:26:08.772  [2024-12-09 04:16:37.180174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.772  [2024-12-09 04:16:37.180210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.772  qpair failed and we were unable to recover it.
00:26:08.772  [2024-12-09 04:16:37.180347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.772  [2024-12-09 04:16:37.180388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.772  qpair failed and we were unable to recover it.
00:26:08.772  [2024-12-09 04:16:37.180537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.772  [2024-12-09 04:16:37.180574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.772  qpair failed and we were unable to recover it.
00:26:08.772  [2024-12-09 04:16:37.180728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.772  [2024-12-09 04:16:37.180765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.772  qpair failed and we were unable to recover it.
00:26:08.772  [2024-12-09 04:16:37.180941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.772  [2024-12-09 04:16:37.180979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.772  qpair failed and we were unable to recover it.
00:26:08.772  [2024-12-09 04:16:37.181096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.772  [2024-12-09 04:16:37.181135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.772  qpair failed and we were unable to recover it.
00:26:08.772  [2024-12-09 04:16:37.181265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.772  [2024-12-09 04:16:37.181314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.772  qpair failed and we were unable to recover it.
00:26:08.772  [2024-12-09 04:16:37.181440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.772  [2024-12-09 04:16:37.181478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.772  qpair failed and we were unable to recover it.
00:26:08.772  [2024-12-09 04:16:37.181630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.772  [2024-12-09 04:16:37.181667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.772  qpair failed and we were unable to recover it.
00:26:08.772  [2024-12-09 04:16:37.181812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.772  [2024-12-09 04:16:37.181868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.772  qpair failed and we were unable to recover it.
00:26:08.772  [2024-12-09 04:16:37.182027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.772  [2024-12-09 04:16:37.182069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.772  qpair failed and we were unable to recover it.
00:26:08.772  [2024-12-09 04:16:37.182254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.772  [2024-12-09 04:16:37.182349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.772  qpair failed and we were unable to recover it.
00:26:08.772  [2024-12-09 04:16:37.182525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.772  [2024-12-09 04:16:37.182585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.772  qpair failed and we were unable to recover it.
00:26:08.772  [2024-12-09 04:16:37.182752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.772  [2024-12-09 04:16:37.182791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.772  qpair failed and we were unable to recover it.
00:26:08.772  [2024-12-09 04:16:37.182961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.772  [2024-12-09 04:16:37.182998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.772  qpair failed and we were unable to recover it.
00:26:08.772  [2024-12-09 04:16:37.183147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.772  [2024-12-09 04:16:37.183185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.772  qpair failed and we were unable to recover it.
00:26:08.772  [2024-12-09 04:16:37.183329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.772  [2024-12-09 04:16:37.183385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.772  qpair failed and we were unable to recover it.
00:26:08.772  [2024-12-09 04:16:37.183519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.772  [2024-12-09 04:16:37.183558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.772  qpair failed and we were unable to recover it.
00:26:08.772  [2024-12-09 04:16:37.183670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.772  [2024-12-09 04:16:37.183707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.772  qpair failed and we were unable to recover it.
00:26:08.772  [2024-12-09 04:16:37.183861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.772  [2024-12-09 04:16:37.183905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.773  qpair failed and we were unable to recover it.
00:26:08.773  [2024-12-09 04:16:37.184069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.773  [2024-12-09 04:16:37.184112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.773  qpair failed and we were unable to recover it.
00:26:08.773  [2024-12-09 04:16:37.184256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.773  [2024-12-09 04:16:37.184313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.773  qpair failed and we were unable to recover it.
00:26:08.773  [2024-12-09 04:16:37.184469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.773  [2024-12-09 04:16:37.184507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.773  qpair failed and we were unable to recover it.
00:26:08.773  [2024-12-09 04:16:37.184660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.773  [2024-12-09 04:16:37.184697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.773  qpair failed and we were unable to recover it.
00:26:08.773  [2024-12-09 04:16:37.184846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.773  [2024-12-09 04:16:37.184884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.773  qpair failed and we were unable to recover it.
00:26:08.773  [2024-12-09 04:16:37.185006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.773  [2024-12-09 04:16:37.185062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.773  qpair failed and we were unable to recover it.
00:26:08.773  [2024-12-09 04:16:37.185250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.773  [2024-12-09 04:16:37.185297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.773  qpair failed and we were unable to recover it.
00:26:08.773  [2024-12-09 04:16:37.185451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.773  [2024-12-09 04:16:37.185491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.773  qpair failed and we were unable to recover it.
00:26:08.773  [2024-12-09 04:16:37.185622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.773  [2024-12-09 04:16:37.185661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.773  qpair failed and we were unable to recover it.
00:26:08.773  [2024-12-09 04:16:37.185821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.773  [2024-12-09 04:16:37.185862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.773  qpair failed and we were unable to recover it.
00:26:08.773  [2024-12-09 04:16:37.185982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.773  [2024-12-09 04:16:37.186021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.773  qpair failed and we were unable to recover it.
00:26:08.773  [2024-12-09 04:16:37.186185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.773  [2024-12-09 04:16:37.186224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.773  qpair failed and we were unable to recover it.
00:26:08.773  [2024-12-09 04:16:37.186365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.773  [2024-12-09 04:16:37.186407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.773  qpair failed and we were unable to recover it.
00:26:08.773  [2024-12-09 04:16:37.186572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.773  [2024-12-09 04:16:37.186611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.773  qpair failed and we were unable to recover it.
00:26:08.773  [2024-12-09 04:16:37.186757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.773  [2024-12-09 04:16:37.186803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.773  qpair failed and we were unable to recover it.
00:26:08.773  [2024-12-09 04:16:37.186926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.773  [2024-12-09 04:16:37.186965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.773  qpair failed and we were unable to recover it.
00:26:08.773  [2024-12-09 04:16:37.187131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.773  [2024-12-09 04:16:37.187173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.773  qpair failed and we were unable to recover it.
00:26:08.773  [2024-12-09 04:16:37.187311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.773  [2024-12-09 04:16:37.187353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.773  qpair failed and we were unable to recover it.
00:26:08.773  [2024-12-09 04:16:37.187507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.773  [2024-12-09 04:16:37.187547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.773  qpair failed and we were unable to recover it.
00:26:08.773  [2024-12-09 04:16:37.187667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.773  [2024-12-09 04:16:37.187707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.773  qpair failed and we were unable to recover it.
00:26:08.773  [2024-12-09 04:16:37.187895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.773  [2024-12-09 04:16:37.187936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.773  qpair failed and we were unable to recover it.
00:26:08.773  [2024-12-09 04:16:37.188084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.773  [2024-12-09 04:16:37.188130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.773  qpair failed and we were unable to recover it.
00:26:08.773  [2024-12-09 04:16:37.188295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.773  [2024-12-09 04:16:37.188336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.773  qpair failed and we were unable to recover it.
00:26:08.773  [2024-12-09 04:16:37.188530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.773  [2024-12-09 04:16:37.188574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.773  qpair failed and we were unable to recover it.
00:26:08.773  [2024-12-09 04:16:37.188731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.773  [2024-12-09 04:16:37.188770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.773  qpair failed and we were unable to recover it.
00:26:08.773  [2024-12-09 04:16:37.188889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.773  [2024-12-09 04:16:37.188928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.773  qpair failed and we were unable to recover it.
00:26:08.773  [2024-12-09 04:16:37.189088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.773  [2024-12-09 04:16:37.189128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.773  qpair failed and we were unable to recover it.
00:26:08.773  [2024-12-09 04:16:37.189281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.773  [2024-12-09 04:16:37.189321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.773  qpair failed and we were unable to recover it.
00:26:08.773  [2024-12-09 04:16:37.189509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.773  [2024-12-09 04:16:37.189549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.773  qpair failed and we were unable to recover it.
00:26:08.773  [2024-12-09 04:16:37.189768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.773  [2024-12-09 04:16:37.189850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.773  qpair failed and we were unable to recover it.
00:26:08.773  [2024-12-09 04:16:37.190031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.773  [2024-12-09 04:16:37.190073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.773  qpair failed and we were unable to recover it.
00:26:08.773  [2024-12-09 04:16:37.190231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.773  [2024-12-09 04:16:37.190281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.773  qpair failed and we were unable to recover it.
00:26:08.773  [2024-12-09 04:16:37.190411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.773  [2024-12-09 04:16:37.190449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.773  qpair failed and we were unable to recover it.
00:26:08.773  [2024-12-09 04:16:37.190602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.773  [2024-12-09 04:16:37.190640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.773  qpair failed and we were unable to recover it.
00:26:08.773  [2024-12-09 04:16:37.190826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.773  [2024-12-09 04:16:37.190865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.773  qpair failed and we were unable to recover it.
00:26:08.773  [2024-12-09 04:16:37.191003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.773  [2024-12-09 04:16:37.191042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.773  qpair failed and we were unable to recover it.
00:26:08.773  [2024-12-09 04:16:37.191237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.773  [2024-12-09 04:16:37.191338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.773  qpair failed and we were unable to recover it.
00:26:08.773  [2024-12-09 04:16:37.191497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.774  [2024-12-09 04:16:37.191536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.774  qpair failed and we were unable to recover it.
00:26:08.774  [2024-12-09 04:16:37.191659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.774  [2024-12-09 04:16:37.191698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.774  qpair failed and we were unable to recover it.
00:26:08.774  [2024-12-09 04:16:37.191886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.774  [2024-12-09 04:16:37.191932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.774  qpair failed and we were unable to recover it.
00:26:08.774  [2024-12-09 04:16:37.192118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.774  [2024-12-09 04:16:37.192181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.774  qpair failed and we were unable to recover it.
00:26:08.774  [2024-12-09 04:16:37.192317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.774  [2024-12-09 04:16:37.192356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.774  qpair failed and we were unable to recover it.
00:26:08.774  [2024-12-09 04:16:37.192548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.774  [2024-12-09 04:16:37.192589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.774  qpair failed and we were unable to recover it.
00:26:08.774  [2024-12-09 04:16:37.192727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.774  [2024-12-09 04:16:37.192784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.774  qpair failed and we were unable to recover it.
00:26:08.774  [2024-12-09 04:16:37.192986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.774  [2024-12-09 04:16:37.193028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.774  qpair failed and we were unable to recover it.
00:26:08.774  [2024-12-09 04:16:37.193152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.774  [2024-12-09 04:16:37.193193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.774  qpair failed and we were unable to recover it.
00:26:08.774  [2024-12-09 04:16:37.193366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.774  [2024-12-09 04:16:37.193401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.774  qpair failed and we were unable to recover it.
00:26:08.774  [2024-12-09 04:16:37.193544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.774  [2024-12-09 04:16:37.193598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.774  qpair failed and we were unable to recover it.
00:26:08.774  [previous three messages repeated 39 more times, 04:16:37.193730 to 04:16:37.201525: connect() failed, errno = 111; sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.]
00:26:08.775  [2024-12-09 04:16:37.201682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.775  [2024-12-09 04:16:37.201732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.775  qpair failed and we were unable to recover it.
00:26:08.775  [previous three messages repeated 29 more times, 04:16:37.201857 to 04:16:37.206794: connect() failed, errno = 111; sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.]
00:26:08.776  [2024-12-09 04:16:37.206908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.776  [2024-12-09 04:16:37.206944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.776  qpair failed and we were unable to recover it.
00:26:08.776  [previous three messages repeated 11 more times, 04:16:37.207058 to 04:16:37.208845: connect() failed, errno = 111; sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.]
00:26:08.776  [2024-12-09 04:16:37.208964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.776  [2024-12-09 04:16:37.208999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.776  qpair failed and we were unable to recover it.
00:26:08.776  [previous three messages repeated 11 more times, 04:16:37.209116 to 04:16:37.210753: connect() failed, errno = 111; sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.]
00:26:08.776  [2024-12-09 04:16:37.210969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.776  [2024-12-09 04:16:37.211017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.776  qpair failed and we were unable to recover it.
00:26:08.776  [previous three messages repeated 8 more times, 04:16:37.211162 to 04:16:37.212674: connect() failed, errno = 111; sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.]
00:26:08.777  [2024-12-09 04:16:37.212830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.777  [2024-12-09 04:16:37.212869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.777  qpair failed and we were unable to recover it.
00:26:08.777  [2024-12-09 04:16:37.213020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.777  [2024-12-09 04:16:37.213060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.777  qpair failed and we were unable to recover it.
00:26:08.777  [2024-12-09 04:16:37.213196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.777  [2024-12-09 04:16:37.213236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.777  qpair failed and we were unable to recover it.
00:26:08.777  [2024-12-09 04:16:37.213407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.777  [2024-12-09 04:16:37.213449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.777  qpair failed and we were unable to recover it.
00:26:08.777  [2024-12-09 04:16:37.213612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.777  [2024-12-09 04:16:37.213650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.777  qpair failed and we were unable to recover it.
00:26:08.777  [2024-12-09 04:16:37.213797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.777  [2024-12-09 04:16:37.213834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.777  qpair failed and we were unable to recover it.
00:26:08.777  [2024-12-09 04:16:37.213956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.777  [2024-12-09 04:16:37.213994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.777  qpair failed and we were unable to recover it.
00:26:08.777  [2024-12-09 04:16:37.214112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.777  [2024-12-09 04:16:37.214150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.777  qpair failed and we were unable to recover it.
00:26:08.777  [2024-12-09 04:16:37.214310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.777  [2024-12-09 04:16:37.214350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.777  qpair failed and we were unable to recover it.
00:26:08.777  [2024-12-09 04:16:37.214486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.777  [2024-12-09 04:16:37.214534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.777  qpair failed and we were unable to recover it.
00:26:08.777  [2024-12-09 04:16:37.214724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.777  [2024-12-09 04:16:37.214764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.777  qpair failed and we were unable to recover it.
00:26:08.777  [2024-12-09 04:16:37.214890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.777  [2024-12-09 04:16:37.214929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.777  qpair failed and we were unable to recover it.
00:26:08.777  [2024-12-09 04:16:37.215114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.777  [2024-12-09 04:16:37.215153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.777  qpair failed and we were unable to recover it.
00:26:08.777  [2024-12-09 04:16:37.215260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.777  [2024-12-09 04:16:37.215307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.777  qpair failed and we were unable to recover it.
00:26:08.777  [2024-12-09 04:16:37.215425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.777  [2024-12-09 04:16:37.215465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.777  qpair failed and we were unable to recover it.
00:26:08.777  [2024-12-09 04:16:37.215627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.777  [2024-12-09 04:16:37.215667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.777  qpair failed and we were unable to recover it.
00:26:08.777  [2024-12-09 04:16:37.215804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.777  [2024-12-09 04:16:37.215842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.777  qpair failed and we were unable to recover it.
00:26:08.777  [2024-12-09 04:16:37.215967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.777  [2024-12-09 04:16:37.216005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.777  qpair failed and we were unable to recover it.
00:26:08.777  [2024-12-09 04:16:37.216119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.777  [2024-12-09 04:16:37.216155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.777  qpair failed and we were unable to recover it.
00:26:08.777  [2024-12-09 04:16:37.216311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.777  [2024-12-09 04:16:37.216351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.777  qpair failed and we were unable to recover it.
00:26:08.777  [2024-12-09 04:16:37.216498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.777  [2024-12-09 04:16:37.216537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.777  qpair failed and we were unable to recover it.
00:26:08.777  [2024-12-09 04:16:37.216682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.777  [2024-12-09 04:16:37.216719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.777  qpair failed and we were unable to recover it.
00:26:08.777  [2024-12-09 04:16:37.216830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.777  [2024-12-09 04:16:37.216869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.777  qpair failed and we were unable to recover it.
00:26:08.777  [2024-12-09 04:16:37.217024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.777  [2024-12-09 04:16:37.217063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.777  qpair failed and we were unable to recover it.
00:26:08.777  [2024-12-09 04:16:37.217211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.777  [2024-12-09 04:16:37.217252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.777  qpair failed and we were unable to recover it.
00:26:08.777  [2024-12-09 04:16:37.217379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.777  [2024-12-09 04:16:37.217419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.777  qpair failed and we were unable to recover it.
00:26:08.777  [2024-12-09 04:16:37.217581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.777  [2024-12-09 04:16:37.217622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.777  qpair failed and we were unable to recover it.
00:26:08.777  [2024-12-09 04:16:37.217801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.777  [2024-12-09 04:16:37.217841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.777  qpair failed and we were unable to recover it.
00:26:08.777  [2024-12-09 04:16:37.217988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.777  [2024-12-09 04:16:37.218027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.777  qpair failed and we were unable to recover it.
00:26:08.777  [2024-12-09 04:16:37.218143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.777  [2024-12-09 04:16:37.218183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.777  qpair failed and we were unable to recover it.
00:26:08.777  [2024-12-09 04:16:37.218338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.777  [2024-12-09 04:16:37.218378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.777  qpair failed and we were unable to recover it.
00:26:08.777  [2024-12-09 04:16:37.218505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.778  [2024-12-09 04:16:37.218547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.778  qpair failed and we were unable to recover it.
00:26:08.778  [2024-12-09 04:16:37.218700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.778  [2024-12-09 04:16:37.218738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.778  qpair failed and we were unable to recover it.
00:26:08.778  [2024-12-09 04:16:37.218942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.778  [2024-12-09 04:16:37.218982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.778  qpair failed and we were unable to recover it.
00:26:08.778  [2024-12-09 04:16:37.219141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.778  [2024-12-09 04:16:37.219181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.778  qpair failed and we were unable to recover it.
00:26:08.778  [2024-12-09 04:16:37.219301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.778  [2024-12-09 04:16:37.219342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.778  qpair failed and we were unable to recover it.
00:26:08.778  [2024-12-09 04:16:37.219465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.778  [2024-12-09 04:16:37.219508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.778  qpair failed and we were unable to recover it.
00:26:08.778  [2024-12-09 04:16:37.219683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.778  [2024-12-09 04:16:37.219722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.778  qpair failed and we were unable to recover it.
00:26:08.778  [2024-12-09 04:16:37.219846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.778  [2024-12-09 04:16:37.219884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.778  qpair failed and we were unable to recover it.
00:26:08.778  [2024-12-09 04:16:37.220072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.778  [2024-12-09 04:16:37.220111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.778  qpair failed and we were unable to recover it.
00:26:08.778  [2024-12-09 04:16:37.220269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.778  [2024-12-09 04:16:37.220329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.778  qpair failed and we were unable to recover it.
00:26:08.778  [2024-12-09 04:16:37.220451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.778  [2024-12-09 04:16:37.220489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.778  qpair failed and we were unable to recover it.
00:26:08.778  [2024-12-09 04:16:37.220603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.778  [2024-12-09 04:16:37.220642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.778  qpair failed and we were unable to recover it.
00:26:08.778  [2024-12-09 04:16:37.220807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.778  [2024-12-09 04:16:37.220845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.778  qpair failed and we were unable to recover it.
00:26:08.778  [2024-12-09 04:16:37.221035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.778  [2024-12-09 04:16:37.221073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.778  qpair failed and we were unable to recover it.
00:26:08.778  [2024-12-09 04:16:37.221186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.778  [2024-12-09 04:16:37.221224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.778  qpair failed and we were unable to recover it.
00:26:08.778  [2024-12-09 04:16:37.221391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.778  [2024-12-09 04:16:37.221430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.778  qpair failed and we were unable to recover it.
00:26:08.778  [2024-12-09 04:16:37.221538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.778  [2024-12-09 04:16:37.221577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.778  qpair failed and we were unable to recover it.
00:26:08.778  [2024-12-09 04:16:37.221697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.778  [2024-12-09 04:16:37.221736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.778  qpair failed and we were unable to recover it.
00:26:08.778  [2024-12-09 04:16:37.221858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.778  [2024-12-09 04:16:37.221903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.778  qpair failed and we were unable to recover it.
00:26:08.778  [2024-12-09 04:16:37.222048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.778  [2024-12-09 04:16:37.222087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.778  qpair failed and we were unable to recover it.
00:26:08.778  [2024-12-09 04:16:37.222208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.778  [2024-12-09 04:16:37.222250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.778  qpair failed and we were unable to recover it.
00:26:08.778  [2024-12-09 04:16:37.222423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.778  [2024-12-09 04:16:37.222464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.778  qpair failed and we were unable to recover it.
00:26:08.778  [2024-12-09 04:16:37.222620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.778  [2024-12-09 04:16:37.222659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.778  qpair failed and we were unable to recover it.
00:26:08.778  [2024-12-09 04:16:37.222845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.778  [2024-12-09 04:16:37.222890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.778  qpair failed and we were unable to recover it.
00:26:08.778  [2024-12-09 04:16:37.223047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.778  [2024-12-09 04:16:37.223087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.778  qpair failed and we were unable to recover it.
00:26:08.778  [2024-12-09 04:16:37.223251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.778  [2024-12-09 04:16:37.223306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.778  qpair failed and we were unable to recover it.
00:26:08.778  [2024-12-09 04:16:37.223492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.778  [2024-12-09 04:16:37.223533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.778  qpair failed and we were unable to recover it.
00:26:08.778  [2024-12-09 04:16:37.223693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.778  [2024-12-09 04:16:37.223733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.778  qpair failed and we were unable to recover it.
00:26:08.778  [2024-12-09 04:16:37.223865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.778  [2024-12-09 04:16:37.223906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.778  qpair failed and we were unable to recover it.
00:26:08.778  [2024-12-09 04:16:37.224097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.778  [2024-12-09 04:16:37.224138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.778  qpair failed and we were unable to recover it.
00:26:08.778  [2024-12-09 04:16:37.224313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.778  [2024-12-09 04:16:37.224356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.778  qpair failed and we were unable to recover it.
00:26:08.778  [2024-12-09 04:16:37.224561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.778  [2024-12-09 04:16:37.224602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.778  qpair failed and we were unable to recover it.
00:26:08.778  [2024-12-09 04:16:37.224739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.778  [2024-12-09 04:16:37.224782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.778  qpair failed and we were unable to recover it.
00:26:08.778  [2024-12-09 04:16:37.224940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.778  [2024-12-09 04:16:37.224982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.778  qpair failed and we were unable to recover it.
00:26:08.778  [2024-12-09 04:16:37.225149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.778  [2024-12-09 04:16:37.225190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.778  qpair failed and we were unable to recover it.
00:26:08.778  [2024-12-09 04:16:37.225308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.779  [2024-12-09 04:16:37.225351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.779  qpair failed and we were unable to recover it.
00:26:08.779  [2024-12-09 04:16:37.225546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.779  [2024-12-09 04:16:37.225588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.779  qpair failed and we were unable to recover it.
00:26:08.779  [2024-12-09 04:16:37.225772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.779  [2024-12-09 04:16:37.225816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.779  qpair failed and we were unable to recover it.
00:26:08.779  [2024-12-09 04:16:37.225987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.779  [2024-12-09 04:16:37.226027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.779  qpair failed and we were unable to recover it.
00:26:08.779  [2024-12-09 04:16:37.226219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.779  [2024-12-09 04:16:37.226259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.779  qpair failed and we were unable to recover it.
00:26:08.779  [2024-12-09 04:16:37.226428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.779  [2024-12-09 04:16:37.226467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.779  qpair failed and we were unable to recover it.
00:26:08.779  [2024-12-09 04:16:37.226630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.779  [2024-12-09 04:16:37.226671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.779  qpair failed and we were unable to recover it.
00:26:08.779  [2024-12-09 04:16:37.226864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.779  [2024-12-09 04:16:37.226905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.779  qpair failed and we were unable to recover it.
00:26:08.779  [2024-12-09 04:16:37.227064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.779  [2024-12-09 04:16:37.227104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.779  qpair failed and we were unable to recover it.
00:26:08.779  [2024-12-09 04:16:37.227289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.779  [2024-12-09 04:16:37.227333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.779  qpair failed and we were unable to recover it.
00:26:08.779  [2024-12-09 04:16:37.227513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.779  [2024-12-09 04:16:37.227556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.779  qpair failed and we were unable to recover it.
00:26:08.779  [2024-12-09 04:16:37.227731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.779  [2024-12-09 04:16:37.227772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.779  qpair failed and we were unable to recover it.
00:26:08.779  [2024-12-09 04:16:37.227913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.779  [2024-12-09 04:16:37.227956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.779  qpair failed and we were unable to recover it.
00:26:08.779  [2024-12-09 04:16:37.228115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.779  [2024-12-09 04:16:37.228155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.779  qpair failed and we were unable to recover it.
00:26:08.779  [2024-12-09 04:16:37.228309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.779  [2024-12-09 04:16:37.228349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.779  qpair failed and we were unable to recover it.
00:26:08.779  [2024-12-09 04:16:37.228496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.779  [2024-12-09 04:16:37.228535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.779  qpair failed and we were unable to recover it.
00:26:08.779  [… the connect()/qpair error triple above repeats 102 more times, timestamps 04:16:37.228721 through 04:16:37.249953 (wall clock 00:26:08.779–00:26:08.782), involving both tqpair 0x7fe5c0000b90 and tqpair 0x7fe5b4000b90; every attempt fails with errno = 111 …]
00:26:08.782  [2024-12-09 04:16:37.250133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.782  [2024-12-09 04:16:37.250177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.782  qpair failed and we were unable to recover it.
00:26:08.782  [2024-12-09 04:16:37.250309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.782  [2024-12-09 04:16:37.250353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.782  qpair failed and we were unable to recover it.
00:26:08.782  [2024-12-09 04:16:37.250551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.782  [2024-12-09 04:16:37.250601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.782  qpair failed and we were unable to recover it.
00:26:08.782  [2024-12-09 04:16:37.250772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.782  [2024-12-09 04:16:37.250816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.782  qpair failed and we were unable to recover it.
00:26:08.782  [2024-12-09 04:16:37.250985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.782  [2024-12-09 04:16:37.251027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.782  qpair failed and we were unable to recover it.
00:26:08.782  [2024-12-09 04:16:37.251163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.782  [2024-12-09 04:16:37.251207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.782  qpair failed and we were unable to recover it.
00:26:08.782  [2024-12-09 04:16:37.251351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.782  [2024-12-09 04:16:37.251395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.782  qpair failed and we were unable to recover it.
00:26:08.782  [2024-12-09 04:16:37.251535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.782  [2024-12-09 04:16:37.251581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.782  qpair failed and we were unable to recover it.
00:26:08.782  [2024-12-09 04:16:37.251755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.782  [2024-12-09 04:16:37.251801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.782  qpair failed and we were unable to recover it.
00:26:08.782  [2024-12-09 04:16:37.251977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.782  [2024-12-09 04:16:37.252025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.782  qpair failed and we were unable to recover it.
00:26:08.782  [2024-12-09 04:16:37.252203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.782  [2024-12-09 04:16:37.252247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.782  qpair failed and we were unable to recover it.
00:26:08.782  [2024-12-09 04:16:37.252403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.782  [2024-12-09 04:16:37.252447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.782  qpair failed and we were unable to recover it.
00:26:08.782  [2024-12-09 04:16:37.252617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.782  [2024-12-09 04:16:37.252669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.782  qpair failed and we were unable to recover it.
00:26:08.782  [2024-12-09 04:16:37.252812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.782  [2024-12-09 04:16:37.252855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.782  qpair failed and we were unable to recover it.
00:26:08.782  [2024-12-09 04:16:37.253033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.782  [2024-12-09 04:16:37.253076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.782  qpair failed and we were unable to recover it.
00:26:08.782  [2024-12-09 04:16:37.253257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.782  [2024-12-09 04:16:37.253311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.782  qpair failed and we were unable to recover it.
00:26:08.782  [2024-12-09 04:16:37.253495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.782  [2024-12-09 04:16:37.253540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.782  qpair failed and we were unable to recover it.
00:26:08.782  [2024-12-09 04:16:37.253663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.782  [2024-12-09 04:16:37.253707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.782  qpair failed and we were unable to recover it.
00:26:08.782  [2024-12-09 04:16:37.253909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.782  [2024-12-09 04:16:37.253952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.782  qpair failed and we were unable to recover it.
00:26:08.782  [2024-12-09 04:16:37.254159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.782  [2024-12-09 04:16:37.254201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.782  qpair failed and we were unable to recover it.
00:26:08.782  [2024-12-09 04:16:37.254390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.782  [2024-12-09 04:16:37.254435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.782  qpair failed and we were unable to recover it.
00:26:08.782  [2024-12-09 04:16:37.254602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.782  [2024-12-09 04:16:37.254645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.782  qpair failed and we were unable to recover it.
00:26:08.782  [2024-12-09 04:16:37.254806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.782  [2024-12-09 04:16:37.254859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.782  qpair failed and we were unable to recover it.
00:26:08.782  [2024-12-09 04:16:37.255030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.782  [2024-12-09 04:16:37.255082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.782  qpair failed and we were unable to recover it.
00:26:08.782  [2024-12-09 04:16:37.255266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.782  [2024-12-09 04:16:37.255321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.782  qpair failed and we were unable to recover it.
00:26:08.782  [2024-12-09 04:16:37.255493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.782  [2024-12-09 04:16:37.255535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.782  qpair failed and we were unable to recover it.
00:26:08.782  [2024-12-09 04:16:37.255732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.782  [2024-12-09 04:16:37.255780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.782  qpair failed and we were unable to recover it.
00:26:08.782  [2024-12-09 04:16:37.255954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.782  [2024-12-09 04:16:37.255998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.782  qpair failed and we were unable to recover it.
00:26:08.782  [2024-12-09 04:16:37.256171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.782  [2024-12-09 04:16:37.256214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.782  qpair failed and we were unable to recover it.
00:26:08.782  [2024-12-09 04:16:37.256433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.782  [2024-12-09 04:16:37.256478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.782  qpair failed and we were unable to recover it.
00:26:08.782  [2024-12-09 04:16:37.256683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.782  [2024-12-09 04:16:37.256726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.782  qpair failed and we were unable to recover it.
00:26:08.783  [2024-12-09 04:16:37.256895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.783  [2024-12-09 04:16:37.256939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.783  qpair failed and we were unable to recover it.
00:26:08.783  [2024-12-09 04:16:37.257101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.783  [2024-12-09 04:16:37.257144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.783  qpair failed and we were unable to recover it.
00:26:08.783  [2024-12-09 04:16:37.257306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.783  [2024-12-09 04:16:37.257387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.783  qpair failed and we were unable to recover it.
00:26:08.783  [2024-12-09 04:16:37.257552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.783  [2024-12-09 04:16:37.257626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.783  qpair failed and we were unable to recover it.
00:26:08.783  [2024-12-09 04:16:37.257804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.783  [2024-12-09 04:16:37.257871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.783  qpair failed and we were unable to recover it.
00:26:08.783  [2024-12-09 04:16:37.258091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.783  [2024-12-09 04:16:37.258155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.783  qpair failed and we were unable to recover it.
00:26:08.783  [2024-12-09 04:16:37.258378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.783  [2024-12-09 04:16:37.258444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.783  qpair failed and we were unable to recover it.
00:26:08.783  [2024-12-09 04:16:37.258581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.783  [2024-12-09 04:16:37.258653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.783  qpair failed and we were unable to recover it.
00:26:08.783  [2024-12-09 04:16:37.258802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.783  [2024-12-09 04:16:37.258845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.783  qpair failed and we were unable to recover it.
00:26:08.783  [2024-12-09 04:16:37.259006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.783  [2024-12-09 04:16:37.259048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.783  qpair failed and we were unable to recover it.
00:26:08.783  [2024-12-09 04:16:37.259168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.783  [2024-12-09 04:16:37.259211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.783  qpair failed and we were unable to recover it.
00:26:08.783  [2024-12-09 04:16:37.259393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.783  [2024-12-09 04:16:37.259437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.783  qpair failed and we were unable to recover it.
00:26:08.783  [2024-12-09 04:16:37.259604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.783  [2024-12-09 04:16:37.259646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.783  qpair failed and we were unable to recover it.
00:26:08.783  [2024-12-09 04:16:37.259815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.783  [2024-12-09 04:16:37.259858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.783  qpair failed and we were unable to recover it.
00:26:08.783  [2024-12-09 04:16:37.260069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.783  [2024-12-09 04:16:37.260110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.783  qpair failed and we were unable to recover it.
00:26:08.783  [2024-12-09 04:16:37.260245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.783  [2024-12-09 04:16:37.260302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.783  qpair failed and we were unable to recover it.
00:26:08.783  [2024-12-09 04:16:37.260438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.783  [2024-12-09 04:16:37.260483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.783  qpair failed and we were unable to recover it.
00:26:08.783  [2024-12-09 04:16:37.260638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.783  [2024-12-09 04:16:37.260682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.783  qpair failed and we were unable to recover it.
00:26:08.783  [2024-12-09 04:16:37.260846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.783  [2024-12-09 04:16:37.260888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.783  qpair failed and we were unable to recover it.
00:26:08.783  [2024-12-09 04:16:37.261054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.783  [2024-12-09 04:16:37.261097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.783  qpair failed and we were unable to recover it.
00:26:08.783  [2024-12-09 04:16:37.261260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.783  [2024-12-09 04:16:37.261316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.783  qpair failed and we were unable to recover it.
00:26:08.783  [2024-12-09 04:16:37.261479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.783  [2024-12-09 04:16:37.261529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.783  qpair failed and we were unable to recover it.
00:26:08.783  [2024-12-09 04:16:37.261694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.783  [2024-12-09 04:16:37.261738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.783  qpair failed and we were unable to recover it.
00:26:08.783  [2024-12-09 04:16:37.261905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.783  [2024-12-09 04:16:37.261947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.783  qpair failed and we were unable to recover it.
00:26:08.783  [2024-12-09 04:16:37.262089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.783  [2024-12-09 04:16:37.262131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.783  qpair failed and we were unable to recover it.
00:26:08.783  [2024-12-09 04:16:37.262253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.783  [2024-12-09 04:16:37.262312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.783  qpair failed and we were unable to recover it.
00:26:08.783  [2024-12-09 04:16:37.262438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.783  [2024-12-09 04:16:37.262480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.783  qpair failed and we were unable to recover it.
00:26:08.783  [2024-12-09 04:16:37.262648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.783  [2024-12-09 04:16:37.262690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.783  qpair failed and we were unable to recover it.
00:26:08.783  [2024-12-09 04:16:37.262896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.783  [2024-12-09 04:16:37.262939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.783  qpair failed and we were unable to recover it.
00:26:08.783  [2024-12-09 04:16:37.263143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.783  [2024-12-09 04:16:37.263185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.783  qpair failed and we were unable to recover it.
00:26:08.783  [2024-12-09 04:16:37.263379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.783  [2024-12-09 04:16:37.263422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.783  qpair failed and we were unable to recover it.
00:26:08.783  [2024-12-09 04:16:37.263579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.783  [2024-12-09 04:16:37.263621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.783  qpair failed and we were unable to recover it.
00:26:08.783  [2024-12-09 04:16:37.263762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.783  [2024-12-09 04:16:37.263804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.263922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.784  [2024-12-09 04:16:37.263965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.264166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.784  [2024-12-09 04:16:37.264208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.264362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.784  [2024-12-09 04:16:37.264407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.264574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.784  [2024-12-09 04:16:37.264617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.264746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.784  [2024-12-09 04:16:37.264801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.264944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.784  [2024-12-09 04:16:37.264990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.265135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.784  [2024-12-09 04:16:37.265179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.265352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.784  [2024-12-09 04:16:37.265397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.265535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.784  [2024-12-09 04:16:37.265580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.265719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.784  [2024-12-09 04:16:37.265764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.265892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.784  [2024-12-09 04:16:37.265935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.266098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.784  [2024-12-09 04:16:37.266141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.266342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.784  [2024-12-09 04:16:37.266395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.266568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.784  [2024-12-09 04:16:37.266611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.266785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.784  [2024-12-09 04:16:37.266828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.266985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.784  [2024-12-09 04:16:37.267030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.267238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.784  [2024-12-09 04:16:37.267295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.267465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.784  [2024-12-09 04:16:37.267509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.267709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.784  [2024-12-09 04:16:37.267756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.267957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.784  [2024-12-09 04:16:37.268006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.268199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.784  [2024-12-09 04:16:37.268244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.268434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.784  [2024-12-09 04:16:37.268480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.268611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.784  [2024-12-09 04:16:37.268657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.268834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.784  [2024-12-09 04:16:37.268879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.269060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.784  [2024-12-09 04:16:37.269115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.269308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.784  [2024-12-09 04:16:37.269356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.269581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.784  [2024-12-09 04:16:37.269628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.269810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.784  [2024-12-09 04:16:37.269859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.270053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.784  [2024-12-09 04:16:37.270108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.270293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.784  [2024-12-09 04:16:37.270342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.270509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.784  [2024-12-09 04:16:37.270561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.270764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.784  [2024-12-09 04:16:37.270814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.270995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.784  [2024-12-09 04:16:37.271042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.271233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.784  [2024-12-09 04:16:37.271310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.271463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.784  [2024-12-09 04:16:37.271506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.271709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.784  [2024-12-09 04:16:37.271751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.271913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.784  [2024-12-09 04:16:37.271956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.272095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.784  [2024-12-09 04:16:37.272138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.272266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.784  [2024-12-09 04:16:37.272322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.272448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.784  [2024-12-09 04:16:37.272491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.784  qpair failed and we were unable to recover it.
00:26:08.784  [2024-12-09 04:16:37.272624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.272670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.272839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.272883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.273069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.273113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.273315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.273359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.273527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.273570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.273696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.273738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.273937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.273979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.274130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.274173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.274345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.274389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.274537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.274598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.274743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.274789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.274925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.274970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.275103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.275151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.275330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.275395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.275557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.275601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.275813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.275857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.275982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.276026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.276229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.276284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.276459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.276503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.276714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.276777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.277033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.277097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.277295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.277341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.277483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.277528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.277728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.277771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.277929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.277976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.278151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.278197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.278364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.278413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.278560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.278605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.278786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.278838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.279020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.279065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.279203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.279249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.279440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.279486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.279666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.279712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.279886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.279931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.280114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.280161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.280311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.280390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.280568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.280646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.280846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.280893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.281038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.281084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.281255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.281346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.281545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.281595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.785  qpair failed and we were unable to recover it.
00:26:08.785  [2024-12-09 04:16:37.281781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.785  [2024-12-09 04:16:37.281830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.786  qpair failed and we were unable to recover it.
00:26:08.786  [2024-12-09 04:16:37.282027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.786  [2024-12-09 04:16:37.282076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.786  qpair failed and we were unable to recover it.
00:26:08.786  [2024-12-09 04:16:37.282266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.786  [2024-12-09 04:16:37.282328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.786  qpair failed and we were unable to recover it.
00:26:08.786  [2024-12-09 04:16:37.282505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.786  [2024-12-09 04:16:37.282552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.786  qpair failed and we were unable to recover it.
00:26:08.786  [2024-12-09 04:16:37.282710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.786  [2024-12-09 04:16:37.282758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.786  qpair failed and we were unable to recover it.
00:26:08.786  [2024-12-09 04:16:37.282964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.786  [2024-12-09 04:16:37.283009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.786  qpair failed and we were unable to recover it.
00:26:08.786  [2024-12-09 04:16:37.283146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.786  [2024-12-09 04:16:37.283192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.786  qpair failed and we were unable to recover it.
00:26:08.786  [2024-12-09 04:16:37.283388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.786  [2024-12-09 04:16:37.283437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.786  qpair failed and we were unable to recover it.
00:26:08.786  [2024-12-09 04:16:37.283593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.786  [2024-12-09 04:16:37.283640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.786  qpair failed and we were unable to recover it.
00:26:08.786  [2024-12-09 04:16:37.283876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.786  [2024-12-09 04:16:37.283922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.786  qpair failed and we were unable to recover it.
00:26:08.786  [2024-12-09 04:16:37.284066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.786  [2024-12-09 04:16:37.284111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.786  qpair failed and we were unable to recover it.
00:26:08.786  [2024-12-09 04:16:37.284259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.786  [2024-12-09 04:16:37.284317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.786  qpair failed and we were unable to recover it.
00:26:08.786  [2024-12-09 04:16:37.284507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.786  [2024-12-09 04:16:37.284554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.786  qpair failed and we were unable to recover it.
00:26:08.786  [2024-12-09 04:16:37.284731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.786  [2024-12-09 04:16:37.284777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.786  qpair failed and we were unable to recover it.
00:26:08.786  [2024-12-09 04:16:37.284963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.786  [2024-12-09 04:16:37.285010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.786  qpair failed and we were unable to recover it.
00:26:08.786  [2024-12-09 04:16:37.285158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.786  [2024-12-09 04:16:37.285204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.786  qpair failed and we were unable to recover it.
00:26:08.786  [2024-12-09 04:16:37.285360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.786  [2024-12-09 04:16:37.285443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.786  qpair failed and we were unable to recover it.
00:26:08.786  [2024-12-09 04:16:37.285616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.786  [2024-12-09 04:16:37.285662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.786  qpair failed and we were unable to recover it.
00:26:08.786  [2024-12-09 04:16:37.285796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.786  [2024-12-09 04:16:37.285842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.786  qpair failed and we were unable to recover it.
00:26:08.786  [2024-12-09 04:16:37.286011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.786  [2024-12-09 04:16:37.286056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.786  qpair failed and we were unable to recover it.
00:26:08.786  [2024-12-09 04:16:37.286234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.786  [2024-12-09 04:16:37.286294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.786  qpair failed and we were unable to recover it.
00:26:08.786  [2024-12-09 04:16:37.286473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.786  [2024-12-09 04:16:37.286518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.786  qpair failed and we were unable to recover it.
00:26:08.786  [2024-12-09 04:16:37.286692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.786  [2024-12-09 04:16:37.286738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.786  qpair failed and we were unable to recover it.
00:26:08.786  [2024-12-09 04:16:37.286869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.786  [2024-12-09 04:16:37.286914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.786  qpair failed and we were unable to recover it.
00:26:08.786  [2024-12-09 04:16:37.287048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.786  [2024-12-09 04:16:37.287093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.786  qpair failed and we were unable to recover it.
00:26:08.786  [2024-12-09 04:16:37.287236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.786  [2024-12-09 04:16:37.287298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.786  qpair failed and we were unable to recover it.
00:26:08.786  [2024-12-09 04:16:37.287452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.786  [2024-12-09 04:16:37.287498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.786  qpair failed and we were unable to recover it.
00:26:08.786  [2024-12-09 04:16:37.287655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.786  [2024-12-09 04:16:37.287707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.786  qpair failed and we were unable to recover it.
00:26:08.786  [2024-12-09 04:16:37.287854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.786  [2024-12-09 04:16:37.287900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.786  qpair failed and we were unable to recover it.
00:26:09.065  [... 102 identical retry cycles elided: posix.c:1054:posix_sock_create "connect() failed, errno = 111" followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock "sock connection error" and "qpair failed and we were unable to recover it", repeating from 04:16:37.288038 through 04:16:37.310591, alternating between tqpair=0x7fe5c0000b90 and tqpair=0x7fe5b4000b90, always with addr=10.0.0.2, port=4420 ...]
00:26:09.068  [2024-12-09 04:16:37.310773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.068  [2024-12-09 04:16:37.310822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.068  qpair failed and we were unable to recover it.
00:26:09.068  [2024-12-09 04:16:37.311056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.068  [2024-12-09 04:16:37.311104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.068  qpair failed and we were unable to recover it.
00:26:09.068  [2024-12-09 04:16:37.311366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.068  [2024-12-09 04:16:37.311439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.068  qpair failed and we were unable to recover it.
00:26:09.069  [previous three log lines repeated 89 more times, 04:16:37.311668 through 04:16:37.333091, tqpair=0x7fe5c0000b90, all with errno = 111]
00:26:09.070  [2024-12-09 04:16:37.333301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.070  [2024-12-09 04:16:37.333363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.070  qpair failed and we were unable to recover it.
00:26:09.070  [2024-12-09 04:16:37.333602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.070  [2024-12-09 04:16:37.333653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.070  qpair failed and we were unable to recover it.
00:26:09.070  [2024-12-09 04:16:37.333829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.070  [2024-12-09 04:16:37.333879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.070  qpair failed and we were unable to recover it.
00:26:09.070  [2024-12-09 04:16:37.334038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.070  [2024-12-09 04:16:37.334088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.070  qpair failed and we were unable to recover it.
00:26:09.070  [2024-12-09 04:16:37.334295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.070  [2024-12-09 04:16:37.334359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.070  qpair failed and we were unable to recover it.
00:26:09.070  [2024-12-09 04:16:37.334599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.070  [2024-12-09 04:16:37.334650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.070  qpair failed and we were unable to recover it.
00:26:09.070  [2024-12-09 04:16:37.334813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.071  [2024-12-09 04:16:37.334863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.071  qpair failed and we were unable to recover it.
00:26:09.071  [2024-12-09 04:16:37.335016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.071  [2024-12-09 04:16:37.335068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.071  qpair failed and we were unable to recover it.
00:26:09.071  [2024-12-09 04:16:37.335285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.071  [2024-12-09 04:16:37.335338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.071  qpair failed and we were unable to recover it.
00:26:09.071  [2024-12-09 04:16:37.335585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.071  [2024-12-09 04:16:37.335635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.071  qpair failed and we were unable to recover it.
00:26:09.071  [2024-12-09 04:16:37.335834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.071  [2024-12-09 04:16:37.335885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.071  qpair failed and we were unable to recover it.
00:26:09.071  [2024-12-09 04:16:37.336115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.071  [2024-12-09 04:16:37.336172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.071  qpair failed and we were unable to recover it.
00:26:09.071  [2024-12-09 04:16:37.336329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.071  [2024-12-09 04:16:37.336380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.071  qpair failed and we were unable to recover it.
00:26:09.071  [2024-12-09 04:16:37.336569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.071  [2024-12-09 04:16:37.336620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.071  qpair failed and we were unable to recover it.
00:26:09.071  [2024-12-09 04:16:37.336791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.071  [2024-12-09 04:16:37.336842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.071  qpair failed and we were unable to recover it.
00:26:09.071  [2024-12-09 04:16:37.336985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.071  [2024-12-09 04:16:37.337035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.071  qpair failed and we were unable to recover it.
00:26:09.071  [2024-12-09 04:16:37.337286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.071  [2024-12-09 04:16:37.337339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.071  qpair failed and we were unable to recover it.
00:26:09.071  [2024-12-09 04:16:37.337590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.071  [2024-12-09 04:16:37.337641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.071  qpair failed and we were unable to recover it.
00:26:09.071  [2024-12-09 04:16:37.337844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.071  [2024-12-09 04:16:37.337895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.071  qpair failed and we were unable to recover it.
00:26:09.071  [2024-12-09 04:16:37.338085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.071  [2024-12-09 04:16:37.338142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.071  qpair failed and we were unable to recover it.
00:26:09.071  [2024-12-09 04:16:37.338322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.071  [2024-12-09 04:16:37.338377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.071  qpair failed and we were unable to recover it.
00:26:09.071  [2024-12-09 04:16:37.338556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.071  [2024-12-09 04:16:37.338608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.071  qpair failed and we were unable to recover it.
00:26:09.071  [2024-12-09 04:16:37.338759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.071  [2024-12-09 04:16:37.338810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.071  qpair failed and we were unable to recover it.
00:26:09.071  [2024-12-09 04:16:37.339041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.071  [2024-12-09 04:16:37.339093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.071  qpair failed and we were unable to recover it.
00:26:09.071  [2024-12-09 04:16:37.339241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.071  [2024-12-09 04:16:37.339307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.071  qpair failed and we were unable to recover it.
00:26:09.071  [2024-12-09 04:16:37.339514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.071  [2024-12-09 04:16:37.339565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.071  qpair failed and we were unable to recover it.
00:26:09.071  [2024-12-09 04:16:37.339810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.071  [2024-12-09 04:16:37.339861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.071  qpair failed and we were unable to recover it.
00:26:09.071  [2024-12-09 04:16:37.340023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.071  [2024-12-09 04:16:37.340077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.071  qpair failed and we were unable to recover it.
00:26:09.071  [2024-12-09 04:16:37.340290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.071  [2024-12-09 04:16:37.340342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.071  qpair failed and we were unable to recover it.
00:26:09.071  [2024-12-09 04:16:37.340551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.071  [2024-12-09 04:16:37.340602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.071  qpair failed and we were unable to recover it.
00:26:09.071  [2024-12-09 04:16:37.340796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.071  [2024-12-09 04:16:37.340847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.071  qpair failed and we were unable to recover it.
00:26:09.071  [2024-12-09 04:16:37.341010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.071  [2024-12-09 04:16:37.341062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.071  qpair failed and we were unable to recover it.
00:26:09.071  [2024-12-09 04:16:37.341248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.071  [2024-12-09 04:16:37.341311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.071  qpair failed and we were unable to recover it.
00:26:09.071  [2024-12-09 04:16:37.341524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.071  [2024-12-09 04:16:37.341579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.071  qpair failed and we were unable to recover it.
00:26:09.071  [2024-12-09 04:16:37.341789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.071  [2024-12-09 04:16:37.341844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.071  qpair failed and we were unable to recover it.
00:26:09.071  [2024-12-09 04:16:37.342046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.071  [2024-12-09 04:16:37.342101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.071  qpair failed and we were unable to recover it.
00:26:09.071  [2024-12-09 04:16:37.342356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.071  [2024-12-09 04:16:37.342414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.071  qpair failed and we were unable to recover it.
00:26:09.071  [2024-12-09 04:16:37.342666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.071  [2024-12-09 04:16:37.342721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.071  qpair failed and we were unable to recover it.
00:26:09.071  [2024-12-09 04:16:37.342975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.071  [2024-12-09 04:16:37.343026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.071  qpair failed and we were unable to recover it.
00:26:09.071  [2024-12-09 04:16:37.343228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.071  [2024-12-09 04:16:37.343292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.071  qpair failed and we were unable to recover it.
00:26:09.071  [2024-12-09 04:16:37.343510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.071  [2024-12-09 04:16:37.343563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.071  qpair failed and we were unable to recover it.
00:26:09.071  [2024-12-09 04:16:37.343776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.071  [2024-12-09 04:16:37.343827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.071  qpair failed and we were unable to recover it.
00:26:09.071  [2024-12-09 04:16:37.344017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.071  [2024-12-09 04:16:37.344068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.071  qpair failed and we were unable to recover it.
00:26:09.071  [2024-12-09 04:16:37.344233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.071  [2024-12-09 04:16:37.344296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.071  qpair failed and we were unable to recover it.
00:26:09.072  [2024-12-09 04:16:37.344496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.072  [2024-12-09 04:16:37.344556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.072  qpair failed and we were unable to recover it.
00:26:09.072  [2024-12-09 04:16:37.344762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.072  [2024-12-09 04:16:37.344814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.072  qpair failed and we were unable to recover it.
00:26:09.072  [2024-12-09 04:16:37.345061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.072  [2024-12-09 04:16:37.345111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.072  qpair failed and we were unable to recover it.
00:26:09.072  [2024-12-09 04:16:37.345299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.072  [2024-12-09 04:16:37.345351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.072  qpair failed and we were unable to recover it.
00:26:09.072  [2024-12-09 04:16:37.345506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.072  [2024-12-09 04:16:37.345556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.072  qpair failed and we were unable to recover it.
00:26:09.072  [2024-12-09 04:16:37.345730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.072  [2024-12-09 04:16:37.345782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.072  qpair failed and we were unable to recover it.
00:26:09.072  [2024-12-09 04:16:37.345988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.072  [2024-12-09 04:16:37.346039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.072  qpair failed and we were unable to recover it.
00:26:09.072  [2024-12-09 04:16:37.346259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.072  [2024-12-09 04:16:37.346342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.072  qpair failed and we were unable to recover it.
00:26:09.072  [2024-12-09 04:16:37.346548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.072  [2024-12-09 04:16:37.346599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.072  qpair failed and we were unable to recover it.
00:26:09.072  [2024-12-09 04:16:37.346804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.072  [2024-12-09 04:16:37.346857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.072  qpair failed and we were unable to recover it.
00:26:09.072  [2024-12-09 04:16:37.347107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.072  [2024-12-09 04:16:37.347159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.072  qpair failed and we were unable to recover it.
00:26:09.072  [2024-12-09 04:16:37.347356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.072  [2024-12-09 04:16:37.347408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.072  qpair failed and we were unable to recover it.
00:26:09.072  [2024-12-09 04:16:37.347564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.072  [2024-12-09 04:16:37.347615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.072  qpair failed and we were unable to recover it.
00:26:09.072  [2024-12-09 04:16:37.347811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.072  [2024-12-09 04:16:37.347862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.072  qpair failed and we were unable to recover it.
00:26:09.072  [2024-12-09 04:16:37.348040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.072  [2024-12-09 04:16:37.348091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.072  qpair failed and we were unable to recover it.
00:26:09.072  [2024-12-09 04:16:37.348250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.072  [2024-12-09 04:16:37.348313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.072  qpair failed and we were unable to recover it.
00:26:09.072  [2024-12-09 04:16:37.348548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.072  [2024-12-09 04:16:37.348599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.072  qpair failed and we were unable to recover it.
00:26:09.072  [2024-12-09 04:16:37.348802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.072  [2024-12-09 04:16:37.348852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.072  qpair failed and we were unable to recover it.
00:26:09.072  [2024-12-09 04:16:37.349074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.072  [2024-12-09 04:16:37.349125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.072  qpair failed and we were unable to recover it.
00:26:09.072  [2024-12-09 04:16:37.349318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.072  [2024-12-09 04:16:37.349374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.072  qpair failed and we were unable to recover it.
00:26:09.072  [2024-12-09 04:16:37.349626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.072  [2024-12-09 04:16:37.349680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.072  qpair failed and we were unable to recover it.
00:26:09.072  [2024-12-09 04:16:37.349928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.072  [2024-12-09 04:16:37.349982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.072  qpair failed and we were unable to recover it.
00:26:09.072  [2024-12-09 04:16:37.350205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.072  [2024-12-09 04:16:37.350260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.072  qpair failed and we were unable to recover it.
00:26:09.072  [2024-12-09 04:16:37.350489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.072  [2024-12-09 04:16:37.350543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.072  qpair failed and we were unable to recover it.
00:26:09.072  [2024-12-09 04:16:37.350748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.072  [2024-12-09 04:16:37.350802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.072  qpair failed and we were unable to recover it.
00:26:09.072  [2024-12-09 04:16:37.351024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.072  [2024-12-09 04:16:37.351078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.072  qpair failed and we were unable to recover it.
00:26:09.072  [2024-12-09 04:16:37.351296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.072  [2024-12-09 04:16:37.351352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.072  qpair failed and we were unable to recover it.
00:26:09.072  [2024-12-09 04:16:37.351533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.072  [2024-12-09 04:16:37.351599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.072  qpair failed and we were unable to recover it.
00:26:09.072  [2024-12-09 04:16:37.351820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.072  [2024-12-09 04:16:37.351875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.072  qpair failed and we were unable to recover it.
00:26:09.072  [2024-12-09 04:16:37.352079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.072  [2024-12-09 04:16:37.352135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.072  qpair failed and we were unable to recover it.
00:26:09.072  [2024-12-09 04:16:37.352340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.072  [2024-12-09 04:16:37.352397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.072  qpair failed and we were unable to recover it.
00:26:09.072  [2024-12-09 04:16:37.352613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.072  [2024-12-09 04:16:37.352670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.072  qpair failed and we were unable to recover it.
00:26:09.072  [2024-12-09 04:16:37.352849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.072  [2024-12-09 04:16:37.352903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.072  qpair failed and we were unable to recover it.
00:26:09.072  [2024-12-09 04:16:37.353055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.072  [2024-12-09 04:16:37.353109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.072  qpair failed and we were unable to recover it.
00:26:09.072  [2024-12-09 04:16:37.353341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.072  [2024-12-09 04:16:37.353398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.072  qpair failed and we were unable to recover it.
00:26:09.072  [2024-12-09 04:16:37.353615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.072  [2024-12-09 04:16:37.353670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.072  qpair failed and we were unable to recover it.
00:26:09.072  [2024-12-09 04:16:37.353876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.072  [2024-12-09 04:16:37.353931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.072  qpair failed and we were unable to recover it.
00:26:09.072  [2024-12-09 04:16:37.354180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.072  [2024-12-09 04:16:37.354233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.072  qpair failed and we were unable to recover it.
00:26:09.073  [2024-12-09 04:16:37.354450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.073  [2024-12-09 04:16:37.354507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.073  qpair failed and we were unable to recover it.
00:26:09.073  [2024-12-09 04:16:37.354712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.073  [2024-12-09 04:16:37.354769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.073  qpair failed and we were unable to recover it.
00:26:09.073  [2024-12-09 04:16:37.355027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.073  [2024-12-09 04:16:37.355082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.073  qpair failed and we were unable to recover it.
00:26:09.073  [2024-12-09 04:16:37.355343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.073  [2024-12-09 04:16:37.355399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.073  qpair failed and we were unable to recover it.
00:26:09.073  [2024-12-09 04:16:37.355617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.073  [2024-12-09 04:16:37.355674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.073  qpair failed and we were unable to recover it.
00:26:09.073  [2024-12-09 04:16:37.355847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.073  [2024-12-09 04:16:37.355908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.073  qpair failed and we were unable to recover it.
00:26:09.073  [2024-12-09 04:16:37.356112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.073  [2024-12-09 04:16:37.356166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.073  qpair failed and we were unable to recover it.
00:26:09.073  [2024-12-09 04:16:37.356421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.073  [2024-12-09 04:16:37.356476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.073  qpair failed and we were unable to recover it.
00:26:09.073  [2024-12-09 04:16:37.356682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.073  [2024-12-09 04:16:37.356737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.073  qpair failed and we were unable to recover it.
00:26:09.073  [2024-12-09 04:16:37.356986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.073  [2024-12-09 04:16:37.357050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.073  qpair failed and we were unable to recover it.
00:26:09.073  [2024-12-09 04:16:37.357292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.073  [2024-12-09 04:16:37.357371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.073  qpair failed and we were unable to recover it.
00:26:09.073  [2024-12-09 04:16:37.357572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.073  [2024-12-09 04:16:37.357627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.073  qpair failed and we were unable to recover it.
00:26:09.073  [2024-12-09 04:16:37.357789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.073  [2024-12-09 04:16:37.357845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.073  qpair failed and we were unable to recover it.
00:26:09.073  [2024-12-09 04:16:37.358024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.073  [2024-12-09 04:16:37.358078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.073  qpair failed and we were unable to recover it.
00:26:09.073  [2024-12-09 04:16:37.358334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.073  [2024-12-09 04:16:37.358393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.073  qpair failed and we were unable to recover it.
00:26:09.073  [2024-12-09 04:16:37.358648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.073  [2024-12-09 04:16:37.358703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.073  qpair failed and we were unable to recover it.
00:26:09.073  [2024-12-09 04:16:37.358938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.073  [2024-12-09 04:16:37.358995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.073  qpair failed and we were unable to recover it.
00:26:09.073  [2024-12-09 04:16:37.359259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.073  [2024-12-09 04:16:37.359327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.073  qpair failed and we were unable to recover it.
00:26:09.073  [2024-12-09 04:16:37.359577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.073  [2024-12-09 04:16:37.359632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.073  qpair failed and we were unable to recover it.
00:26:09.073  [2024-12-09 04:16:37.359814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.073  [2024-12-09 04:16:37.359868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.073  qpair failed and we were unable to recover it.
00:26:09.073  [2024-12-09 04:16:37.360108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.073  [2024-12-09 04:16:37.360162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.073  qpair failed and we were unable to recover it.
00:26:09.073  [2024-12-09 04:16:37.360381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.073  [2024-12-09 04:16:37.360438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.073  qpair failed and we were unable to recover it.
00:26:09.073  [2024-12-09 04:16:37.360663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.073  [2024-12-09 04:16:37.360717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.073  qpair failed and we were unable to recover it.
00:26:09.073  [2024-12-09 04:16:37.360948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.073  [2024-12-09 04:16:37.361002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.073  qpair failed and we were unable to recover it.
00:26:09.073  [2024-12-09 04:16:37.361249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.073  [2024-12-09 04:16:37.361316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.073  qpair failed and we were unable to recover it.
00:26:09.073  [2024-12-09 04:16:37.361580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.073  [2024-12-09 04:16:37.361634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.073  qpair failed and we were unable to recover it.
00:26:09.073  [2024-12-09 04:16:37.361800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.073  [2024-12-09 04:16:37.361855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.073  qpair failed and we were unable to recover it.
00:26:09.073  [2024-12-09 04:16:37.362053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.073  [2024-12-09 04:16:37.362115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.073  qpair failed and we were unable to recover it.
00:26:09.073  [2024-12-09 04:16:37.362381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.073  [2024-12-09 04:16:37.362440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.073  qpair failed and we were unable to recover it.
00:26:09.073  [2024-12-09 04:16:37.362679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.073  [2024-12-09 04:16:37.362748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.073  qpair failed and we were unable to recover it.
00:26:09.073  [2024-12-09 04:16:37.362939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.073  [2024-12-09 04:16:37.363003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.073  qpair failed and we were unable to recover it.
00:26:09.073  [2024-12-09 04:16:37.363286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.073  [2024-12-09 04:16:37.363342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.073  qpair failed and we were unable to recover it.
00:26:09.073  [2024-12-09 04:16:37.363521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.073  [2024-12-09 04:16:37.363575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.073  qpair failed and we were unable to recover it.
00:26:09.073  [2024-12-09 04:16:37.363827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.073  [2024-12-09 04:16:37.363882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.073  qpair failed and we were unable to recover it.
00:26:09.073  [2024-12-09 04:16:37.364104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.073  [2024-12-09 04:16:37.364160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.073  qpair failed and we were unable to recover it.
00:26:09.073  [2024-12-09 04:16:37.364400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.073  [2024-12-09 04:16:37.364456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.073  qpair failed and we were unable to recover it.
00:26:09.073  [2024-12-09 04:16:37.364653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.073  [2024-12-09 04:16:37.364712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.073  qpair failed and we were unable to recover it.
00:26:09.073  [2024-12-09 04:16:37.364925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.073  [2024-12-09 04:16:37.364989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.073  qpair failed and we were unable to recover it.
00:26:09.074  [2024-12-09 04:16:37.365151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.074  [2024-12-09 04:16:37.365206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.074  qpair failed and we were unable to recover it.
00:26:09.074  [2024-12-09 04:16:37.365381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.074  [2024-12-09 04:16:37.365434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.074  qpair failed and we were unable to recover it.
00:26:09.074  [2024-12-09 04:16:37.365634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.074  [2024-12-09 04:16:37.365689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.074  qpair failed and we were unable to recover it.
00:26:09.074  [2024-12-09 04:16:37.365873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.074  [2024-12-09 04:16:37.365934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.074  qpair failed and we were unable to recover it.
00:26:09.074  [2024-12-09 04:16:37.366166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.074  [2024-12-09 04:16:37.366220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.074  qpair failed and we were unable to recover it.
00:26:09.074  [2024-12-09 04:16:37.366491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.074  [2024-12-09 04:16:37.366553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.074  qpair failed and we were unable to recover it.
00:26:09.074  [2024-12-09 04:16:37.366781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.074  [2024-12-09 04:16:37.366840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.074  qpair failed and we were unable to recover it.
00:26:09.074  [2024-12-09 04:16:37.367105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.074  [2024-12-09 04:16:37.367163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.074  qpair failed and we were unable to recover it.
00:26:09.074  [2024-12-09 04:16:37.367438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.074  [2024-12-09 04:16:37.367497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.074  qpair failed and we were unable to recover it.
00:26:09.074  [2024-12-09 04:16:37.367698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.074  [2024-12-09 04:16:37.367753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.074  qpair failed and we were unable to recover it.
00:26:09.074  [2024-12-09 04:16:37.367967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.074  [2024-12-09 04:16:37.368021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.074  qpair failed and we were unable to recover it.
00:26:09.074  [2024-12-09 04:16:37.368220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.074  [2024-12-09 04:16:37.368294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.074  qpair failed and we were unable to recover it.
00:26:09.074  [2024-12-09 04:16:37.368507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.074  [2024-12-09 04:16:37.368562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.074  qpair failed and we were unable to recover it.
00:26:09.074  [2024-12-09 04:16:37.368730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.074  [2024-12-09 04:16:37.368787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.074  qpair failed and we were unable to recover it.
00:26:09.074  [2024-12-09 04:16:37.368996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.074  [2024-12-09 04:16:37.369051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.074  qpair failed and we were unable to recover it.
00:26:09.074  [2024-12-09 04:16:37.369299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.074  [2024-12-09 04:16:37.369381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.074  qpair failed and we were unable to recover it.
00:26:09.074  [2024-12-09 04:16:37.369558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.074  [2024-12-09 04:16:37.369637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.074  qpair failed and we were unable to recover it.
00:26:09.074  [2024-12-09 04:16:37.369899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.074  [2024-12-09 04:16:37.369963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.074  qpair failed and we were unable to recover it.
00:26:09.074  [2024-12-09 04:16:37.370292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.074  [2024-12-09 04:16:37.370367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.074  qpair failed and we were unable to recover it.
00:26:09.074  [2024-12-09 04:16:37.370587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.074  [2024-12-09 04:16:37.370643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.074  qpair failed and we were unable to recover it.
00:26:09.074  [2024-12-09 04:16:37.370903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.074  [2024-12-09 04:16:37.370957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.074  qpair failed and we were unable to recover it.
00:26:09.074  [2024-12-09 04:16:37.371203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.074  [2024-12-09 04:16:37.371257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.074  qpair failed and we were unable to recover it.
00:26:09.074  [2024-12-09 04:16:37.371476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.074  [2024-12-09 04:16:37.371531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.074  qpair failed and we were unable to recover it.
00:26:09.074  [2024-12-09 04:16:37.371742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.074  [2024-12-09 04:16:37.371797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.074  qpair failed and we were unable to recover it.
00:26:09.074  [2024-12-09 04:16:37.371964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.074  [2024-12-09 04:16:37.372018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.074  qpair failed and we were unable to recover it.
00:26:09.074  [2024-12-09 04:16:37.372235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.074  [2024-12-09 04:16:37.372314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.074  qpair failed and we were unable to recover it.
00:26:09.074  [2024-12-09 04:16:37.372531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.074  [2024-12-09 04:16:37.372586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.074  qpair failed and we were unable to recover it.
00:26:09.074  [2024-12-09 04:16:37.372839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.074  [2024-12-09 04:16:37.372893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.074  qpair failed and we were unable to recover it.
00:26:09.074  [2024-12-09 04:16:37.373089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.074  [2024-12-09 04:16:37.373150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.074  qpair failed and we were unable to recover it.
00:26:09.074  [2024-12-09 04:16:37.373415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.074  [2024-12-09 04:16:37.373476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.074  qpair failed and we were unable to recover it.
00:26:09.074  [2024-12-09 04:16:37.373741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.074  [2024-12-09 04:16:37.373800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.074  qpair failed and we were unable to recover it.
00:26:09.074  [2024-12-09 04:16:37.373999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.074  [2024-12-09 04:16:37.374068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.074  qpair failed and we were unable to recover it.
00:26:09.074  [2024-12-09 04:16:37.374302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.074  [2024-12-09 04:16:37.374362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.074  qpair failed and we were unable to recover it.
00:26:09.074  [2024-12-09 04:16:37.374622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.075  [2024-12-09 04:16:37.374681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.075  qpair failed and we were unable to recover it.
00:26:09.075  [2024-12-09 04:16:37.374945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.075  [2024-12-09 04:16:37.375002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.075  qpair failed and we were unable to recover it.
00:26:09.075  [2024-12-09 04:16:37.375262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.075  [2024-12-09 04:16:37.375332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.075  qpair failed and we were unable to recover it.
00:26:09.075  [2024-12-09 04:16:37.375554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.075  [2024-12-09 04:16:37.375613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.075  qpair failed and we were unable to recover it.
00:26:09.075  [2024-12-09 04:16:37.375886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.075  [2024-12-09 04:16:37.375943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.075  qpair failed and we were unable to recover it.
00:26:09.075  [2024-12-09 04:16:37.376165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.075  [2024-12-09 04:16:37.376224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.075  qpair failed and we were unable to recover it.
00:26:09.075  [2024-12-09 04:16:37.376467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.075  [2024-12-09 04:16:37.376526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.075  qpair failed and we were unable to recover it.
00:26:09.075  [2024-12-09 04:16:37.376687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.075  [2024-12-09 04:16:37.376745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.075  qpair failed and we were unable to recover it.
00:26:09.075  [2024-12-09 04:16:37.376945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.075  [2024-12-09 04:16:37.377004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.075  qpair failed and we were unable to recover it.
00:26:09.075  [2024-12-09 04:16:37.377244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.075  [2024-12-09 04:16:37.377322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.075  qpair failed and we were unable to recover it.
00:26:09.075  [2024-12-09 04:16:37.377547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.075  [2024-12-09 04:16:37.377605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.075  qpair failed and we were unable to recover it.
00:26:09.075  [2024-12-09 04:16:37.377887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.075  [2024-12-09 04:16:37.377946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.075  qpair failed and we were unable to recover it.
00:26:09.075  [2024-12-09 04:16:37.378174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.075  [2024-12-09 04:16:37.378233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.075  qpair failed and we were unable to recover it.
00:26:09.075  [2024-12-09 04:16:37.378458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.075  [2024-12-09 04:16:37.378519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.075  qpair failed and we were unable to recover it.
00:26:09.075  [2024-12-09 04:16:37.378746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.075  [2024-12-09 04:16:37.378805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.075  qpair failed and we were unable to recover it.
00:26:09.075  [2024-12-09 04:16:37.379076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.075  [2024-12-09 04:16:37.379135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.075  qpair failed and we were unable to recover it.
00:26:09.075  [2024-12-09 04:16:37.379352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.075  [2024-12-09 04:16:37.379412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.075  qpair failed and we were unable to recover it.
00:26:09.075  [2024-12-09 04:16:37.379586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.075  [2024-12-09 04:16:37.379646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.075  qpair failed and we were unable to recover it.
00:26:09.075  [2024-12-09 04:16:37.379837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.075  [2024-12-09 04:16:37.379902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.075  qpair failed and we were unable to recover it.
00:26:09.075  [2024-12-09 04:16:37.380174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.075  [2024-12-09 04:16:37.380233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.075  qpair failed and we were unable to recover it.
00:26:09.075  [2024-12-09 04:16:37.380505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.075  [2024-12-09 04:16:37.380567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.075  qpair failed and we were unable to recover it.
00:26:09.075  [ ... 71 further identical cycles elided: connect() failed, errno = 111 (posix.c:1054) followed by nvme_tcp_qpair_connect_sock error for tqpair=0x7fe5c0000b90, addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.", 04:16:37.380 - 04:16:37.402 ... ]
00:26:09.077  [2024-12-09 04:16:37.402554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.077  [2024-12-09 04:16:37.402651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.077  qpair failed and we were unable to recover it.
00:26:09.077  [ ... 5 further identical cycles elided: connect() failed, errno = 111 (posix.c:1054) followed by nvme_tcp_qpair_connect_sock error for tqpair=0x1818fa0, addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.", 04:16:37.402 - 04:16:37.404 ... ]
00:26:09.077  [2024-12-09 04:16:37.404449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.077  [2024-12-09 04:16:37.404521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.077  qpair failed and we were unable to recover it.
00:26:09.077  [ ... 24 further identical cycles elided: connect() failed, errno = 111 (posix.c:1054) followed by nvme_tcp_qpair_connect_sock error for tqpair=0x7fe5c0000b90, addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.", 04:16:37.404 - 04:16:37.412 ... ]
00:26:09.078  [2024-12-09 04:16:37.412917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.078  [2024-12-09 04:16:37.412984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.078  qpair failed and we were unable to recover it.
00:26:09.078  [2024-12-09 04:16:37.413224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.078  [2024-12-09 04:16:37.413305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.078  qpair failed and we were unable to recover it.
00:26:09.078  [2024-12-09 04:16:37.413498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.078  [2024-12-09 04:16:37.413563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.078  qpair failed and we were unable to recover it.
00:26:09.078  [2024-12-09 04:16:37.413826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.078  [2024-12-09 04:16:37.413890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.078  qpair failed and we were unable to recover it.
00:26:09.078  [2024-12-09 04:16:37.414140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.078  [2024-12-09 04:16:37.414203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.078  qpair failed and we were unable to recover it.
00:26:09.078  [2024-12-09 04:16:37.414469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.078  [2024-12-09 04:16:37.414535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.078  qpair failed and we were unable to recover it.
00:26:09.078  [2024-12-09 04:16:37.414819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.078  [2024-12-09 04:16:37.414883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.078  qpair failed and we were unable to recover it.
00:26:09.078  [2024-12-09 04:16:37.415096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.078  [2024-12-09 04:16:37.415159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.078  qpair failed and we were unable to recover it.
00:26:09.078  [2024-12-09 04:16:37.415440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.078  [2024-12-09 04:16:37.415507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.078  qpair failed and we were unable to recover it.
00:26:09.078  [2024-12-09 04:16:37.415803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.078  [2024-12-09 04:16:37.415867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.078  qpair failed and we were unable to recover it.
00:26:09.078  [2024-12-09 04:16:37.416059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.078  [2024-12-09 04:16:37.416125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.078  qpair failed and we were unable to recover it.
00:26:09.078  [2024-12-09 04:16:37.416414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.078  [2024-12-09 04:16:37.416481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.078  qpair failed and we were unable to recover it.
00:26:09.078  [2024-12-09 04:16:37.416698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.078  [2024-12-09 04:16:37.416762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.078  qpair failed and we were unable to recover it.
00:26:09.078  [2024-12-09 04:16:37.417022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.078  [2024-12-09 04:16:37.417086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.078  qpair failed and we were unable to recover it.
00:26:09.078  [2024-12-09 04:16:37.417372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.078  [2024-12-09 04:16:37.417438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.078  qpair failed and we were unable to recover it.
00:26:09.078  [2024-12-09 04:16:37.417686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.078  [2024-12-09 04:16:37.417750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.078  qpair failed and we were unable to recover it.
00:26:09.078  [2024-12-09 04:16:37.418004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.078  [2024-12-09 04:16:37.418071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.078  qpair failed and we were unable to recover it.
00:26:09.078  [2024-12-09 04:16:37.418316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.078  [2024-12-09 04:16:37.418380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.078  qpair failed and we were unable to recover it.
00:26:09.078  [2024-12-09 04:16:37.418642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.078  [2024-12-09 04:16:37.418705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.078  qpair failed and we were unable to recover it.
00:26:09.078  [2024-12-09 04:16:37.418946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.078  [2024-12-09 04:16:37.419009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.078  qpair failed and we were unable to recover it.
00:26:09.078  [2024-12-09 04:16:37.419301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.078  [2024-12-09 04:16:37.419365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.078  qpair failed and we were unable to recover it.
00:26:09.078  [2024-12-09 04:16:37.419570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.078  [2024-12-09 04:16:37.419637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.078  qpair failed and we were unable to recover it.
00:26:09.078  [2024-12-09 04:16:37.419877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.078  [2024-12-09 04:16:37.419944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.078  qpair failed and we were unable to recover it.
00:26:09.078  [2024-12-09 04:16:37.420172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.078  [2024-12-09 04:16:37.420235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.078  qpair failed and we were unable to recover it.
00:26:09.078  [2024-12-09 04:16:37.420558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.078  [2024-12-09 04:16:37.420622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.078  qpair failed and we were unable to recover it.
00:26:09.078  [2024-12-09 04:16:37.420857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.078  [2024-12-09 04:16:37.420923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.078  qpair failed and we were unable to recover it.
00:26:09.078  [2024-12-09 04:16:37.421213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.078  [2024-12-09 04:16:37.421297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.078  qpair failed and we were unable to recover it.
00:26:09.078  [2024-12-09 04:16:37.421520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.078  [2024-12-09 04:16:37.421587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.078  qpair failed and we were unable to recover it.
00:26:09.078  [2024-12-09 04:16:37.421841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079  [2024-12-09 04:16:37.421906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.079  qpair failed and we were unable to recover it.
00:26:09.079  [2024-12-09 04:16:37.422164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079  [2024-12-09 04:16:37.422235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.079  qpair failed and we were unable to recover it.
00:26:09.079  [2024-12-09 04:16:37.422441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079  [2024-12-09 04:16:37.422505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.079  qpair failed and we were unable to recover it.
00:26:09.079  [2024-12-09 04:16:37.422692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079  [2024-12-09 04:16:37.422756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.079  qpair failed and we were unable to recover it.
00:26:09.079  [2024-12-09 04:16:37.422997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079  [2024-12-09 04:16:37.423060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.079  qpair failed and we were unable to recover it.
00:26:09.079  [2024-12-09 04:16:37.423331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079  [2024-12-09 04:16:37.423396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.079  qpair failed and we were unable to recover it.
00:26:09.079  [2024-12-09 04:16:37.423605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079  [2024-12-09 04:16:37.423669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.079  qpair failed and we were unable to recover it.
00:26:09.079  [2024-12-09 04:16:37.423911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079  [2024-12-09 04:16:37.423987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.079  qpair failed and we were unable to recover it.
00:26:09.079  [2024-12-09 04:16:37.424217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079  [2024-12-09 04:16:37.424301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.079  qpair failed and we were unable to recover it.
00:26:09.079  [2024-12-09 04:16:37.424581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079  [2024-12-09 04:16:37.424645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.079  qpair failed and we were unable to recover it.
00:26:09.079  [2024-12-09 04:16:37.424829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079  [2024-12-09 04:16:37.424893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.079  qpair failed and we were unable to recover it.
00:26:09.079  [2024-12-09 04:16:37.425139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079  [2024-12-09 04:16:37.425202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.079  qpair failed and we were unable to recover it.
00:26:09.079  [2024-12-09 04:16:37.425453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079  [2024-12-09 04:16:37.425518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.079  qpair failed and we were unable to recover it.
00:26:09.079  [2024-12-09 04:16:37.425767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079  [2024-12-09 04:16:37.425830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.079  qpair failed and we were unable to recover it.
00:26:09.079  [2024-12-09 04:16:37.426068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079  [2024-12-09 04:16:37.426130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.079  qpair failed and we were unable to recover it.
00:26:09.079  [2024-12-09 04:16:37.426415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079  [2024-12-09 04:16:37.426481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.079  qpair failed and we were unable to recover it.
00:26:09.079  [2024-12-09 04:16:37.426682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079  [2024-12-09 04:16:37.426744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.079  qpair failed and we were unable to recover it.
00:26:09.079  [2024-12-09 04:16:37.427027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079  [2024-12-09 04:16:37.427091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.079  qpair failed and we were unable to recover it.
00:26:09.079  [2024-12-09 04:16:37.427348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079  [2024-12-09 04:16:37.427413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.079  qpair failed and we were unable to recover it.
00:26:09.079  [2024-12-09 04:16:37.427705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079  [2024-12-09 04:16:37.427768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.079  qpair failed and we were unable to recover it.
00:26:09.079  [2024-12-09 04:16:37.428048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079  [2024-12-09 04:16:37.428111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.079  qpair failed and we were unable to recover it.
00:26:09.079  [2024-12-09 04:16:37.428372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079  [2024-12-09 04:16:37.428438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.079  qpair failed and we were unable to recover it.
00:26:09.079  [2024-12-09 04:16:37.428695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079  [2024-12-09 04:16:37.428758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.079  qpair failed and we were unable to recover it.
00:26:09.079  [2024-12-09 04:16:37.429007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079  [2024-12-09 04:16:37.429069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.079  qpair failed and we were unable to recover it.
00:26:09.079  [2024-12-09 04:16:37.429254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079  [2024-12-09 04:16:37.429333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.079  qpair failed and we were unable to recover it.
00:26:09.079  [2024-12-09 04:16:37.429579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079  [2024-12-09 04:16:37.429643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.079  qpair failed and we were unable to recover it.
00:26:09.079  [2024-12-09 04:16:37.429933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079  [2024-12-09 04:16:37.429999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.079  qpair failed and we were unable to recover it.
00:26:09.079  [2024-12-09 04:16:37.430303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079  [2024-12-09 04:16:37.430370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.079  qpair failed and we were unable to recover it.
00:26:09.079  [2024-12-09 04:16:37.430669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079  [2024-12-09 04:16:37.430732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.079  qpair failed and we were unable to recover it.
00:26:09.079  [2024-12-09 04:16:37.431016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079  [2024-12-09 04:16:37.431087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.079  qpair failed and we were unable to recover it.
00:26:09.079  [2024-12-09 04:16:37.431314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079  [2024-12-09 04:16:37.431381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.079  qpair failed and we were unable to recover it.
00:26:09.079  [2024-12-09 04:16:37.431619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079  [2024-12-09 04:16:37.431685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.079  qpair failed and we were unable to recover it.
00:26:09.079  [2024-12-09 04:16:37.431975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079  [2024-12-09 04:16:37.432049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.079  qpair failed and we were unable to recover it.
00:26:09.079  [2024-12-09 04:16:37.432305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079  [2024-12-09 04:16:37.432371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.079  qpair failed and we were unable to recover it.
00:26:09.079  [2024-12-09 04:16:37.432617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.080  [2024-12-09 04:16:37.432683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.080  qpair failed and we were unable to recover it.
00:26:09.080  [2024-12-09 04:16:37.432926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.080  [2024-12-09 04:16:37.432991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.080  qpair failed and we were unable to recover it.
00:26:09.080  [2024-12-09 04:16:37.433231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.080  [2024-12-09 04:16:37.433315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.080  qpair failed and we were unable to recover it.
00:26:09.080  [2024-12-09 04:16:37.433604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.080  [2024-12-09 04:16:37.433667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.080  qpair failed and we were unable to recover it.
00:26:09.080  [2024-12-09 04:16:37.433912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.080  [2024-12-09 04:16:37.433976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.080  qpair failed and we were unable to recover it.
00:26:09.080  [2024-12-09 04:16:37.434223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.080  [2024-12-09 04:16:37.434304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.080  qpair failed and we were unable to recover it.
00:26:09.080  [2024-12-09 04:16:37.434562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.080  [2024-12-09 04:16:37.434625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.080  qpair failed and we were unable to recover it.
00:26:09.080  [2024-12-09 04:16:37.434808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.080  [2024-12-09 04:16:37.434874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.080  qpair failed and we were unable to recover it.
00:26:09.080  [2024-12-09 04:16:37.435130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.080  [2024-12-09 04:16:37.435195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.080  qpair failed and we were unable to recover it.
00:26:09.080  [2024-12-09 04:16:37.435457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.080  [2024-12-09 04:16:37.435522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.080  qpair failed and we were unable to recover it.
00:26:09.080  [2024-12-09 04:16:37.435711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.080  [2024-12-09 04:16:37.435774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.080  qpair failed and we were unable to recover it.
00:26:09.080  [2024-12-09 04:16:37.435970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.080  [2024-12-09 04:16:37.436034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.080  qpair failed and we were unable to recover it.
00:26:09.080  [2024-12-09 04:16:37.436298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.080  [2024-12-09 04:16:37.436363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.080  qpair failed and we were unable to recover it.
00:26:09.080  [2024-12-09 04:16:37.436599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.080  [2024-12-09 04:16:37.436674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.080  qpair failed and we were unable to recover it.
00:26:09.080  [2024-12-09 04:16:37.436916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.080  [2024-12-09 04:16:37.436979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.080  qpair failed and we were unable to recover it.
00:26:09.080  [2024-12-09 04:16:37.437236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.080  [2024-12-09 04:16:37.437320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.080  qpair failed and we were unable to recover it.
00:26:09.080  [2024-12-09 04:16:37.437514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.080  [2024-12-09 04:16:37.437577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.080  qpair failed and we were unable to recover it.
00:26:09.080  [2024-12-09 04:16:37.437739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.080  [2024-12-09 04:16:37.437802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.080  qpair failed and we were unable to recover it.
00:26:09.080  [2024-12-09 04:16:37.438046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.080  [2024-12-09 04:16:37.438111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.080  qpair failed and we were unable to recover it.
00:26:09.080  [2024-12-09 04:16:37.438353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.080  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 345207 Killed                  "${NVMF_APP[@]}" "$@"
00:26:09.080   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:26:09.081   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:26:09.081  [2024-12-09 04:16:37.444642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.081  [2024-12-09 04:16:37.444739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.081   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:09.081  qpair failed and we were unable to recover it.
00:26:09.081   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:09.081   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:09.081   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=345755
00:26:09.081   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:26:09.081   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 345755
00:26:09.081   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 345755 ']'
00:26:09.081   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:09.081   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:09.081   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:09.081  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:09.081   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:09.081   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:09.081  qpair failed and we were unable to recover it.
00:26:09.081  [2024-12-09 04:16:37.451335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.081  [2024-12-09 04:16:37.451399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.081  qpair failed and we were unable to recover it.
00:26:09.081  [2024-12-09 04:16:37.451669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.081  [2024-12-09 04:16:37.451730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.081  qpair failed and we were unable to recover it.
00:26:09.081  [2024-12-09 04:16:37.451976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.081  [2024-12-09 04:16:37.452037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.081  qpair failed and we were unable to recover it.
00:26:09.081  [2024-12-09 04:16:37.452265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.081  [2024-12-09 04:16:37.452347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.081  qpair failed and we were unable to recover it.
00:26:09.081  [2024-12-09 04:16:37.452645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.081  [2024-12-09 04:16:37.452708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.081  qpair failed and we were unable to recover it.
00:26:09.081  [2024-12-09 04:16:37.452891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.081  [2024-12-09 04:16:37.452955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.081  qpair failed and we were unable to recover it.
00:26:09.081  [2024-12-09 04:16:37.453210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.081  [2024-12-09 04:16:37.453290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.081  qpair failed and we were unable to recover it.
00:26:09.081  [2024-12-09 04:16:37.453585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.081  [2024-12-09 04:16:37.453650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.081  qpair failed and we were unable to recover it.
00:26:09.081  [2024-12-09 04:16:37.453844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.081  [2024-12-09 04:16:37.453907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.081  qpair failed and we were unable to recover it.
00:26:09.081  [2024-12-09 04:16:37.454117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.081  [2024-12-09 04:16:37.454181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.081  qpair failed and we were unable to recover it.
00:26:09.082  [2024-12-09 04:16:37.454415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.082  [2024-12-09 04:16:37.454480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.082  qpair failed and we were unable to recover it.
00:26:09.082  [2024-12-09 04:16:37.454713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.082  [2024-12-09 04:16:37.454777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.082  qpair failed and we were unable to recover it.
00:26:09.082  [2024-12-09 04:16:37.455017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.082  [2024-12-09 04:16:37.455080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.082  qpair failed and we were unable to recover it.
00:26:09.082  [2024-12-09 04:16:37.455353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.082  [2024-12-09 04:16:37.455418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.082  qpair failed and we were unable to recover it.
00:26:09.082  [2024-12-09 04:16:37.455635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.082  [2024-12-09 04:16:37.455712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.082  qpair failed and we were unable to recover it.
00:26:09.082  [2024-12-09 04:16:37.455959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.082  [2024-12-09 04:16:37.456026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.082  qpair failed and we were unable to recover it.
00:26:09.082  [2024-12-09 04:16:37.456305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.082  [2024-12-09 04:16:37.456379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.082  qpair failed and we were unable to recover it.
00:26:09.082  [2024-12-09 04:16:37.456601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.082  [2024-12-09 04:16:37.456664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.082  qpair failed and we were unable to recover it.
00:26:09.082  [2024-12-09 04:16:37.456871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.082  [2024-12-09 04:16:37.456935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.082  qpair failed and we were unable to recover it.
00:26:09.082  [2024-12-09 04:16:37.457186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.082  [2024-12-09 04:16:37.457249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.082  qpair failed and we were unable to recover it.
00:26:09.082  [2024-12-09 04:16:37.457485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.082  [2024-12-09 04:16:37.457548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.082  qpair failed and we were unable to recover it.
00:26:09.082  [2024-12-09 04:16:37.457787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.082  [2024-12-09 04:16:37.457851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.082  qpair failed and we were unable to recover it.
00:26:09.082  [2024-12-09 04:16:37.458099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.082  [2024-12-09 04:16:37.458163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.082  qpair failed and we were unable to recover it.
00:26:09.082  [2024-12-09 04:16:37.458364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.082  [2024-12-09 04:16:37.458428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.082  qpair failed and we were unable to recover it.
00:26:09.082  [2024-12-09 04:16:37.458640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.082  [2024-12-09 04:16:37.458703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.082  qpair failed and we were unable to recover it.
00:26:09.082  [2024-12-09 04:16:37.458943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.082  [2024-12-09 04:16:37.459005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.082  qpair failed and we were unable to recover it.
00:26:09.082  [2024-12-09 04:16:37.459250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.082  [2024-12-09 04:16:37.459344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.082  qpair failed and we were unable to recover it.
00:26:09.082  [2024-12-09 04:16:37.459561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.082  [2024-12-09 04:16:37.459624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.082  qpair failed and we were unable to recover it.
00:26:09.082  [2024-12-09 04:16:37.459887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.082  [2024-12-09 04:16:37.459951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.082  qpair failed and we were unable to recover it.
00:26:09.082  [2024-12-09 04:16:37.460198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.082  [2024-12-09 04:16:37.460262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.082  qpair failed and we were unable to recover it.
00:26:09.082  [2024-12-09 04:16:37.460540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.082  [2024-12-09 04:16:37.460608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.082  qpair failed and we were unable to recover it.
00:26:09.082  [2024-12-09 04:16:37.460807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.082  [2024-12-09 04:16:37.460871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.082  qpair failed and we were unable to recover it.
00:26:09.082  [2024-12-09 04:16:37.461110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.082  [2024-12-09 04:16:37.461173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.082  qpair failed and we were unable to recover it.
00:26:09.082  [2024-12-09 04:16:37.461490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.082  [2024-12-09 04:16:37.461559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.082  qpair failed and we were unable to recover it.
00:26:09.082  [2024-12-09 04:16:37.461857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.082  [2024-12-09 04:16:37.461922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.082  qpair failed and we were unable to recover it.
00:26:09.082  [2024-12-09 04:16:37.462171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.082  [2024-12-09 04:16:37.462235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.082  qpair failed and we were unable to recover it.
00:26:09.082  [2024-12-09 04:16:37.462553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.082  [2024-12-09 04:16:37.462617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.082  qpair failed and we were unable to recover it.
00:26:09.082  [2024-12-09 04:16:37.462822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.082  [2024-12-09 04:16:37.462885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.082  qpair failed and we were unable to recover it.
00:26:09.082  [2024-12-09 04:16:37.463131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.082  [2024-12-09 04:16:37.463193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.082  qpair failed and we were unable to recover it.
00:26:09.082  [2024-12-09 04:16:37.463414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.082  [2024-12-09 04:16:37.463480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.082  qpair failed and we were unable to recover it.
00:26:09.082  [2024-12-09 04:16:37.463708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.082  [2024-12-09 04:16:37.463774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.082  qpair failed and we were unable to recover it.
00:26:09.082  [2024-12-09 04:16:37.463994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.082  [2024-12-09 04:16:37.464069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.082  qpair failed and we were unable to recover it.
00:26:09.082  [2024-12-09 04:16:37.464357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.082  [2024-12-09 04:16:37.464423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.082  qpair failed and we were unable to recover it.
00:26:09.082  [2024-12-09 04:16:37.464669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.082  [2024-12-09 04:16:37.464732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.082  qpair failed and we were unable to recover it.
00:26:09.082  [2024-12-09 04:16:37.465019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.082  [2024-12-09 04:16:37.465081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.082  qpair failed and we were unable to recover it.
00:26:09.082  [2024-12-09 04:16:37.465301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.082  [2024-12-09 04:16:37.465366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.082  qpair failed and we were unable to recover it.
00:26:09.082  [... the preceding connect()/qpair error triplet (posix.c:1054 connect() errno = 111, nvme_tcp.c:2288 sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420, qpair unrecoverable) repeats ~103 more times between 2024-12-09 04:16:37.465604 and 04:16:37.498694 ...]
00:26:09.085  [2024-12-09 04:16:37.498694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.085  [2024-12-09 04:16:37.498757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.085  qpair failed and we were unable to recover it.
00:26:09.085  [2024-12-09 04:16:37.499054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.085  [2024-12-09 04:16:37.499117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.085  qpair failed and we were unable to recover it.
00:26:09.085  [2024-12-09 04:16:37.499404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.085  [2024-12-09 04:16:37.499467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.085  qpair failed and we were unable to recover it.
00:26:09.085  [2024-12-09 04:16:37.499691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.085  [2024-12-09 04:16:37.499755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.085  qpair failed and we were unable to recover it.
00:26:09.085  [2024-12-09 04:16:37.500042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.085  [2024-12-09 04:16:37.500104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.085  qpair failed and we were unable to recover it.
00:26:09.085  [2024-12-09 04:16:37.500399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.085  [2024-12-09 04:16:37.500464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.085  qpair failed and we were unable to recover it.
00:26:09.085  [2024-12-09 04:16:37.500721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.085  [2024-12-09 04:16:37.500784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.085  qpair failed and we were unable to recover it.
00:26:09.085  [2024-12-09 04:16:37.501003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.085  [2024-12-09 04:16:37.501065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.085  qpair failed and we were unable to recover it.
00:26:09.085  [2024-12-09 04:16:37.501310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.085  [2024-12-09 04:16:37.501374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.085  qpair failed and we were unable to recover it.
00:26:09.085  [2024-12-09 04:16:37.501635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.086  [2024-12-09 04:16:37.501698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.086  qpair failed and we were unable to recover it.
00:26:09.086  [2024-12-09 04:16:37.501940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.086  [2024-12-09 04:16:37.502002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.086  qpair failed and we were unable to recover it.
00:26:09.086  [2024-12-09 04:16:37.502207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.086  [2024-12-09 04:16:37.502269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.086  qpair failed and we were unable to recover it.
00:26:09.086  [2024-12-09 04:16:37.502597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.086  [2024-12-09 04:16:37.502660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.086  qpair failed and we were unable to recover it.
00:26:09.086  [2024-12-09 04:16:37.502857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.086  [2024-12-09 04:16:37.502923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.086  qpair failed and we were unable to recover it.
00:26:09.086  [2024-12-09 04:16:37.503210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.086  [2024-12-09 04:16:37.503304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.086  qpair failed and we were unable to recover it.
00:26:09.086  [2024-12-09 04:16:37.503555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.086  [2024-12-09 04:16:37.503618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.086  qpair failed and we were unable to recover it.
00:26:09.086  [2024-12-09 04:16:37.503885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.086  [2024-12-09 04:16:37.503948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.086  [2024-12-09 04:16:37.503961] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:26:09.086  qpair failed and we were unable to recover it.
00:26:09.086  [2024-12-09 04:16:37.504040] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:09.086  [2024-12-09 04:16:37.504142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.086  [2024-12-09 04:16:37.504202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.086  qpair failed and we were unable to recover it.
00:26:09.086  [2024-12-09 04:16:37.504470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.086  [2024-12-09 04:16:37.504531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.086  qpair failed and we were unable to recover it.
00:26:09.086  [2024-12-09 04:16:37.504723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.086  [2024-12-09 04:16:37.504784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.086  qpair failed and we were unable to recover it.
00:26:09.086  [2024-12-09 04:16:37.505005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.086  [2024-12-09 04:16:37.505065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.086  qpair failed and we were unable to recover it.
00:26:09.086  [2024-12-09 04:16:37.505351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.086  [2024-12-09 04:16:37.505413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.086  qpair failed and we were unable to recover it.
00:26:09.086  [2024-12-09 04:16:37.505660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.086  [2024-12-09 04:16:37.505724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.086  qpair failed and we were unable to recover it.
00:26:09.086  [2024-12-09 04:16:37.505961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.086  [2024-12-09 04:16:37.506024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.086  qpair failed and we were unable to recover it.
00:26:09.086  [2024-12-09 04:16:37.506266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.086  [2024-12-09 04:16:37.506343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.086  qpair failed and we were unable to recover it.
00:26:09.086  [2024-12-09 04:16:37.506610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.086  [2024-12-09 04:16:37.506673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.086  qpair failed and we were unable to recover it.
00:26:09.086  [2024-12-09 04:16:37.506920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.086  [2024-12-09 04:16:37.506984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.086  qpair failed and we were unable to recover it.
00:26:09.086  [2024-12-09 04:16:37.507291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.086  [2024-12-09 04:16:37.507355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.086  qpair failed and we were unable to recover it.
00:26:09.086  [2024-12-09 04:16:37.507643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.086  [2024-12-09 04:16:37.507706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.086  qpair failed and we were unable to recover it.
00:26:09.086  [2024-12-09 04:16:37.507930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.086  [2024-12-09 04:16:37.507994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.086  qpair failed and we were unable to recover it.
00:26:09.086  [2024-12-09 04:16:37.508208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.086  [2024-12-09 04:16:37.508270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.086  qpair failed and we were unable to recover it.
00:26:09.086  [2024-12-09 04:16:37.508569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.086  [2024-12-09 04:16:37.508633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.086  qpair failed and we were unable to recover it.
00:26:09.086  [2024-12-09 04:16:37.508838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.086  [2024-12-09 04:16:37.508903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.086  qpair failed and we were unable to recover it.
00:26:09.086  [2024-12-09 04:16:37.509196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.086  [2024-12-09 04:16:37.509258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.086  qpair failed and we were unable to recover it.
00:26:09.086  [2024-12-09 04:16:37.509582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.086  [2024-12-09 04:16:37.509645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.086  qpair failed and we were unable to recover it.
00:26:09.086  [2024-12-09 04:16:37.509892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.086  [2024-12-09 04:16:37.509956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.086  qpair failed and we were unable to recover it.
00:26:09.086  [2024-12-09 04:16:37.510231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.086  [2024-12-09 04:16:37.510327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.086  qpair failed and we were unable to recover it.
00:26:09.086  [2024-12-09 04:16:37.510612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.086  [2024-12-09 04:16:37.510675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.086  qpair failed and we were unable to recover it.
00:26:09.086  [2024-12-09 04:16:37.510919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.086  [2024-12-09 04:16:37.510983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.086  qpair failed and we were unable to recover it.
00:26:09.086  [2024-12-09 04:16:37.511263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.086  [2024-12-09 04:16:37.511346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.086  qpair failed and we were unable to recover it.
00:26:09.086  [2024-12-09 04:16:37.511531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.086  [2024-12-09 04:16:37.511605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.086  qpair failed and we were unable to recover it.
00:26:09.086  [2024-12-09 04:16:37.511888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.086  [2024-12-09 04:16:37.511951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.086  qpair failed and we were unable to recover it.
00:26:09.086  [2024-12-09 04:16:37.512207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.086  [2024-12-09 04:16:37.512270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.086  qpair failed and we were unable to recover it.
00:26:09.086  [2024-12-09 04:16:37.512501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.086  [2024-12-09 04:16:37.512563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.086  qpair failed and we were unable to recover it.
00:26:09.086  [2024-12-09 04:16:37.512835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.086  [2024-12-09 04:16:37.512898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.086  qpair failed and we were unable to recover it.
00:26:09.086  [2024-12-09 04:16:37.513156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.086  [2024-12-09 04:16:37.513220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.086  qpair failed and we were unable to recover it.
00:26:09.086  [2024-12-09 04:16:37.513474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.086  [2024-12-09 04:16:37.513539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.087  qpair failed and we were unable to recover it.
00:26:09.087  [2024-12-09 04:16:37.513836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.087  [2024-12-09 04:16:37.513900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.087  qpair failed and we were unable to recover it.
00:26:09.087  [2024-12-09 04:16:37.514096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.087  [2024-12-09 04:16:37.514170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.087  qpair failed and we were unable to recover it.
00:26:09.087  [2024-12-09 04:16:37.514408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.087  [2024-12-09 04:16:37.514472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.087  qpair failed and we were unable to recover it.
00:26:09.087  [2024-12-09 04:16:37.514766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.087  [2024-12-09 04:16:37.514829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.087  qpair failed and we were unable to recover it.
00:26:09.087  [2024-12-09 04:16:37.515101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.087  [2024-12-09 04:16:37.515165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.087  qpair failed and we were unable to recover it.
00:26:09.087  [2024-12-09 04:16:37.515467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.087  [2024-12-09 04:16:37.515532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.087  qpair failed and we were unable to recover it.
00:26:09.087  [2024-12-09 04:16:37.515786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.087  [2024-12-09 04:16:37.515850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.087  qpair failed and we were unable to recover it.
00:26:09.087  [2024-12-09 04:16:37.516104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.087  [2024-12-09 04:16:37.516167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.087  qpair failed and we were unable to recover it.
00:26:09.087  [2024-12-09 04:16:37.516413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.087  [2024-12-09 04:16:37.516478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.087  qpair failed and we were unable to recover it.
00:26:09.087  [2024-12-09 04:16:37.516720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.087  [2024-12-09 04:16:37.516783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.087  qpair failed and we were unable to recover it.
00:26:09.087  [2024-12-09 04:16:37.516989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.087  [2024-12-09 04:16:37.517051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.087  qpair failed and we were unable to recover it.
00:26:09.087  [2024-12-09 04:16:37.517329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.087  [2024-12-09 04:16:37.517395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.087  qpair failed and we were unable to recover it.
00:26:09.087  [2024-12-09 04:16:37.517632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.087  [2024-12-09 04:16:37.517696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.087  qpair failed and we were unable to recover it.
00:26:09.087  [2024-12-09 04:16:37.517907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.087  [2024-12-09 04:16:37.517971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.087  qpair failed and we were unable to recover it.
00:26:09.087  [2024-12-09 04:16:37.518234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.087  [2024-12-09 04:16:37.518327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.087  qpair failed and we were unable to recover it.
00:26:09.087  [2024-12-09 04:16:37.518581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.087  [2024-12-09 04:16:37.518646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.087  qpair failed and we were unable to recover it.
00:26:09.087  [2024-12-09 04:16:37.518885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.087  [2024-12-09 04:16:37.518947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.087  qpair failed and we were unable to recover it.
00:26:09.087  [2024-12-09 04:16:37.519229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.087  [2024-12-09 04:16:37.519307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.087  qpair failed and we were unable to recover it.
00:26:09.087  [2024-12-09 04:16:37.519561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.087  [2024-12-09 04:16:37.519628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.087  qpair failed and we were unable to recover it.
00:26:09.087  [2024-12-09 04:16:37.519924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.087  [2024-12-09 04:16:37.519986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.087  qpair failed and we were unable to recover it.
00:26:09.087  [2024-12-09 04:16:37.520195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.087  [2024-12-09 04:16:37.520258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.087  qpair failed and we were unable to recover it.
00:26:09.087  [2024-12-09 04:16:37.520566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.087  [2024-12-09 04:16:37.520630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.087  qpair failed and we were unable to recover it.
00:26:09.087  [2024-12-09 04:16:37.520867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.087  [2024-12-09 04:16:37.520931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.087  qpair failed and we were unable to recover it.
00:26:09.087  [2024-12-09 04:16:37.521163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.087  [2024-12-09 04:16:37.521225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.087  qpair failed and we were unable to recover it.
00:26:09.087  [2024-12-09 04:16:37.521489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.087  [2024-12-09 04:16:37.521553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.087  qpair failed and we were unable to recover it.
00:26:09.087  [2024-12-09 04:16:37.521783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.087  [2024-12-09 04:16:37.521846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.087  qpair failed and we were unable to recover it.
00:26:09.087  [2024-12-09 04:16:37.522084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.087  [2024-12-09 04:16:37.522147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.087  qpair failed and we were unable to recover it.
00:26:09.087  [2024-12-09 04:16:37.522394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.087  [2024-12-09 04:16:37.522459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.087  qpair failed and we were unable to recover it.
00:26:09.087  [2024-12-09 04:16:37.522698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.087  [2024-12-09 04:16:37.522761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.087  qpair failed and we were unable to recover it.
00:26:09.087  [2024-12-09 04:16:37.523011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.087  [2024-12-09 04:16:37.523074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.087  qpair failed and we were unable to recover it.
00:26:09.087  [2024-12-09 04:16:37.523327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.087  [2024-12-09 04:16:37.523392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.087  qpair failed and we were unable to recover it.
00:26:09.087  [2024-12-09 04:16:37.523645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.087  [2024-12-09 04:16:37.523708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.087  qpair failed and we were unable to recover it.
00:26:09.087  [2024-12-09 04:16:37.524003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.087  [2024-12-09 04:16:37.524066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.087  qpair failed and we were unable to recover it.
00:26:09.087  [2024-12-09 04:16:37.524349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.087  [2024-12-09 04:16:37.524414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.087  qpair failed and we were unable to recover it.
00:26:09.087  [2024-12-09 04:16:37.524714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.087  [2024-12-09 04:16:37.524778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.087  qpair failed and we were unable to recover it.
00:26:09.087  [2024-12-09 04:16:37.525061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.087  [2024-12-09 04:16:37.525124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.087  qpair failed and we were unable to recover it.
00:26:09.088  [... the same posix_sock_create connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error / "qpair failed and we were unable to recover it" sequence repeats for tqpair=0x1818fa0 (addr=10.0.0.2, port=4420) from 04:16:37.525 through 04:16:37.546 ...]
00:26:09.090  [2024-12-09 04:16:37.547084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.090  [2024-12-09 04:16:37.547123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.090  qpair failed and we were unable to recover it.
00:26:09.090  [... same error sequence repeats for tqpair=0x7fe5c0000b90 (addr=10.0.0.2, port=4420) from 04:16:37.547279 through 04:16:37.547809 ...]
00:26:09.090  [2024-12-09 04:16:37.547899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.090  [2024-12-09 04:16:37.547925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.090  qpair failed and we were unable to recover it.
00:26:09.090  [2024-12-09 04:16:37.548009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.090  [2024-12-09 04:16:37.548034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.090  qpair failed and we were unable to recover it.
00:26:09.090  [2024-12-09 04:16:37.548153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.090  [2024-12-09 04:16:37.548179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.090  qpair failed and we were unable to recover it.
00:26:09.090  [2024-12-09 04:16:37.548268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.090  [2024-12-09 04:16:37.548301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.090  qpair failed and we were unable to recover it.
00:26:09.090  [2024-12-09 04:16:37.548379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.090  [2024-12-09 04:16:37.548404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.090  qpair failed and we were unable to recover it.
00:26:09.090  [2024-12-09 04:16:37.548488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.090  [2024-12-09 04:16:37.548513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.090  qpair failed and we were unable to recover it.
00:26:09.090  [2024-12-09 04:16:37.548605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.090  [2024-12-09 04:16:37.548629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.090  qpair failed and we were unable to recover it.
00:26:09.090  [2024-12-09 04:16:37.548742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.090  [2024-12-09 04:16:37.548771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.090  qpair failed and we were unable to recover it.
00:26:09.090  [2024-12-09 04:16:37.548884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.090  [2024-12-09 04:16:37.548910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.090  qpair failed and we were unable to recover it.
00:26:09.090  [2024-12-09 04:16:37.549028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.090  [2024-12-09 04:16:37.549053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.090  qpair failed and we were unable to recover it.
00:26:09.090  [2024-12-09 04:16:37.549172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.090  [2024-12-09 04:16:37.549198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.090  qpair failed and we were unable to recover it.
00:26:09.090  [2024-12-09 04:16:37.549305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.090  [2024-12-09 04:16:37.549332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.090  qpair failed and we were unable to recover it.
00:26:09.090  [2024-12-09 04:16:37.549442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.090  [2024-12-09 04:16:37.549467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.090  qpair failed and we were unable to recover it.
00:26:09.090  [2024-12-09 04:16:37.549572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.091  [2024-12-09 04:16:37.549598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.091  qpair failed and we were unable to recover it.
00:26:09.091  [2024-12-09 04:16:37.549687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.091  [2024-12-09 04:16:37.549713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.091  qpair failed and we were unable to recover it.
00:26:09.091  [2024-12-09 04:16:37.549798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.091  [2024-12-09 04:16:37.549823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.091  qpair failed and we were unable to recover it.
00:26:09.091  [2024-12-09 04:16:37.549908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.091  [2024-12-09 04:16:37.549933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.091  qpair failed and we were unable to recover it.
00:26:09.091  [2024-12-09 04:16:37.550073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.091  [2024-12-09 04:16:37.550099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.091  qpair failed and we were unable to recover it.
00:26:09.091  [2024-12-09 04:16:37.550177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.091  [2024-12-09 04:16:37.550202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.091  qpair failed and we were unable to recover it.
00:26:09.091  [2024-12-09 04:16:37.550305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.091  [2024-12-09 04:16:37.550331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.091  qpair failed and we were unable to recover it.
00:26:09.091  [2024-12-09 04:16:37.550475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.091  [2024-12-09 04:16:37.550501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.091  qpair failed and we were unable to recover it.
00:26:09.091  [2024-12-09 04:16:37.550638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.091  [2024-12-09 04:16:37.550676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.091  qpair failed and we were unable to recover it.
00:26:09.091  [2024-12-09 04:16:37.550797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.091  [2024-12-09 04:16:37.550824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.091  qpair failed and we were unable to recover it.
00:26:09.091  [2024-12-09 04:16:37.550941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.091  [2024-12-09 04:16:37.550968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.091  qpair failed and we were unable to recover it.
00:26:09.091  [2024-12-09 04:16:37.551049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.091  [2024-12-09 04:16:37.551075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.091  qpair failed and we were unable to recover it.
00:26:09.091  [2024-12-09 04:16:37.551182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.091  [2024-12-09 04:16:37.551208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.091  qpair failed and we were unable to recover it.
00:26:09.091  [2024-12-09 04:16:37.551342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.091  [2024-12-09 04:16:37.551369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.091  qpair failed and we were unable to recover it.
00:26:09.091  [2024-12-09 04:16:37.551455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.091  [2024-12-09 04:16:37.551481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.091  qpair failed and we were unable to recover it.
00:26:09.091  [2024-12-09 04:16:37.551594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.091  [2024-12-09 04:16:37.551620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.091  qpair failed and we were unable to recover it.
00:26:09.091  [2024-12-09 04:16:37.551747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.091  [2024-12-09 04:16:37.551773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.091  qpair failed and we were unable to recover it.
00:26:09.091  [2024-12-09 04:16:37.551863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.091  [2024-12-09 04:16:37.551890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.091  qpair failed and we were unable to recover it.
00:26:09.091  [2024-12-09 04:16:37.551978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.091  [2024-12-09 04:16:37.552003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.091  qpair failed and we were unable to recover it.
00:26:09.091  [2024-12-09 04:16:37.552097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.091  [2024-12-09 04:16:37.552123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.091  qpair failed and we were unable to recover it.
00:26:09.091  [2024-12-09 04:16:37.552263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.091  [2024-12-09 04:16:37.552293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.091  qpair failed and we were unable to recover it.
00:26:09.091  [2024-12-09 04:16:37.552369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.091  [2024-12-09 04:16:37.552399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.091  qpair failed and we were unable to recover it.
00:26:09.091  [2024-12-09 04:16:37.552491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.091  [2024-12-09 04:16:37.552517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.091  qpair failed and we were unable to recover it.
00:26:09.091  [2024-12-09 04:16:37.552662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.091  [2024-12-09 04:16:37.552687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.091  qpair failed and we were unable to recover it.
00:26:09.091  [2024-12-09 04:16:37.552801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.091  [2024-12-09 04:16:37.552828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.091  qpair failed and we were unable to recover it.
00:26:09.091  [2024-12-09 04:16:37.552914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.091  [2024-12-09 04:16:37.552940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.091  qpair failed and we were unable to recover it.
00:26:09.091  [2024-12-09 04:16:37.553038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.091  [2024-12-09 04:16:37.553066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.091  qpair failed and we were unable to recover it.
00:26:09.091  [2024-12-09 04:16:37.553181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.091  [2024-12-09 04:16:37.553208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.091  qpair failed and we were unable to recover it.
00:26:09.091  [2024-12-09 04:16:37.553348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.091  [2024-12-09 04:16:37.553375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.091  qpair failed and we were unable to recover it.
00:26:09.091  [2024-12-09 04:16:37.553457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.091  [2024-12-09 04:16:37.553482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.091  qpair failed and we were unable to recover it.
00:26:09.091  [2024-12-09 04:16:37.553581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.091  [2024-12-09 04:16:37.553607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.091  qpair failed and we were unable to recover it.
00:26:09.091  [2024-12-09 04:16:37.553698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.091  [2024-12-09 04:16:37.553726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.091  qpair failed and we were unable to recover it.
00:26:09.091  [2024-12-09 04:16:37.553832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.091  [2024-12-09 04:16:37.553858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.091  qpair failed and we were unable to recover it.
00:26:09.091  [2024-12-09 04:16:37.553961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.091  [2024-12-09 04:16:37.553987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.091  qpair failed and we were unable to recover it.
00:26:09.091  [2024-12-09 04:16:37.554100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.091  [2024-12-09 04:16:37.554127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.091  qpair failed and we were unable to recover it.
00:26:09.091  [2024-12-09 04:16:37.554213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.091  [2024-12-09 04:16:37.554240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.091  qpair failed and we were unable to recover it.
00:26:09.091  [2024-12-09 04:16:37.554337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.091  [2024-12-09 04:16:37.554364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.091  qpair failed and we were unable to recover it.
00:26:09.091  [2024-12-09 04:16:37.554477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.091  [2024-12-09 04:16:37.554503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.091  qpair failed and we were unable to recover it.
00:26:09.091  [2024-12-09 04:16:37.554640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.092  [2024-12-09 04:16:37.554665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.092  qpair failed and we were unable to recover it.
00:26:09.092  [2024-12-09 04:16:37.554776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.092  [2024-12-09 04:16:37.554801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.092  qpair failed and we were unable to recover it.
00:26:09.092  [2024-12-09 04:16:37.554914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.092  [2024-12-09 04:16:37.554940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.092  qpair failed and we were unable to recover it.
00:26:09.092  [2024-12-09 04:16:37.555053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.092  [2024-12-09 04:16:37.555080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.092  qpair failed and we were unable to recover it.
00:26:09.092  [2024-12-09 04:16:37.555191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.092  [2024-12-09 04:16:37.555217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.092  qpair failed and we were unable to recover it.
00:26:09.092  [2024-12-09 04:16:37.555349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.092  [2024-12-09 04:16:37.555376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.092  qpair failed and we were unable to recover it.
00:26:09.092  [2024-12-09 04:16:37.555467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.092  [2024-12-09 04:16:37.555493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.092  qpair failed and we were unable to recover it.
00:26:09.092  [2024-12-09 04:16:37.555622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.092  [2024-12-09 04:16:37.555648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.092  qpair failed and we were unable to recover it.
00:26:09.092  [2024-12-09 04:16:37.555742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.092  [2024-12-09 04:16:37.555768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.092  qpair failed and we were unable to recover it.
00:26:09.092  [2024-12-09 04:16:37.555851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.092  [2024-12-09 04:16:37.555878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.092  qpair failed and we were unable to recover it.
00:26:09.092  [2024-12-09 04:16:37.555967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.092  [2024-12-09 04:16:37.555996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.092  qpair failed and we were unable to recover it.
00:26:09.092  [2024-12-09 04:16:37.556085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.092  [2024-12-09 04:16:37.556111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.092  qpair failed and we were unable to recover it.
00:26:09.092  [2024-12-09 04:16:37.556217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.092  [2024-12-09 04:16:37.556243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.092  qpair failed and we were unable to recover it.
00:26:09.092  [2024-12-09 04:16:37.556334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.092  [2024-12-09 04:16:37.556360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.092  qpair failed and we were unable to recover it.
00:26:09.092  [2024-12-09 04:16:37.556471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.092  [2024-12-09 04:16:37.556497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.092  qpair failed and we were unable to recover it.
00:26:09.092  [2024-12-09 04:16:37.556635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.092  [2024-12-09 04:16:37.556660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.092  qpair failed and we were unable to recover it.
00:26:09.092  [2024-12-09 04:16:37.556775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.092  [2024-12-09 04:16:37.556800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.092  qpair failed and we were unable to recover it.
00:26:09.092  [2024-12-09 04:16:37.556922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.092  [2024-12-09 04:16:37.556947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.092  qpair failed and we were unable to recover it.
00:26:09.092  [2024-12-09 04:16:37.557031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.092  [2024-12-09 04:16:37.557058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.092  qpair failed and we were unable to recover it.
00:26:09.092  [2024-12-09 04:16:37.557176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.092  [2024-12-09 04:16:37.557201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.092  qpair failed and we were unable to recover it.
00:26:09.092  [2024-12-09 04:16:37.557356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.092  [2024-12-09 04:16:37.557382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.092  qpair failed and we were unable to recover it.
00:26:09.092  [2024-12-09 04:16:37.557471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.092  [2024-12-09 04:16:37.557498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.092  qpair failed and we were unable to recover it.
00:26:09.092  [2024-12-09 04:16:37.557620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.092  [2024-12-09 04:16:37.557647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.092  qpair failed and we were unable to recover it.
00:26:09.092  [2024-12-09 04:16:37.557760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.092  [2024-12-09 04:16:37.557786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.092  qpair failed and we were unable to recover it.
00:26:09.092  [2024-12-09 04:16:37.557900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.092  [2024-12-09 04:16:37.557926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.092  qpair failed and we were unable to recover it.
00:26:09.092  [2024-12-09 04:16:37.558017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.092  [2024-12-09 04:16:37.558043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.092  qpair failed and we were unable to recover it.
00:26:09.092  [2024-12-09 04:16:37.558128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.092  [2024-12-09 04:16:37.558153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.092  qpair failed and we were unable to recover it.
00:26:09.092  [2024-12-09 04:16:37.558240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.092  [2024-12-09 04:16:37.558267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.092  qpair failed and we were unable to recover it.
00:26:09.092  [2024-12-09 04:16:37.558363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.092  [2024-12-09 04:16:37.558389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.092  qpair failed and we were unable to recover it.
00:26:09.092  [2024-12-09 04:16:37.558501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.092  [2024-12-09 04:16:37.558527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.092  qpair failed and we were unable to recover it.
00:26:09.092  [2024-12-09 04:16:37.558635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.092  [2024-12-09 04:16:37.558660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.092  qpair failed and we were unable to recover it.
00:26:09.092  [2024-12-09 04:16:37.558773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.092  [2024-12-09 04:16:37.558798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.092  qpair failed and we were unable to recover it.
00:26:09.092  [2024-12-09 04:16:37.558881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.092  [2024-12-09 04:16:37.558908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.092  qpair failed and we were unable to recover it.
00:26:09.092  [2024-12-09 04:16:37.558983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.092  [2024-12-09 04:16:37.559008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.092  qpair failed and we were unable to recover it.
00:26:09.092  [2024-12-09 04:16:37.559121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.092  [2024-12-09 04:16:37.559146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.092  qpair failed and we were unable to recover it.
00:26:09.092  [2024-12-09 04:16:37.559242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.092  [2024-12-09 04:16:37.559269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.092  qpair failed and we were unable to recover it.
00:26:09.092  [2024-12-09 04:16:37.559359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.092  [2024-12-09 04:16:37.559384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.093  qpair failed and we were unable to recover it.
00:26:09.093  [2024-12-09 04:16:37.559500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.093  [2024-12-09 04:16:37.559530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.093  qpair failed and we were unable to recover it.
00:26:09.093  [2024-12-09 04:16:37.559619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.093  [2024-12-09 04:16:37.559644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.093  qpair failed and we were unable to recover it.
00:26:09.093  [2024-12-09 04:16:37.559760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.093  [2024-12-09 04:16:37.559785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.093  qpair failed and we were unable to recover it.
00:26:09.093  [2024-12-09 04:16:37.559869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.093  [2024-12-09 04:16:37.559894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.093  qpair failed and we were unable to recover it.
00:26:09.093  [2024-12-09 04:16:37.559983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.093  [2024-12-09 04:16:37.560009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.093  qpair failed and we were unable to recover it.
00:26:09.093  [2024-12-09 04:16:37.560116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.093  [2024-12-09 04:16:37.560142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.093  qpair failed and we were unable to recover it.
00:26:09.093  [2024-12-09 04:16:37.560221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.093  [2024-12-09 04:16:37.560247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.093  qpair failed and we were unable to recover it.
00:26:09.093  [2024-12-09 04:16:37.560370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.093  [2024-12-09 04:16:37.560397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.093  qpair failed and we were unable to recover it.
00:26:09.093  [2024-12-09 04:16:37.560534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.093  [2024-12-09 04:16:37.560560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.093  qpair failed and we were unable to recover it.
00:26:09.093  [2024-12-09 04:16:37.560668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.093  [2024-12-09 04:16:37.560693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.093  qpair failed and we were unable to recover it.
00:26:09.093  [2024-12-09 04:16:37.560786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.093  [2024-12-09 04:16:37.560812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.093  qpair failed and we were unable to recover it.
00:26:09.093  [2024-12-09 04:16:37.560889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.093  [2024-12-09 04:16:37.560914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.093  qpair failed and we were unable to recover it.
00:26:09.093  [2024-12-09 04:16:37.560991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.093  [2024-12-09 04:16:37.561016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.093  qpair failed and we were unable to recover it.
00:26:09.093  [2024-12-09 04:16:37.561106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.093  [2024-12-09 04:16:37.561132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.093  qpair failed and we were unable to recover it.
00:26:09.093  [2024-12-09 04:16:37.561224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.093  [2024-12-09 04:16:37.561252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.093  qpair failed and we were unable to recover it.
00:26:09.093  [2024-12-09 04:16:37.561348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.093  [2024-12-09 04:16:37.561375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.093  qpair failed and we were unable to recover it.
00:26:09.093  [2024-12-09 04:16:37.561460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.093  [2024-12-09 04:16:37.561487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.093  qpair failed and we were unable to recover it.
00:26:09.093  [2024-12-09 04:16:37.561607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.093  [2024-12-09 04:16:37.561633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.093  qpair failed and we were unable to recover it.
00:26:09.093  [2024-12-09 04:16:37.561720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.093  [2024-12-09 04:16:37.561746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.093  qpair failed and we were unable to recover it.
00:26:09.093  [2024-12-09 04:16:37.561837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.093  [2024-12-09 04:16:37.561864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.093  qpair failed and we were unable to recover it.
00:26:09.093  [2024-12-09 04:16:37.561969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.093  [2024-12-09 04:16:37.561994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.093  qpair failed and we were unable to recover it.
00:26:09.093  [2024-12-09 04:16:37.562112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.093  [2024-12-09 04:16:37.562137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.093  qpair failed and we were unable to recover it.
00:26:09.093  [2024-12-09 04:16:37.562252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.093  [2024-12-09 04:16:37.562288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.093  qpair failed and we were unable to recover it.
00:26:09.093  [2024-12-09 04:16:37.562374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.093  [2024-12-09 04:16:37.562399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.093  qpair failed and we were unable to recover it.
00:26:09.093  [2024-12-09 04:16:37.562513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.093  [2024-12-09 04:16:37.562538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.093  qpair failed and we were unable to recover it.
00:26:09.093  [2024-12-09 04:16:37.562656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.093  [2024-12-09 04:16:37.562682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.093  qpair failed and we were unable to recover it.
00:26:09.093  [2024-12-09 04:16:37.562769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.093  [2024-12-09 04:16:37.562795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.093  qpair failed and we were unable to recover it.
00:26:09.093  [2024-12-09 04:16:37.562902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.093  [2024-12-09 04:16:37.562933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.093  qpair failed and we were unable to recover it.
00:26:09.093  [2024-12-09 04:16:37.563022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.093  [2024-12-09 04:16:37.563048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.093  qpair failed and we were unable to recover it.
00:26:09.093  [2024-12-09 04:16:37.563158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.093  [2024-12-09 04:16:37.563183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.093  qpair failed and we were unable to recover it.
00:26:09.093  [2024-12-09 04:16:37.563265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.093  [2024-12-09 04:16:37.563300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.093  qpair failed and we were unable to recover it.
00:26:09.093  [2024-12-09 04:16:37.563389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.093  [2024-12-09 04:16:37.563416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.093  qpair failed and we were unable to recover it.
00:26:09.093  [2024-12-09 04:16:37.563492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.093  [2024-12-09 04:16:37.563517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.093  qpair failed and we were unable to recover it.
00:26:09.093  [2024-12-09 04:16:37.563599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.093  [2024-12-09 04:16:37.563626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.093  qpair failed and we were unable to recover it.
00:26:09.093  [2024-12-09 04:16:37.563742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.093  [2024-12-09 04:16:37.563769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.093  qpair failed and we were unable to recover it.
00:26:09.093  [2024-12-09 04:16:37.563914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.094  [2024-12-09 04:16:37.563939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.094  qpair failed and we were unable to recover it.
00:26:09.094  [2024-12-09 04:16:37.564023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.094  [2024-12-09 04:16:37.564050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.094  qpair failed and we were unable to recover it.
00:26:09.094  [2024-12-09 04:16:37.564127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.094  [2024-12-09 04:16:37.564153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.094  qpair failed and we were unable to recover it.
00:26:09.094  [2024-12-09 04:16:37.564258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.094  [2024-12-09 04:16:37.564291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.094  qpair failed and we were unable to recover it.
00:26:09.094  [2024-12-09 04:16:37.564406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.094  [2024-12-09 04:16:37.564432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.094  qpair failed and we were unable to recover it.
00:26:09.094  [2024-12-09 04:16:37.564513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.094  [2024-12-09 04:16:37.564540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.094  qpair failed and we were unable to recover it.
00:26:09.094  [2024-12-09 04:16:37.564624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.094  [2024-12-09 04:16:37.564650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.094  qpair failed and we were unable to recover it.
00:26:09.094  [2024-12-09 04:16:37.564758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.094  [2024-12-09 04:16:37.564784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.094  qpair failed and we were unable to recover it.
00:26:09.094  [2024-12-09 04:16:37.564900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.094  [2024-12-09 04:16:37.564926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.094  qpair failed and we were unable to recover it.
00:26:09.094  [2024-12-09 04:16:37.565012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.094  [2024-12-09 04:16:37.565037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.094  qpair failed and we were unable to recover it.
00:26:09.094  [2024-12-09 04:16:37.565112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.094  [2024-12-09 04:16:37.565137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.094  qpair failed and we were unable to recover it.
00:26:09.094  [2024-12-09 04:16:37.565228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.094  [2024-12-09 04:16:37.565255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.094  qpair failed and we were unable to recover it.
00:26:09.094  [2024-12-09 04:16:37.565393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.094  [2024-12-09 04:16:37.565432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.094  qpair failed and we were unable to recover it.
00:26:09.094  [2024-12-09 04:16:37.565549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.094  [2024-12-09 04:16:37.565575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.094  qpair failed and we were unable to recover it.
00:26:09.094  [2024-12-09 04:16:37.565691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.094  [2024-12-09 04:16:37.565717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.094  qpair failed and we were unable to recover it.
00:26:09.094  [2024-12-09 04:16:37.565795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.094  [2024-12-09 04:16:37.565821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.094  qpair failed and we were unable to recover it.
00:26:09.094  [2024-12-09 04:16:37.565939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.094  [2024-12-09 04:16:37.565964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.094  qpair failed and we were unable to recover it.
00:26:09.094  [2024-12-09 04:16:37.566057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.094  [2024-12-09 04:16:37.566085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.094  qpair failed and we were unable to recover it.
00:26:09.094  [2024-12-09 04:16:37.566163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.094  [2024-12-09 04:16:37.566189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.094  qpair failed and we were unable to recover it.
00:26:09.094  [2024-12-09 04:16:37.566317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.094  [2024-12-09 04:16:37.566356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.094  qpair failed and we were unable to recover it.
00:26:09.094  [2024-12-09 04:16:37.566476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.094  [2024-12-09 04:16:37.566504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.094  qpair failed and we were unable to recover it.
00:26:09.094  [2024-12-09 04:16:37.566620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.094  [2024-12-09 04:16:37.566648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.094  qpair failed and we were unable to recover it.
00:26:09.094  [2024-12-09 04:16:37.566763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.094  [2024-12-09 04:16:37.566790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.094  qpair failed and we were unable to recover it.
00:26:09.094  [2024-12-09 04:16:37.566913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.094  [2024-12-09 04:16:37.566939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.094  qpair failed and we were unable to recover it.
00:26:09.094  [2024-12-09 04:16:37.567056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.094  [2024-12-09 04:16:37.567084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.094  qpair failed and we were unable to recover it.
00:26:09.094  [2024-12-09 04:16:37.567181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.094  [2024-12-09 04:16:37.567208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.094  qpair failed and we were unable to recover it.
00:26:09.094  [2024-12-09 04:16:37.567328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.094  [2024-12-09 04:16:37.567354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.094  qpair failed and we were unable to recover it.
00:26:09.094  [2024-12-09 04:16:37.567466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.094  [2024-12-09 04:16:37.567492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.094  qpair failed and we were unable to recover it.
00:26:09.094  [2024-12-09 04:16:37.567576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.094  [2024-12-09 04:16:37.567602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.094  qpair failed and we were unable to recover it.
00:26:09.094  [2024-12-09 04:16:37.567691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.094  [2024-12-09 04:16:37.567717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.094  qpair failed and we were unable to recover it.
00:26:09.094  [2024-12-09 04:16:37.567822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.094  [2024-12-09 04:16:37.567848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.094  qpair failed and we were unable to recover it.
00:26:09.094  [2024-12-09 04:16:37.567929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.094  [2024-12-09 04:16:37.567957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.094  qpair failed and we were unable to recover it.
00:26:09.094  [2024-12-09 04:16:37.568039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.094  [2024-12-09 04:16:37.568070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.094  qpair failed and we were unable to recover it.
00:26:09.094  [2024-12-09 04:16:37.568155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.094  [2024-12-09 04:16:37.568181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.094  qpair failed and we were unable to recover it.
00:26:09.094  [2024-12-09 04:16:37.568297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.094  [2024-12-09 04:16:37.568323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.094  qpair failed and we were unable to recover it.
00:26:09.094  [2024-12-09 04:16:37.568431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.094  [2024-12-09 04:16:37.568457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.094  qpair failed and we were unable to recover it.
00:26:09.094  [2024-12-09 04:16:37.568573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.094  [2024-12-09 04:16:37.568601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.094  qpair failed and we were unable to recover it.
00:26:09.094  [2024-12-09 04:16:37.568708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.094  [2024-12-09 04:16:37.568735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.094  qpair failed and we were unable to recover it.
00:26:09.094  [2024-12-09 04:16:37.568842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.095  [2024-12-09 04:16:37.568867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.095  qpair failed and we were unable to recover it.
00:26:09.095  [2024-12-09 04:16:37.568976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.095  [2024-12-09 04:16:37.569001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.095  qpair failed and we were unable to recover it.
00:26:09.095  [2024-12-09 04:16:37.569080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.095  [2024-12-09 04:16:37.569105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.095  qpair failed and we were unable to recover it.
00:26:09.095  [2024-12-09 04:16:37.569193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.095  [2024-12-09 04:16:37.569219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.095  qpair failed and we were unable to recover it.
00:26:09.095  [2024-12-09 04:16:37.569315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.095  [2024-12-09 04:16:37.569342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.095  qpair failed and we were unable to recover it.
00:26:09.095  [2024-12-09 04:16:37.569449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.095  [2024-12-09 04:16:37.569475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.095  qpair failed and we were unable to recover it.
00:26:09.095  [2024-12-09 04:16:37.569617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.095  [2024-12-09 04:16:37.569643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.095  qpair failed and we were unable to recover it.
00:26:09.095  [2024-12-09 04:16:37.569772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.095  [2024-12-09 04:16:37.569800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.095  qpair failed and we were unable to recover it.
00:26:09.095  [2024-12-09 04:16:37.569907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.095  [2024-12-09 04:16:37.569933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.095  qpair failed and we were unable to recover it.
00:26:09.095  [2024-12-09 04:16:37.570054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.095  [2024-12-09 04:16:37.570081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.095  qpair failed and we were unable to recover it.
00:26:09.095  [2024-12-09 04:16:37.570173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.095  [2024-12-09 04:16:37.570199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.095  qpair failed and we were unable to recover it.
00:26:09.095  [2024-12-09 04:16:37.570286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.095  [2024-12-09 04:16:37.570312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.095  qpair failed and we were unable to recover it.
00:26:09.095  [2024-12-09 04:16:37.570434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.095  [2024-12-09 04:16:37.570460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.095  qpair failed and we were unable to recover it.
00:26:09.095  [2024-12-09 04:16:37.570570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.095  [2024-12-09 04:16:37.570595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.095  qpair failed and we were unable to recover it.
00:26:09.095  [2024-12-09 04:16:37.570674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.095  [2024-12-09 04:16:37.570699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.095  qpair failed and we were unable to recover it.
00:26:09.095  [2024-12-09 04:16:37.570791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.095  [2024-12-09 04:16:37.570817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.095  qpair failed and we were unable to recover it.
00:26:09.095  [2024-12-09 04:16:37.570902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.095  [2024-12-09 04:16:37.570929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.095  qpair failed and we were unable to recover it.
00:26:09.095  [2024-12-09 04:16:37.571066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.095  [2024-12-09 04:16:37.571092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.095  qpair failed and we were unable to recover it.
00:26:09.095  [2024-12-09 04:16:37.571184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.095  [2024-12-09 04:16:37.571214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.095  qpair failed and we were unable to recover it.
00:26:09.095  [2024-12-09 04:16:37.571325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.095  [2024-12-09 04:16:37.571352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.095  qpair failed and we were unable to recover it.
00:26:09.095  [2024-12-09 04:16:37.571432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.095  [2024-12-09 04:16:37.571458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.095  qpair failed and we were unable to recover it.
00:26:09.095  [2024-12-09 04:16:37.571571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.095  [2024-12-09 04:16:37.571601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.095  qpair failed and we were unable to recover it.
00:26:09.095  [2024-12-09 04:16:37.571686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.095  [2024-12-09 04:16:37.571711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.095  qpair failed and we were unable to recover it.
00:26:09.095  [2024-12-09 04:16:37.571849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.095  [2024-12-09 04:16:37.571875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.095  qpair failed and we were unable to recover it.
00:26:09.095  [2024-12-09 04:16:37.571990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.095  [2024-12-09 04:16:37.572015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.095  qpair failed and we were unable to recover it.
00:26:09.095  [2024-12-09 04:16:37.572104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.095  [2024-12-09 04:16:37.572144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.095  qpair failed and we were unable to recover it.
00:26:09.095  [2024-12-09 04:16:37.572248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.095  [2024-12-09 04:16:37.572295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.095  qpair failed and we were unable to recover it.
00:26:09.095  [2024-12-09 04:16:37.572397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.095  [2024-12-09 04:16:37.572426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.095  qpair failed and we were unable to recover it.
00:26:09.095  [2024-12-09 04:16:37.572501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.095  [2024-12-09 04:16:37.572527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.095  qpair failed and we were unable to recover it.
00:26:09.095  [2024-12-09 04:16:37.572606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.095  [2024-12-09 04:16:37.572632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.095  qpair failed and we were unable to recover it.
00:26:09.095  [2024-12-09 04:16:37.572732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.095  [2024-12-09 04:16:37.572759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.095  qpair failed and we were unable to recover it.
00:26:09.095  [2024-12-09 04:16:37.572867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.095  [2024-12-09 04:16:37.572894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.095  qpair failed and we were unable to recover it.
00:26:09.095  [2024-12-09 04:16:37.572999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.095  [2024-12-09 04:16:37.573024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.095  qpair failed and we were unable to recover it.
00:26:09.095  [2024-12-09 04:16:37.573100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.095  [2024-12-09 04:16:37.573127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.095  qpair failed and we were unable to recover it.
00:26:09.095  [2024-12-09 04:16:37.573216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.095  [2024-12-09 04:16:37.573243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.095  qpair failed and we were unable to recover it.
00:26:09.095  [2024-12-09 04:16:37.573353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.095  [2024-12-09 04:16:37.573383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.095  qpair failed and we were unable to recover it.
00:26:09.095  [2024-12-09 04:16:37.573486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.095  [2024-12-09 04:16:37.573512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.095  qpair failed and we were unable to recover it.
00:26:09.095  [2024-12-09 04:16:37.573604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.095  [2024-12-09 04:16:37.573639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.096  qpair failed and we were unable to recover it.
00:26:09.096  [2024-12-09 04:16:37.573756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.096  [2024-12-09 04:16:37.573789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.096  qpair failed and we were unable to recover it.
00:26:09.096  [2024-12-09 04:16:37.573902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.096  [2024-12-09 04:16:37.573935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.096  qpair failed and we were unable to recover it.
00:26:09.096  [2024-12-09 04:16:37.574030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.096  [2024-12-09 04:16:37.574059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.096  qpair failed and we were unable to recover it.
00:26:09.096  [2024-12-09 04:16:37.574169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.096  [2024-12-09 04:16:37.574196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.096  qpair failed and we were unable to recover it.
00:26:09.096  [2024-12-09 04:16:37.574294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.096  [2024-12-09 04:16:37.574329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.096  qpair failed and we were unable to recover it.
00:26:09.096  [2024-12-09 04:16:37.574435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.096  [2024-12-09 04:16:37.574462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.096  qpair failed and we were unable to recover it.
00:26:09.096  [2024-12-09 04:16:37.574557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.096  [2024-12-09 04:16:37.574590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.096  qpair failed and we were unable to recover it.
00:26:09.096  [2024-12-09 04:16:37.574717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.096  [2024-12-09 04:16:37.574753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.096  qpair failed and we were unable to recover it.
00:26:09.096  [2024-12-09 04:16:37.574841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.096  [2024-12-09 04:16:37.574879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.096  qpair failed and we were unable to recover it.
00:26:09.096  [2024-12-09 04:16:37.574976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.096  [2024-12-09 04:16:37.575002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.096  qpair failed and we were unable to recover it.
00:26:09.096  [2024-12-09 04:16:37.575096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.096  [2024-12-09 04:16:37.575123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.096  qpair failed and we were unable to recover it.
00:26:09.096  [2024-12-09 04:16:37.575224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.096  [2024-12-09 04:16:37.575251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.096  qpair failed and we were unable to recover it.
00:26:09.096  [2024-12-09 04:16:37.575346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.096  [2024-12-09 04:16:37.575373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.096  qpair failed and we were unable to recover it.
00:26:09.096  [2024-12-09 04:16:37.575461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.096  [2024-12-09 04:16:37.575487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.096  qpair failed and we were unable to recover it.
00:26:09.096  [2024-12-09 04:16:37.575574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.096  [2024-12-09 04:16:37.575600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.096  qpair failed and we were unable to recover it.
00:26:09.096  [2024-12-09 04:16:37.575694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.096  [2024-12-09 04:16:37.575721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.096  qpair failed and we were unable to recover it.
00:26:09.096  [2024-12-09 04:16:37.575812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.096  [2024-12-09 04:16:37.575838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.096  qpair failed and we were unable to recover it.
00:26:09.096  [2024-12-09 04:16:37.575927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.096  [2024-12-09 04:16:37.575961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.096  qpair failed and we were unable to recover it.
00:26:09.096  [2024-12-09 04:16:37.576061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.096  [2024-12-09 04:16:37.576092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.096  qpair failed and we were unable to recover it.
00:26:09.096  [2024-12-09 04:16:37.576182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.096  [2024-12-09 04:16:37.576208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.096  qpair failed and we were unable to recover it.
00:26:09.096  [2024-12-09 04:16:37.576309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.096  [2024-12-09 04:16:37.576337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.096  qpair failed and we were unable to recover it.
00:26:09.096  [2024-12-09 04:16:37.576428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.096  [2024-12-09 04:16:37.576454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.096  qpair failed and we were unable to recover it.
00:26:09.096  [2024-12-09 04:16:37.576569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.096  [2024-12-09 04:16:37.576595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.096  qpair failed and we were unable to recover it.
00:26:09.096  [2024-12-09 04:16:37.576685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.096  [2024-12-09 04:16:37.576716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.096  qpair failed and we were unable to recover it.
00:26:09.096  [2024-12-09 04:16:37.576826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.096  [2024-12-09 04:16:37.576852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.096  qpair failed and we were unable to recover it.
00:26:09.096  [2024-12-09 04:16:37.576932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.096  [2024-12-09 04:16:37.576959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.096  qpair failed and we were unable to recover it.
00:26:09.096  [2024-12-09 04:16:37.577070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.096  [2024-12-09 04:16:37.577096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.096  qpair failed and we were unable to recover it.
00:26:09.096  [2024-12-09 04:16:37.577184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.096  [2024-12-09 04:16:37.577213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.096  qpair failed and we were unable to recover it.
00:26:09.096  [2024-12-09 04:16:37.577307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.096  [2024-12-09 04:16:37.577336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.096  qpair failed and we were unable to recover it.
00:26:09.096  [2024-12-09 04:16:37.577452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.096  [2024-12-09 04:16:37.577480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.096  qpair failed and we were unable to recover it.
00:26:09.096  [2024-12-09 04:16:37.577589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.096  [2024-12-09 04:16:37.577614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.096  qpair failed and we were unable to recover it.
00:26:09.096  [2024-12-09 04:16:37.577693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.096  [2024-12-09 04:16:37.577719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.096  qpair failed and we were unable to recover it.
00:26:09.096  [2024-12-09 04:16:37.577830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.096  [2024-12-09 04:16:37.577855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.096  qpair failed and we were unable to recover it.
00:26:09.096  [2024-12-09 04:16:37.577993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.096  [2024-12-09 04:16:37.578018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.096  qpair failed and we were unable to recover it.
00:26:09.096  [2024-12-09 04:16:37.578097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.096  [2024-12-09 04:16:37.578124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.096  qpair failed and we were unable to recover it.
00:26:09.096  [2024-12-09 04:16:37.578237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.096  [2024-12-09 04:16:37.578264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.096  qpair failed and we were unable to recover it.
00:26:09.096  [2024-12-09 04:16:37.578353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.096  [2024-12-09 04:16:37.578379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.096  qpair failed and we were unable to recover it.
00:26:09.096  [2024-12-09 04:16:37.578525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.096  [2024-12-09 04:16:37.578551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.097  qpair failed and we were unable to recover it.
00:26:09.097  [2024-12-09 04:16:37.578632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.097  [2024-12-09 04:16:37.578657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.097  qpair failed and we were unable to recover it.
00:26:09.097  [2024-12-09 04:16:37.578739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.097  [2024-12-09 04:16:37.578764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.097  qpair failed and we were unable to recover it.
00:26:09.097  [2024-12-09 04:16:37.578845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.097  [2024-12-09 04:16:37.578871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.097  qpair failed and we were unable to recover it.
00:26:09.097  [2024-12-09 04:16:37.578950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.097  [2024-12-09 04:16:37.578976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.097  qpair failed and we were unable to recover it.
00:26:09.097  [2024-12-09 04:16:37.579062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.097  [2024-12-09 04:16:37.579088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.097  qpair failed and we were unable to recover it.
00:26:09.097  [2024-12-09 04:16:37.579197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.097  [2024-12-09 04:16:37.579222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.097  qpair failed and we were unable to recover it.
00:26:09.097  [2024-12-09 04:16:37.579321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.097  [2024-12-09 04:16:37.579349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.097  qpair failed and we were unable to recover it.
00:26:09.097  [2024-12-09 04:16:37.579489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.097  [2024-12-09 04:16:37.579515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.097  qpair failed and we were unable to recover it.
00:26:09.097  [2024-12-09 04:16:37.579622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.097  [2024-12-09 04:16:37.579647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.097  qpair failed and we were unable to recover it.
00:26:09.097  [2024-12-09 04:16:37.579791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.097  [2024-12-09 04:16:37.579816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.097  qpair failed and we were unable to recover it.
00:26:09.097  [2024-12-09 04:16:37.579891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.097  [2024-12-09 04:16:37.579916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.097  qpair failed and we were unable to recover it.
00:26:09.097  [2024-12-09 04:16:37.580019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.097  [2024-12-09 04:16:37.580045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.097  qpair failed and we were unable to recover it.
00:26:09.097  [2024-12-09 04:16:37.580132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.097  [2024-12-09 04:16:37.580163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.097  qpair failed and we were unable to recover it.
00:26:09.097  [2024-12-09 04:16:37.580324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.097  [2024-12-09 04:16:37.580364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.097  qpair failed and we were unable to recover it.
00:26:09.097  [2024-12-09 04:16:37.580481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.097  [2024-12-09 04:16:37.580509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.097  qpair failed and we were unable to recover it.
00:26:09.097  [2024-12-09 04:16:37.580626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.097  [2024-12-09 04:16:37.580653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.097  qpair failed and we were unable to recover it.
00:26:09.097  [2024-12-09 04:16:37.580738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.097  [2024-12-09 04:16:37.580765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.097  qpair failed and we were unable to recover it.
00:26:09.097  [2024-12-09 04:16:37.580903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.097  [2024-12-09 04:16:37.580930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.097  qpair failed and we were unable to recover it.
00:26:09.097  [2024-12-09 04:16:37.581067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.097  [2024-12-09 04:16:37.581093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.097  qpair failed and we were unable to recover it.
00:26:09.097  [2024-12-09 04:16:37.581174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.097  [2024-12-09 04:16:37.581201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.097  qpair failed and we were unable to recover it.
00:26:09.097  [2024-12-09 04:16:37.581288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.097  [2024-12-09 04:16:37.581315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.097  qpair failed and we were unable to recover it.
00:26:09.097  [2024-12-09 04:16:37.581396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.097  [2024-12-09 04:16:37.581422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.097  qpair failed and we were unable to recover it.
00:26:09.097  [2024-12-09 04:16:37.581510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.097  [2024-12-09 04:16:37.581536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.097  qpair failed and we were unable to recover it.
00:26:09.097  [2024-12-09 04:16:37.581680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.097  [2024-12-09 04:16:37.581706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.097  qpair failed and we were unable to recover it.
00:26:09.097  [2024-12-09 04:16:37.581829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.097  [2024-12-09 04:16:37.581856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.097  qpair failed and we were unable to recover it.
00:26:09.097  [2024-12-09 04:16:37.581878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:26:09.097  [2024-12-09 04:16:37.581945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.097  [2024-12-09 04:16:37.581975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.097  qpair failed and we were unable to recover it.
00:26:09.097  [2024-12-09 04:16:37.582089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.097  [2024-12-09 04:16:37.582115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.097  qpair failed and we were unable to recover it.
00:26:09.097  [2024-12-09 04:16:37.582230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.097  [2024-12-09 04:16:37.582256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.097  qpair failed and we were unable to recover it.
00:26:09.097  [2024-12-09 04:16:37.582344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.097  [2024-12-09 04:16:37.582371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.097  qpair failed and we were unable to recover it.
00:26:09.097  [2024-12-09 04:16:37.582459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.097  [2024-12-09 04:16:37.582484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.097  qpair failed and we were unable to recover it.
00:26:09.097  [2024-12-09 04:16:37.582568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.097  [2024-12-09 04:16:37.582593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.097  qpair failed and we were unable to recover it.
00:26:09.097  [2024-12-09 04:16:37.582674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.097  [2024-12-09 04:16:37.582701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.097  qpair failed and we were unable to recover it.
00:26:09.097  [2024-12-09 04:16:37.582795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.097  [2024-12-09 04:16:37.582821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.098  qpair failed and we were unable to recover it.
00:26:09.100  [... preceding three-line failure record (posix_sock_create connect() errno = 111 → nvme_tcp_qpair_connect_sock error → "qpair failed and we were unable to recover it.") repeated continuously from 04:16:37.582959 through 04:16:37.596685 for tqpairs 0x7fe5c0000b90, 0x7fe5b4000b90, and 0x1818fa0, all targeting addr=10.0.0.2, port=4420 ...]
00:26:09.100  qpair failed and we were unable to recover it.
00:26:09.100  [2024-12-09 04:16:37.596807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.100  [2024-12-09 04:16:37.596832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.100  qpair failed and we were unable to recover it.
00:26:09.100  [2024-12-09 04:16:37.596920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.100  [2024-12-09 04:16:37.596947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.100  qpair failed and we were unable to recover it.
00:26:09.100  [2024-12-09 04:16:37.597024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.100  [2024-12-09 04:16:37.597050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.100  qpair failed and we were unable to recover it.
00:26:09.100  [2024-12-09 04:16:37.597154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.100  [2024-12-09 04:16:37.597179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.100  qpair failed and we were unable to recover it.
00:26:09.100  [2024-12-09 04:16:37.597300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.100  [2024-12-09 04:16:37.597327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.100  qpair failed and we were unable to recover it.
00:26:09.100  [2024-12-09 04:16:37.597409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.100  [2024-12-09 04:16:37.597435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.100  qpair failed and we were unable to recover it.
00:26:09.100  [2024-12-09 04:16:37.597516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.100  [2024-12-09 04:16:37.597542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.100  qpair failed and we were unable to recover it.
00:26:09.100  [2024-12-09 04:16:37.597649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.100  [2024-12-09 04:16:37.597675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.100  qpair failed and we were unable to recover it.
00:26:09.100  [2024-12-09 04:16:37.597763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.100  [2024-12-09 04:16:37.597788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.100  qpair failed and we were unable to recover it.
00:26:09.100  [2024-12-09 04:16:37.597901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.100  [2024-12-09 04:16:37.597935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.100  qpair failed and we were unable to recover it.
00:26:09.101  [2024-12-09 04:16:37.598046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.101  [2024-12-09 04:16:37.598072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.101  qpair failed and we were unable to recover it.
00:26:09.101  [2024-12-09 04:16:37.598154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.101  [2024-12-09 04:16:37.598180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.101  qpair failed and we were unable to recover it.
00:26:09.101  [2024-12-09 04:16:37.598254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.101  [2024-12-09 04:16:37.598285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.101  qpair failed and we were unable to recover it.
00:26:09.101  [2024-12-09 04:16:37.598374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.101  [2024-12-09 04:16:37.598400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.101  qpair failed and we were unable to recover it.
00:26:09.101  [2024-12-09 04:16:37.598510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.101  [2024-12-09 04:16:37.598536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.101  qpair failed and we were unable to recover it.
00:26:09.101  [2024-12-09 04:16:37.598620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.101  [2024-12-09 04:16:37.598646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.101  qpair failed and we were unable to recover it.
00:26:09.101  [2024-12-09 04:16:37.598788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.101  [2024-12-09 04:16:37.598816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.101  qpair failed and we were unable to recover it.
00:26:09.101  [2024-12-09 04:16:37.598927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.101  [2024-12-09 04:16:37.598953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.101  qpair failed and we were unable to recover it.
00:26:09.101  [2024-12-09 04:16:37.599060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.101  [2024-12-09 04:16:37.599085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.101  qpair failed and we were unable to recover it.
00:26:09.101  [2024-12-09 04:16:37.599169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.101  [2024-12-09 04:16:37.599196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.101  qpair failed and we were unable to recover it.
00:26:09.101  [2024-12-09 04:16:37.599303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.101  [2024-12-09 04:16:37.599342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.101  qpair failed and we were unable to recover it.
00:26:09.101  [2024-12-09 04:16:37.599464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.101  [2024-12-09 04:16:37.599492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.101  qpair failed and we were unable to recover it.
00:26:09.101  [2024-12-09 04:16:37.599583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.101  [2024-12-09 04:16:37.599609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.101  qpair failed and we were unable to recover it.
00:26:09.101  [2024-12-09 04:16:37.599696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.101  [2024-12-09 04:16:37.599723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.101  qpair failed and we were unable to recover it.
00:26:09.101  [2024-12-09 04:16:37.599802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.101  [2024-12-09 04:16:37.599829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.101  qpair failed and we were unable to recover it.
00:26:09.101  [2024-12-09 04:16:37.599969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.101  [2024-12-09 04:16:37.599995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.101  qpair failed and we were unable to recover it.
00:26:09.101  [2024-12-09 04:16:37.600072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.101  [2024-12-09 04:16:37.600099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.101  qpair failed and we were unable to recover it.
00:26:09.101  [2024-12-09 04:16:37.600209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.101  [2024-12-09 04:16:37.600235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.101  qpair failed and we were unable to recover it.
00:26:09.101  [2024-12-09 04:16:37.600357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.101  [2024-12-09 04:16:37.600386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.101  qpair failed and we were unable to recover it.
00:26:09.101  [2024-12-09 04:16:37.600500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.101  [2024-12-09 04:16:37.600527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.101  qpair failed and we were unable to recover it.
00:26:09.101  [2024-12-09 04:16:37.600670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.101  [2024-12-09 04:16:37.600696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.101  qpair failed and we were unable to recover it.
00:26:09.101  [2024-12-09 04:16:37.600784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.101  [2024-12-09 04:16:37.600809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.101  qpair failed and we were unable to recover it.
00:26:09.101  [2024-12-09 04:16:37.600922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.101  [2024-12-09 04:16:37.600947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.101  qpair failed and we were unable to recover it.
00:26:09.101  [2024-12-09 04:16:37.601062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.101  [2024-12-09 04:16:37.601088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.101  qpair failed and we were unable to recover it.
00:26:09.101  [2024-12-09 04:16:37.601227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.101  [2024-12-09 04:16:37.601253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.101  qpair failed and we were unable to recover it.
00:26:09.101  [2024-12-09 04:16:37.601362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.101  [2024-12-09 04:16:37.601401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.101  qpair failed and we were unable to recover it.
00:26:09.101  [2024-12-09 04:16:37.601520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.101  [2024-12-09 04:16:37.601554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.101  qpair failed and we were unable to recover it.
00:26:09.101  [2024-12-09 04:16:37.601676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.101  [2024-12-09 04:16:37.601705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.101  qpair failed and we were unable to recover it.
00:26:09.101  [2024-12-09 04:16:37.601793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.101  [2024-12-09 04:16:37.601820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.101  qpair failed and we were unable to recover it.
00:26:09.101  [2024-12-09 04:16:37.601963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.101  [2024-12-09 04:16:37.601990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.101  qpair failed and we were unable to recover it.
00:26:09.101  [2024-12-09 04:16:37.602071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.101  [2024-12-09 04:16:37.602098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.101  qpair failed and we were unable to recover it.
00:26:09.101  [2024-12-09 04:16:37.602181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.101  [2024-12-09 04:16:37.602208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.101  qpair failed and we were unable to recover it.
00:26:09.101  [2024-12-09 04:16:37.602303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.101  [2024-12-09 04:16:37.602332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.101  qpair failed and we were unable to recover it.
00:26:09.101  [2024-12-09 04:16:37.602424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.101  [2024-12-09 04:16:37.602451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.101  qpair failed and we were unable to recover it.
00:26:09.101  [2024-12-09 04:16:37.602536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.101  [2024-12-09 04:16:37.602563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.101  qpair failed and we were unable to recover it.
00:26:09.101  [2024-12-09 04:16:37.602708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.101  [2024-12-09 04:16:37.602735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.101  qpair failed and we were unable to recover it.
00:26:09.101  [2024-12-09 04:16:37.602875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.101  [2024-12-09 04:16:37.602902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.101  qpair failed and we were unable to recover it.
00:26:09.101  [2024-12-09 04:16:37.602980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.101  [2024-12-09 04:16:37.603007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.101  qpair failed and we were unable to recover it.
00:26:09.102  [2024-12-09 04:16:37.603118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.102  [2024-12-09 04:16:37.603144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.102  qpair failed and we were unable to recover it.
00:26:09.102  [2024-12-09 04:16:37.603255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.102  [2024-12-09 04:16:37.603291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.102  qpair failed and we were unable to recover it.
00:26:09.102  [2024-12-09 04:16:37.603420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.102  [2024-12-09 04:16:37.603446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.102  qpair failed and we were unable to recover it.
00:26:09.102  [2024-12-09 04:16:37.603561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.102  [2024-12-09 04:16:37.603588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.102  qpair failed and we were unable to recover it.
00:26:09.102  [2024-12-09 04:16:37.603728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.102  [2024-12-09 04:16:37.603754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.102  qpair failed and we were unable to recover it.
00:26:09.102  [2024-12-09 04:16:37.603845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.102  [2024-12-09 04:16:37.603873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.102  qpair failed and we were unable to recover it.
00:26:09.102  [2024-12-09 04:16:37.603984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.102  [2024-12-09 04:16:37.604010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.102  qpair failed and we were unable to recover it.
00:26:09.102  [2024-12-09 04:16:37.604117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.102  [2024-12-09 04:16:37.604143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.102  qpair failed and we were unable to recover it.
00:26:09.102  [2024-12-09 04:16:37.604224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.102  [2024-12-09 04:16:37.604250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.102  qpair failed and we were unable to recover it.
00:26:09.102  [2024-12-09 04:16:37.604344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.102  [2024-12-09 04:16:37.604371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.102  qpair failed and we were unable to recover it.
00:26:09.102  [2024-12-09 04:16:37.604481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.102  [2024-12-09 04:16:37.604507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.102  qpair failed and we were unable to recover it.
00:26:09.102  [2024-12-09 04:16:37.604590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.102  [2024-12-09 04:16:37.604619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.102  qpair failed and we were unable to recover it.
00:26:09.102  [2024-12-09 04:16:37.604730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.102  [2024-12-09 04:16:37.604758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.102  qpair failed and we were unable to recover it.
00:26:09.102  [2024-12-09 04:16:37.604846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.102  [2024-12-09 04:16:37.604872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.102  qpair failed and we were unable to recover it.
00:26:09.102  [2024-12-09 04:16:37.604992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.102  [2024-12-09 04:16:37.605018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.102  qpair failed and we were unable to recover it.
00:26:09.102  [2024-12-09 04:16:37.605107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.102  [2024-12-09 04:16:37.605133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.102  qpair failed and we were unable to recover it.
00:26:09.102  [2024-12-09 04:16:37.605238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.102  [2024-12-09 04:16:37.605264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.102  qpair failed and we were unable to recover it.
00:26:09.102  [2024-12-09 04:16:37.605351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.102  [2024-12-09 04:16:37.605377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.102  qpair failed and we were unable to recover it.
00:26:09.102  [2024-12-09 04:16:37.605487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.102  [2024-12-09 04:16:37.605514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.102  qpair failed and we were unable to recover it.
00:26:09.102  [2024-12-09 04:16:37.605648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.102  [2024-12-09 04:16:37.605673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.102  qpair failed and we were unable to recover it.
00:26:09.102  [2024-12-09 04:16:37.605808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.102  [2024-12-09 04:16:37.605834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.102  qpair failed and we were unable to recover it.
00:26:09.102  [2024-12-09 04:16:37.605918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.102  [2024-12-09 04:16:37.605945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.102  qpair failed and we were unable to recover it.
00:26:09.102  [2024-12-09 04:16:37.606032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.102  [2024-12-09 04:16:37.606058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.102  qpair failed and we were unable to recover it.
00:26:09.102  [2024-12-09 04:16:37.606199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.102  [2024-12-09 04:16:37.606227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.102  qpair failed and we were unable to recover it.
00:26:09.102  [2024-12-09 04:16:37.606320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.102  [2024-12-09 04:16:37.606346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.102  qpair failed and we were unable to recover it.
00:26:09.102  [2024-12-09 04:16:37.606490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.102  [2024-12-09 04:16:37.606515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.102  qpair failed and we were unable to recover it.
00:26:09.102  [2024-12-09 04:16:37.606601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.102  [2024-12-09 04:16:37.606627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.102  qpair failed and we were unable to recover it.
00:26:09.102  [2024-12-09 04:16:37.606748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.102  [2024-12-09 04:16:37.606774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.102  qpair failed and we were unable to recover it.
00:26:09.102  [2024-12-09 04:16:37.606882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.102  [2024-12-09 04:16:37.606918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.102  qpair failed and we were unable to recover it.
00:26:09.102  [2024-12-09 04:16:37.607036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.102  [2024-12-09 04:16:37.607062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.102  qpair failed and we were unable to recover it.
00:26:09.102  [2024-12-09 04:16:37.607144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.102  [2024-12-09 04:16:37.607170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.102  qpair failed and we were unable to recover it.
00:26:09.102  [2024-12-09 04:16:37.607252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.102  [2024-12-09 04:16:37.607293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.102  qpair failed and we were unable to recover it.
00:26:09.102  [2024-12-09 04:16:37.607379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.102  [2024-12-09 04:16:37.607405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.102  qpair failed and we were unable to recover it.
00:26:09.102 - 00:26:09.390  [... the error pair above (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error) repeats continuously from 04:16:37.607519 through 04:16:37.621151, cycling across tqpairs 0x7fe5c0000b90, 0x7fe5b4000b90, and 0x1818fa0, all targeting addr=10.0.0.2, port=4420; every occurrence is followed by "qpair failed and we were unable to recover it." ...]
00:26:09.390  qpair failed and we were unable to recover it.
00:26:09.390  [2024-12-09 04:16:37.621248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.390  [2024-12-09 04:16:37.621286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.390  qpair failed and we were unable to recover it.
00:26:09.390  [2024-12-09 04:16:37.621384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.390  [2024-12-09 04:16:37.621411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.390  qpair failed and we were unable to recover it.
00:26:09.390  [2024-12-09 04:16:37.621490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.390  [2024-12-09 04:16:37.621516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.390  qpair failed and we were unable to recover it.
00:26:09.390  [2024-12-09 04:16:37.621628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.390  [2024-12-09 04:16:37.621654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.390  qpair failed and we were unable to recover it.
00:26:09.390  [2024-12-09 04:16:37.621740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.390  [2024-12-09 04:16:37.621766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.390  qpair failed and we were unable to recover it.
00:26:09.390  [2024-12-09 04:16:37.621855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.390  [2024-12-09 04:16:37.621881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.390  qpair failed and we were unable to recover it.
00:26:09.390  [2024-12-09 04:16:37.621996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.390  [2024-12-09 04:16:37.622024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.390  qpair failed and we were unable to recover it.
00:26:09.390  [2024-12-09 04:16:37.622168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.390  [2024-12-09 04:16:37.622196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.390  qpair failed and we were unable to recover it.
00:26:09.390  [2024-12-09 04:16:37.622285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.390  [2024-12-09 04:16:37.622311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.390  qpair failed and we were unable to recover it.
00:26:09.390  [2024-12-09 04:16:37.622396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.390  [2024-12-09 04:16:37.622423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.390  qpair failed and we were unable to recover it.
00:26:09.390  [2024-12-09 04:16:37.622534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.390  [2024-12-09 04:16:37.622559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.390  qpair failed and we were unable to recover it.
00:26:09.390  [2024-12-09 04:16:37.622634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.390  [2024-12-09 04:16:37.622660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.390  qpair failed and we were unable to recover it.
00:26:09.390  [2024-12-09 04:16:37.622735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.390  [2024-12-09 04:16:37.622763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.390  qpair failed and we were unable to recover it.
00:26:09.390  [2024-12-09 04:16:37.622885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.390  [2024-12-09 04:16:37.622912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.390  qpair failed and we were unable to recover it.
00:26:09.390  [2024-12-09 04:16:37.623000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.390  [2024-12-09 04:16:37.623026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.390  qpair failed and we were unable to recover it.
00:26:09.390  [2024-12-09 04:16:37.623113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.390  [2024-12-09 04:16:37.623139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.390  qpair failed and we were unable to recover it.
00:26:09.390  [2024-12-09 04:16:37.623258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.390  [2024-12-09 04:16:37.623300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.391  qpair failed and we were unable to recover it.
00:26:09.391  [2024-12-09 04:16:37.623385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.391  [2024-12-09 04:16:37.623411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.391  qpair failed and we were unable to recover it.
00:26:09.391  [2024-12-09 04:16:37.623491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.391  [2024-12-09 04:16:37.623517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.391  qpair failed and we were unable to recover it.
00:26:09.391  [2024-12-09 04:16:37.623603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.391  [2024-12-09 04:16:37.623629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.391  qpair failed and we were unable to recover it.
00:26:09.391  [2024-12-09 04:16:37.623704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.391  [2024-12-09 04:16:37.623729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.391  qpair failed and we were unable to recover it.
00:26:09.391  [2024-12-09 04:16:37.623841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.391  [2024-12-09 04:16:37.623867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.391  qpair failed and we were unable to recover it.
00:26:09.391  [2024-12-09 04:16:37.623981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.391  [2024-12-09 04:16:37.624008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.391  qpair failed and we were unable to recover it.
00:26:09.391  [2024-12-09 04:16:37.624096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.391  [2024-12-09 04:16:37.624121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.391  qpair failed and we were unable to recover it.
00:26:09.391  [2024-12-09 04:16:37.624198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.391  [2024-12-09 04:16:37.624225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.391  qpair failed and we were unable to recover it.
00:26:09.391  [2024-12-09 04:16:37.624331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.391  [2024-12-09 04:16:37.624357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.391  qpair failed and we were unable to recover it.
00:26:09.391  [2024-12-09 04:16:37.624495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.391  [2024-12-09 04:16:37.624525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.391  qpair failed and we were unable to recover it.
00:26:09.391  [2024-12-09 04:16:37.624636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.391  [2024-12-09 04:16:37.624662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.391  qpair failed and we were unable to recover it.
00:26:09.391  [2024-12-09 04:16:37.624771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.391  [2024-12-09 04:16:37.624797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.391  qpair failed and we were unable to recover it.
00:26:09.391  [2024-12-09 04:16:37.624905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.391  [2024-12-09 04:16:37.624931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.391  qpair failed and we were unable to recover it.
00:26:09.391  [2024-12-09 04:16:37.625024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.391  [2024-12-09 04:16:37.625063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.391  qpair failed and we were unable to recover it.
00:26:09.391  [2024-12-09 04:16:37.625154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.391  [2024-12-09 04:16:37.625180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.391  qpair failed and we were unable to recover it.
00:26:09.391  [2024-12-09 04:16:37.625290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.391  [2024-12-09 04:16:37.625317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.391  qpair failed and we were unable to recover it.
00:26:09.391  [2024-12-09 04:16:37.625410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.391  [2024-12-09 04:16:37.625436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.391  qpair failed and we were unable to recover it.
00:26:09.391  [2024-12-09 04:16:37.625521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.391  [2024-12-09 04:16:37.625547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.391  qpair failed and we were unable to recover it.
00:26:09.391  [2024-12-09 04:16:37.625628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.391  [2024-12-09 04:16:37.625653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.391  qpair failed and we were unable to recover it.
00:26:09.391  [2024-12-09 04:16:37.625727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.391  [2024-12-09 04:16:37.625752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.391  qpair failed and we were unable to recover it.
00:26:09.391  [2024-12-09 04:16:37.625866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.391  [2024-12-09 04:16:37.625891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.391  qpair failed and we were unable to recover it.
00:26:09.391  [2024-12-09 04:16:37.625982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.391  [2024-12-09 04:16:37.626007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.391  qpair failed and we were unable to recover it.
00:26:09.391  [2024-12-09 04:16:37.626122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.391  [2024-12-09 04:16:37.626150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.391  qpair failed and we were unable to recover it.
00:26:09.391  [2024-12-09 04:16:37.626234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.391  [2024-12-09 04:16:37.626260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.391  qpair failed and we were unable to recover it.
00:26:09.391  [2024-12-09 04:16:37.626379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.391  [2024-12-09 04:16:37.626405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.391  qpair failed and we were unable to recover it.
00:26:09.391  [2024-12-09 04:16:37.626489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.391  [2024-12-09 04:16:37.626515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.391  qpair failed and we were unable to recover it.
00:26:09.391  [2024-12-09 04:16:37.626599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.391  [2024-12-09 04:16:37.626627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.391  qpair failed and we were unable to recover it.
00:26:09.391  [2024-12-09 04:16:37.626740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.391  [2024-12-09 04:16:37.626766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.391  qpair failed and we were unable to recover it.
00:26:09.391  [2024-12-09 04:16:37.626855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.391  [2024-12-09 04:16:37.626883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.391  qpair failed and we were unable to recover it.
00:26:09.391  [2024-12-09 04:16:37.626971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.391  [2024-12-09 04:16:37.626996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.391  qpair failed and we were unable to recover it.
00:26:09.391  [2024-12-09 04:16:37.627082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.391  [2024-12-09 04:16:37.627108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.391  qpair failed and we were unable to recover it.
00:26:09.391  [2024-12-09 04:16:37.627248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.391  [2024-12-09 04:16:37.627279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.391  qpair failed and we were unable to recover it.
00:26:09.391  [2024-12-09 04:16:37.627382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.391  [2024-12-09 04:16:37.627408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.391  qpair failed and we were unable to recover it.
00:26:09.391  [2024-12-09 04:16:37.627488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.391  [2024-12-09 04:16:37.627513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.391  qpair failed and we were unable to recover it.
00:26:09.391  [2024-12-09 04:16:37.627635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.391  [2024-12-09 04:16:37.627661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.391  qpair failed and we were unable to recover it.
00:26:09.391  [2024-12-09 04:16:37.627744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.391  [2024-12-09 04:16:37.627770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.391  qpair failed and we were unable to recover it.
00:26:09.391  [2024-12-09 04:16:37.627877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.391  [2024-12-09 04:16:37.627907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.391  qpair failed and we were unable to recover it.
00:26:09.391  [2024-12-09 04:16:37.628020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.392  [2024-12-09 04:16:37.628045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.392  qpair failed and we were unable to recover it.
00:26:09.392  [2024-12-09 04:16:37.628148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.392  [2024-12-09 04:16:37.628188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.392  qpair failed and we were unable to recover it.
00:26:09.392  [2024-12-09 04:16:37.628314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.392  [2024-12-09 04:16:37.628342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.392  qpair failed and we were unable to recover it.
00:26:09.392  [2024-12-09 04:16:37.628439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.392  [2024-12-09 04:16:37.628467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.392  qpair failed and we were unable to recover it.
00:26:09.392  [2024-12-09 04:16:37.628550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.392  [2024-12-09 04:16:37.628577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.392  qpair failed and we were unable to recover it.
00:26:09.392  [2024-12-09 04:16:37.628666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.392  [2024-12-09 04:16:37.628692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.392  qpair failed and we were unable to recover it.
00:26:09.392  [2024-12-09 04:16:37.628812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.392  [2024-12-09 04:16:37.628839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.392  qpair failed and we were unable to recover it.
00:26:09.392  [2024-12-09 04:16:37.628920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.392  [2024-12-09 04:16:37.628955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.392  qpair failed and we were unable to recover it.
00:26:09.392  [2024-12-09 04:16:37.629040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.392  [2024-12-09 04:16:37.629065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.392  qpair failed and we were unable to recover it.
00:26:09.392  [2024-12-09 04:16:37.629172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.392  [2024-12-09 04:16:37.629197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.392  qpair failed and we were unable to recover it.
00:26:09.392  [2024-12-09 04:16:37.629291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.392  [2024-12-09 04:16:37.629319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.392  qpair failed and we were unable to recover it.
00:26:09.392  [2024-12-09 04:16:37.629407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.392  [2024-12-09 04:16:37.629433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.392  qpair failed and we were unable to recover it.
00:26:09.392  [2024-12-09 04:16:37.629542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.392  [2024-12-09 04:16:37.629567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.392  qpair failed and we were unable to recover it.
00:26:09.392  [2024-12-09 04:16:37.629685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.392  [2024-12-09 04:16:37.629711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.392  qpair failed and we were unable to recover it.
00:26:09.392  [2024-12-09 04:16:37.629794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.392  [2024-12-09 04:16:37.629820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.392  qpair failed and we were unable to recover it.
00:26:09.392  [2024-12-09 04:16:37.629959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.392  [2024-12-09 04:16:37.629985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.392  qpair failed and we were unable to recover it.
00:26:09.392  [2024-12-09 04:16:37.630129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.392  [2024-12-09 04:16:37.630156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.392  qpair failed and we were unable to recover it.
00:26:09.392  [2024-12-09 04:16:37.630234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.392  [2024-12-09 04:16:37.630259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.392  qpair failed and we were unable to recover it.
00:26:09.392  [2024-12-09 04:16:37.630362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.392  [2024-12-09 04:16:37.630388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.392  qpair failed and we were unable to recover it.
00:26:09.392  [2024-12-09 04:16:37.630500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.392  [2024-12-09 04:16:37.630527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.392  qpair failed and we were unable to recover it.
00:26:09.392  [2024-12-09 04:16:37.630638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.392  [2024-12-09 04:16:37.630663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.392  qpair failed and we were unable to recover it.
00:26:09.392  [2024-12-09 04:16:37.630766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.392  [2024-12-09 04:16:37.630792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.392  qpair failed and we were unable to recover it.
00:26:09.392  [2024-12-09 04:16:37.630931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.392  [2024-12-09 04:16:37.630958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.392  qpair failed and we were unable to recover it.
00:26:09.392  [2024-12-09 04:16:37.631045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.392  [2024-12-09 04:16:37.631071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.392  qpair failed and we were unable to recover it.
00:26:09.392  [2024-12-09 04:16:37.631150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.392  [2024-12-09 04:16:37.631174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.392  qpair failed and we were unable to recover it.
00:26:09.392  [2024-12-09 04:16:37.631265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.392  [2024-12-09 04:16:37.631304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.392  qpair failed and we were unable to recover it.
00:26:09.392  [2024-12-09 04:16:37.631422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.392  [2024-12-09 04:16:37.631448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.392  qpair failed and we were unable to recover it.
00:26:09.392  [2024-12-09 04:16:37.631536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.392  [2024-12-09 04:16:37.631561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.392  qpair failed and we were unable to recover it.
00:26:09.392  [2024-12-09 04:16:37.631636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.392  [2024-12-09 04:16:37.631661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.392  qpair failed and we were unable to recover it.
00:26:09.392  [2024-12-09 04:16:37.631736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.392  [2024-12-09 04:16:37.631761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.392  qpair failed and we were unable to recover it.
00:26:09.392  [2024-12-09 04:16:37.631896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.392  [2024-12-09 04:16:37.631935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.392  qpair failed and we were unable to recover it.
00:26:09.392  [2024-12-09 04:16:37.632026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.392  [2024-12-09 04:16:37.632055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.392  qpair failed and we were unable to recover it.
00:26:09.392  [2024-12-09 04:16:37.632168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.392  [2024-12-09 04:16:37.632196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.392  qpair failed and we were unable to recover it.
00:26:09.392  [2024-12-09 04:16:37.632334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.392  [2024-12-09 04:16:37.632361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.392  qpair failed and we were unable to recover it.
00:26:09.392  [2024-12-09 04:16:37.632443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.392  [2024-12-09 04:16:37.632470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.392  qpair failed and we were unable to recover it.
00:26:09.392  [2024-12-09 04:16:37.632552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.392  [2024-12-09 04:16:37.632578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.392  qpair failed and we were unable to recover it.
00:26:09.392  [2024-12-09 04:16:37.632684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.392  [2024-12-09 04:16:37.632710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.392  qpair failed and we were unable to recover it.
00:26:09.392  [2024-12-09 04:16:37.632823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.392  [2024-12-09 04:16:37.632850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.392  qpair failed and we were unable to recover it.
00:26:09.392  [2024-12-09 04:16:37.632990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.393  [2024-12-09 04:16:37.633016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.393  qpair failed and we were unable to recover it.
00:26:09.393  [2024-12-09 04:16:37.633105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.393  [2024-12-09 04:16:37.633133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.393  qpair failed and we were unable to recover it.
00:26:09.393  [2024-12-09 04:16:37.633259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.393  [2024-12-09 04:16:37.633294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.393  qpair failed and we were unable to recover it.
00:26:09.393  [2024-12-09 04:16:37.633387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.393  [2024-12-09 04:16:37.633413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.393  qpair failed and we were unable to recover it.
00:26:09.393  [2024-12-09 04:16:37.633528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.393  [2024-12-09 04:16:37.633554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.393  qpair failed and we were unable to recover it.
00:26:09.393  [2024-12-09 04:16:37.633660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.393  [2024-12-09 04:16:37.633685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.393  qpair failed and we were unable to recover it.
00:26:09.393  [2024-12-09 04:16:37.633821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.393  [2024-12-09 04:16:37.633847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.393  qpair failed and we were unable to recover it.
00:26:09.393  [2024-12-09 04:16:37.633928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.393  [2024-12-09 04:16:37.633955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.393  qpair failed and we were unable to recover it.
00:26:09.393  [2024-12-09 04:16:37.634070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.393  [2024-12-09 04:16:37.634097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.393  qpair failed and we were unable to recover it.
00:26:09.393  [2024-12-09 04:16:37.634239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.393  [2024-12-09 04:16:37.634264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.393  qpair failed and we were unable to recover it.
00:26:09.393  [2024-12-09 04:16:37.634384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.393  [2024-12-09 04:16:37.634409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.393  qpair failed and we were unable to recover it.
00:26:09.393  [2024-12-09 04:16:37.634517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.393  [2024-12-09 04:16:37.634543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.393  qpair failed and we were unable to recover it.
00:26:09.393  [2024-12-09 04:16:37.634657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.393  [2024-12-09 04:16:37.634682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.393  qpair failed and we were unable to recover it.
00:26:09.393  [2024-12-09 04:16:37.634771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.393  [2024-12-09 04:16:37.634798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.393  qpair failed and we were unable to recover it.
00:26:09.393  [2024-12-09 04:16:37.634875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.393  [2024-12-09 04:16:37.634900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.393  qpair failed and we were unable to recover it.
00:26:09.393  [2024-12-09 04:16:37.635018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.393  [2024-12-09 04:16:37.635043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.393  qpair failed and we were unable to recover it.
00:26:09.393  [2024-12-09 04:16:37.635152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.393  [2024-12-09 04:16:37.635179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.393  qpair failed and we were unable to recover it.
00:26:09.393  [2024-12-09 04:16:37.635295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.393  [2024-12-09 04:16:37.635324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.393  qpair failed and we were unable to recover it.
00:26:09.393  [2024-12-09 04:16:37.635440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.393  [2024-12-09 04:16:37.635466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.393  qpair failed and we were unable to recover it.
00:26:09.393  [2024-12-09 04:16:37.635584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.393  [2024-12-09 04:16:37.635610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.393  qpair failed and we were unable to recover it.
00:26:09.393  [2024-12-09 04:16:37.635719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.393  [2024-12-09 04:16:37.635745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.393  qpair failed and we were unable to recover it.
00:26:09.393  [2024-12-09 04:16:37.635851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.393  [2024-12-09 04:16:37.635877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.393  qpair failed and we were unable to recover it.
00:26:09.393  [2024-12-09 04:16:37.635957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.393  [2024-12-09 04:16:37.635983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.393  qpair failed and we were unable to recover it.
00:26:09.393  [2024-12-09 04:16:37.636069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.393  [2024-12-09 04:16:37.636095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.393  qpair failed and we were unable to recover it.
00:26:09.393  [2024-12-09 04:16:37.636235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.393  [2024-12-09 04:16:37.636262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.393  qpair failed and we were unable to recover it.
00:26:09.393  [2024-12-09 04:16:37.636414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.393  [2024-12-09 04:16:37.636452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.393  qpair failed and we were unable to recover it.
00:26:09.393  [2024-12-09 04:16:37.636567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.393  [2024-12-09 04:16:37.636591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.393  qpair failed and we were unable to recover it.
00:26:09.393  [2024-12-09 04:16:37.636697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.393  [2024-12-09 04:16:37.636722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.393  qpair failed and we were unable to recover it.
00:26:09.393  [2024-12-09 04:16:37.636829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.393  [2024-12-09 04:16:37.636860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.393  qpair failed and we were unable to recover it.
00:26:09.393  [2024-12-09 04:16:37.636949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.393  [2024-12-09 04:16:37.636977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.393  qpair failed and we were unable to recover it.
00:26:09.393  [2024-12-09 04:16:37.637087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.393  [2024-12-09 04:16:37.637114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.393  qpair failed and we were unable to recover it.
00:26:09.393  [2024-12-09 04:16:37.637252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.393  [2024-12-09 04:16:37.637285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.393  qpair failed and we were unable to recover it.
00:26:09.393  [2024-12-09 04:16:37.637375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.393  [2024-12-09 04:16:37.637402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.393  qpair failed and we were unable to recover it.
00:26:09.393  [2024-12-09 04:16:37.637488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.393  [2024-12-09 04:16:37.637515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.393  qpair failed and we were unable to recover it.
00:26:09.393  [2024-12-09 04:16:37.637625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.393  [2024-12-09 04:16:37.637651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.393  qpair failed and we were unable to recover it.
00:26:09.393  [2024-12-09 04:16:37.637785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.393  [2024-12-09 04:16:37.637811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.393  qpair failed and we were unable to recover it.
00:26:09.393  [2024-12-09 04:16:37.637914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.393  [2024-12-09 04:16:37.637940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.393  qpair failed and we were unable to recover it.
00:26:09.393  [2024-12-09 04:16:37.638024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.393  [2024-12-09 04:16:37.638050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.393  qpair failed and we were unable to recover it.
00:26:09.393  [2024-12-09 04:16:37.638164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.394  [2024-12-09 04:16:37.638192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.394  qpair failed and we were unable to recover it.
00:26:09.394  [2024-12-09 04:16:37.638301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.394  [2024-12-09 04:16:37.638328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.394  qpair failed and we were unable to recover it.
00:26:09.394  [2024-12-09 04:16:37.638414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.394  [2024-12-09 04:16:37.638440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.394  qpair failed and we were unable to recover it.
00:26:09.394  [2024-12-09 04:16:37.638549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.394  [2024-12-09 04:16:37.638576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.394  qpair failed and we were unable to recover it.
00:26:09.394  [2024-12-09 04:16:37.638671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.394  [2024-12-09 04:16:37.638697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.394  qpair failed and we were unable to recover it.
00:26:09.394  [2024-12-09 04:16:37.638782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.394  [2024-12-09 04:16:37.638809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.394  qpair failed and we were unable to recover it.
00:26:09.394  [2024-12-09 04:16:37.638947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.394  [2024-12-09 04:16:37.638975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.394  qpair failed and we were unable to recover it.
00:26:09.394  [2024-12-09 04:16:37.639059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.394  [2024-12-09 04:16:37.639085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.394  qpair failed and we were unable to recover it.
00:26:09.394  [2024-12-09 04:16:37.639168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.394  [2024-12-09 04:16:37.639194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.394  qpair failed and we were unable to recover it.
00:26:09.394  [2024-12-09 04:16:37.639303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.394  [2024-12-09 04:16:37.639331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.394  qpair failed and we were unable to recover it.
00:26:09.394  [2024-12-09 04:16:37.639470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.394  [2024-12-09 04:16:37.639496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.394  qpair failed and we were unable to recover it.
00:26:09.394  [2024-12-09 04:16:37.639611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.394  [2024-12-09 04:16:37.639640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.394  qpair failed and we were unable to recover it.
00:26:09.394  [2024-12-09 04:16:37.639759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.394  [2024-12-09 04:16:37.639786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.394  qpair failed and we were unable to recover it.
00:26:09.394  [2024-12-09 04:16:37.639905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.394  [2024-12-09 04:16:37.639931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.394  qpair failed and we were unable to recover it.
00:26:09.394  [2024-12-09 04:16:37.640010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.394  [2024-12-09 04:16:37.640036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.394  qpair failed and we were unable to recover it.
00:26:09.394  [2024-12-09 04:16:37.640120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.394  [2024-12-09 04:16:37.640146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.394  qpair failed and we were unable to recover it.
00:26:09.394  [2024-12-09 04:16:37.640258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.394  [2024-12-09 04:16:37.640291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.394  qpair failed and we were unable to recover it.
00:26:09.394  [2024-12-09 04:16:37.640388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.394  [2024-12-09 04:16:37.640418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.394  qpair failed and we were unable to recover it.
00:26:09.394  [2024-12-09 04:16:37.640530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.394  [2024-12-09 04:16:37.640556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.394  qpair failed and we were unable to recover it.
00:26:09.394  [2024-12-09 04:16:37.640667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.394  [2024-12-09 04:16:37.640694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.394  qpair failed and we were unable to recover it.
00:26:09.394  [2024-12-09 04:16:37.640800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.394  [2024-12-09 04:16:37.640826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.394  qpair failed and we were unable to recover it.
00:26:09.394  [2024-12-09 04:16:37.640942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.394  [2024-12-09 04:16:37.640970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.394  qpair failed and we were unable to recover it.
00:26:09.394  [2024-12-09 04:16:37.641053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.394  [2024-12-09 04:16:37.641080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.394  qpair failed and we were unable to recover it.
00:26:09.394  [2024-12-09 04:16:37.641197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.394  [2024-12-09 04:16:37.641225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.394  qpair failed and we were unable to recover it.
00:26:09.394  [2024-12-09 04:16:37.641347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.394  [2024-12-09 04:16:37.641375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.394  qpair failed and we were unable to recover it.
00:26:09.394  [2024-12-09 04:16:37.641487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.394  [2024-12-09 04:16:37.641514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.394  qpair failed and we were unable to recover it.
00:26:09.394  [2024-12-09 04:16:37.641588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.394  [2024-12-09 04:16:37.641614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.394  qpair failed and we were unable to recover it.
00:26:09.394  [2024-12-09 04:16:37.641732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.394  [2024-12-09 04:16:37.641759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.394  qpair failed and we were unable to recover it.
00:26:09.394  [2024-12-09 04:16:37.641838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.394  [2024-12-09 04:16:37.641864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.394  qpair failed and we were unable to recover it.
00:26:09.394  [2024-12-09 04:16:37.641978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.394  [2024-12-09 04:16:37.642007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.394  qpair failed and we were unable to recover it.
00:26:09.394  [2024-12-09 04:16:37.642085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.394  [2024-12-09 04:16:37.642112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.394  qpair failed and we were unable to recover it.
00:26:09.394  [2024-12-09 04:16:37.642211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.394  [2024-12-09 04:16:37.642238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.394  qpair failed and we were unable to recover it.
00:26:09.394  [2024-12-09 04:16:37.642334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.394  [2024-12-09 04:16:37.642362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.394  qpair failed and we were unable to recover it.
00:26:09.394  [2024-12-09 04:16:37.642442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.394  [2024-12-09 04:16:37.642470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.394  qpair failed and we were unable to recover it.
00:26:09.394  [2024-12-09 04:16:37.642550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.394  [2024-12-09 04:16:37.642576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.394  qpair failed and we were unable to recover it.
00:26:09.394  [2024-12-09 04:16:37.642655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.394  [2024-12-09 04:16:37.642682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.394  qpair failed and we were unable to recover it.
00:26:09.394  [2024-12-09 04:16:37.642762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.395  [2024-12-09 04:16:37.642790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.395  qpair failed and we were unable to recover it.
00:26:09.395  [2024-12-09 04:16:37.642889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.395  [2024-12-09 04:16:37.642930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.395  qpair failed and we were unable to recover it.
00:26:09.395  [2024-12-09 04:16:37.643022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.395  [2024-12-09 04:16:37.643049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.395  qpair failed and we were unable to recover it.
00:26:09.395  [2024-12-09 04:16:37.643128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.395  [2024-12-09 04:16:37.643154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.395  qpair failed and we were unable to recover it.
00:26:09.395  [2024-12-09 04:16:37.643234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.395  [2024-12-09 04:16:37.643260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.395  qpair failed and we were unable to recover it.
00:26:09.395  [2024-12-09 04:16:37.643384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.395  [2024-12-09 04:16:37.643410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.395  [2024-12-09 04:16:37.643408] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:09.395  qpair failed and we were unable to recover it.
00:26:09.395  [2024-12-09 04:16:37.643441] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:09.395  [2024-12-09 04:16:37.643456] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:09.395  [2024-12-09 04:16:37.643469] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:09.395  [2024-12-09 04:16:37.643479] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:09.395  [2024-12-09 04:16:37.643487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.395  [2024-12-09 04:16:37.643514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.395  qpair failed and we were unable to recover it.
00:26:09.395  [2024-12-09 04:16:37.643621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.395  [2024-12-09 04:16:37.643646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.395  qpair failed and we were unable to recover it.
00:26:09.395  [2024-12-09 04:16:37.643758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.395  [2024-12-09 04:16:37.643784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.395  qpair failed and we were unable to recover it.
00:26:09.395  [2024-12-09 04:16:37.643894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.395  [2024-12-09 04:16:37.643920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.395  qpair failed and we were unable to recover it.
00:26:09.395  [2024-12-09 04:16:37.644003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.395  [2024-12-09 04:16:37.644031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.395  qpair failed and we were unable to recover it.
00:26:09.395  [2024-12-09 04:16:37.644126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.395  [2024-12-09 04:16:37.644157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.395  qpair failed and we were unable to recover it.
00:26:09.395  [2024-12-09 04:16:37.644250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.395  [2024-12-09 04:16:37.644285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.395  qpair failed and we were unable to recover it.
00:26:09.395  [2024-12-09 04:16:37.644405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.395  [2024-12-09 04:16:37.644431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.395  qpair failed and we were unable to recover it.
00:26:09.395  [2024-12-09 04:16:37.644542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.395  [2024-12-09 04:16:37.644568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.395  qpair failed and we were unable to recover it.
00:26:09.395  [2024-12-09 04:16:37.644650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.395  [2024-12-09 04:16:37.644677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.395  qpair failed and we were unable to recover it.
00:26:09.395  [2024-12-09 04:16:37.644766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.395  [2024-12-09 04:16:37.644792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.395  qpair failed and we were unable to recover it.
00:26:09.395  [2024-12-09 04:16:37.644902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.395  [2024-12-09 04:16:37.644928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.395  qpair failed and we were unable to recover it.
00:26:09.395  [2024-12-09 04:16:37.645042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.395  [2024-12-09 04:16:37.645068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.395  qpair failed and we were unable to recover it.
00:26:09.395  [2024-12-09 04:16:37.645068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:26:09.395  [2024-12-09 04:16:37.645101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:26:09.395  [2024-12-09 04:16:37.645127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:26:09.395  [2024-12-09 04:16:37.645130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:26:09.395  [2024-12-09 04:16:37.645168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.395  [2024-12-09 04:16:37.645200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.395  qpair failed and we were unable to recover it.
00:26:09.395  [2024-12-09 04:16:37.645303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.395  [2024-12-09 04:16:37.645330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.395  qpair failed and we were unable to recover it.
00:26:09.395  [2024-12-09 04:16:37.645447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.395  [2024-12-09 04:16:37.645477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.395  qpair failed and we were unable to recover it.
00:26:09.395  [2024-12-09 04:16:37.645568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.395  [2024-12-09 04:16:37.645596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.395  qpair failed and we were unable to recover it.
00:26:09.395  [2024-12-09 04:16:37.645691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.395  [2024-12-09 04:16:37.645717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.395  qpair failed and we were unable to recover it.
00:26:09.395  [2024-12-09 04:16:37.645801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.395  [2024-12-09 04:16:37.645827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.395  qpair failed and we were unable to recover it.
00:26:09.395  [2024-12-09 04:16:37.645943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.395  [2024-12-09 04:16:37.645971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.395  qpair failed and we were unable to recover it.
00:26:09.395  [2024-12-09 04:16:37.646085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.395  [2024-12-09 04:16:37.646114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.395  qpair failed and we were unable to recover it.
00:26:09.395  [2024-12-09 04:16:37.646267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.395  [2024-12-09 04:16:37.646317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.395  qpair failed and we were unable to recover it.
00:26:09.395  [2024-12-09 04:16:37.646413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.395  [2024-12-09 04:16:37.646440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.395  qpair failed and we were unable to recover it.
00:26:09.395  [2024-12-09 04:16:37.646534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.395  [2024-12-09 04:16:37.646561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.395  qpair failed and we were unable to recover it.
00:26:09.395  [2024-12-09 04:16:37.646646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.396  [2024-12-09 04:16:37.646674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.396  qpair failed and we were unable to recover it.
00:26:09.396  [2024-12-09 04:16:37.646764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.396  [2024-12-09 04:16:37.646792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.396  qpair failed and we were unable to recover it.
00:26:09.396  [2024-12-09 04:16:37.646885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.396  [2024-12-09 04:16:37.646913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.396  qpair failed and we were unable to recover it.
00:26:09.396  [2024-12-09 04:16:37.647022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.396  [2024-12-09 04:16:37.647051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.396  qpair failed and we were unable to recover it.
00:26:09.396  [2024-12-09 04:16:37.647133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.396  [2024-12-09 04:16:37.647159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.396  qpair failed and we were unable to recover it.
00:26:09.396  [2024-12-09 04:16:37.647235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.396  [2024-12-09 04:16:37.647261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.396  qpair failed and we were unable to recover it.
00:26:09.396  [2024-12-09 04:16:37.647360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.396  [2024-12-09 04:16:37.647387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.396  qpair failed and we were unable to recover it.
00:26:09.396  [2024-12-09 04:16:37.647509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.396  [2024-12-09 04:16:37.647537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.396  qpair failed and we were unable to recover it.
00:26:09.396  [2024-12-09 04:16:37.647679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.396  [2024-12-09 04:16:37.647706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.396  qpair failed and we were unable to recover it.
00:26:09.396  [2024-12-09 04:16:37.647786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.396  [2024-12-09 04:16:37.647813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.396  qpair failed and we were unable to recover it.
00:26:09.396  [2024-12-09 04:16:37.647921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.396  [2024-12-09 04:16:37.647948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.396  qpair failed and we were unable to recover it.
00:26:09.396  [2024-12-09 04:16:37.648029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.396  [2024-12-09 04:16:37.648057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.396  qpair failed and we were unable to recover it.
00:26:09.396  [2024-12-09 04:16:37.648150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.396  [2024-12-09 04:16:37.648191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.396  qpair failed and we were unable to recover it.
00:26:09.396  [2024-12-09 04:16:37.648308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.396  [2024-12-09 04:16:37.648337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.396  qpair failed and we were unable to recover it.
00:26:09.396  [2024-12-09 04:16:37.648416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.396  [2024-12-09 04:16:37.648442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.396  qpair failed and we were unable to recover it.
00:26:09.396  [2024-12-09 04:16:37.648558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.396  [2024-12-09 04:16:37.648587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.396  qpair failed and we were unable to recover it.
00:26:09.396  [2024-12-09 04:16:37.648708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.396  [2024-12-09 04:16:37.648736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.396  qpair failed and we were unable to recover it.
00:26:09.396  [2024-12-09 04:16:37.648821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.396  [2024-12-09 04:16:37.648847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.396  qpair failed and we were unable to recover it.
00:26:09.396  [2024-12-09 04:16:37.648931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.396  [2024-12-09 04:16:37.648957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.396  qpair failed and we were unable to recover it.
00:26:09.396  [2024-12-09 04:16:37.649069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.396  [2024-12-09 04:16:37.649096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.396  qpair failed and we were unable to recover it.
00:26:09.396  [2024-12-09 04:16:37.649181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.396  [2024-12-09 04:16:37.649207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.396  qpair failed and we were unable to recover it.
00:26:09.396  [2024-12-09 04:16:37.649305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.396  [2024-12-09 04:16:37.649334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.396  qpair failed and we were unable to recover it.
00:26:09.396  [2024-12-09 04:16:37.649423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.396  [2024-12-09 04:16:37.649452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.396  qpair failed and we were unable to recover it.
00:26:09.396  [2024-12-09 04:16:37.649538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.396  [2024-12-09 04:16:37.649565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.396  qpair failed and we were unable to recover it.
00:26:09.396  [2024-12-09 04:16:37.649647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.396  [2024-12-09 04:16:37.649675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.396  qpair failed and we were unable to recover it.
00:26:09.396  [2024-12-09 04:16:37.649769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.396  [2024-12-09 04:16:37.649796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.396  qpair failed and we were unable to recover it.
00:26:09.396  [2024-12-09 04:16:37.649889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.396  [2024-12-09 04:16:37.649915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.396  qpair failed and we were unable to recover it.
00:26:09.396  [2024-12-09 04:16:37.650009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.396  [2024-12-09 04:16:37.650038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.396  qpair failed and we were unable to recover it.
00:26:09.396  [2024-12-09 04:16:37.650164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.396  [2024-12-09 04:16:37.650197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.396  qpair failed and we were unable to recover it.
00:26:09.396  [2024-12-09 04:16:37.650283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.396  [2024-12-09 04:16:37.650311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.396  qpair failed and we were unable to recover it.
00:26:09.396  [2024-12-09 04:16:37.650426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.396  [2024-12-09 04:16:37.650452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.396  qpair failed and we were unable to recover it.
00:26:09.396  [2024-12-09 04:16:37.650561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.396  [2024-12-09 04:16:37.650587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.396  qpair failed and we were unable to recover it.
00:26:09.396  [2024-12-09 04:16:37.650664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.396  [2024-12-09 04:16:37.650691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.396  qpair failed and we were unable to recover it.
00:26:09.396  [2024-12-09 04:16:37.650778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.396  [2024-12-09 04:16:37.650806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.396  qpair failed and we were unable to recover it.
00:26:09.396  [2024-12-09 04:16:37.650902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.396  [2024-12-09 04:16:37.650931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.396  qpair failed and we were unable to recover it.
00:26:09.396  [2024-12-09 04:16:37.651040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.396  [2024-12-09 04:16:37.651080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.396  qpair failed and we were unable to recover it.
00:26:09.396  [2024-12-09 04:16:37.651170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.396  [2024-12-09 04:16:37.651198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.396  qpair failed and we were unable to recover it.
00:26:09.396  [2024-12-09 04:16:37.651283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.396  [2024-12-09 04:16:37.651311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.396  qpair failed and we were unable to recover it.
00:26:09.396  [2024-12-09 04:16:37.651392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.397  [2024-12-09 04:16:37.651418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.397  qpair failed and we were unable to recover it.
00:26:09.397  [2024-12-09 04:16:37.651494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.397  [2024-12-09 04:16:37.651520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.397  qpair failed and we were unable to recover it.
00:26:09.397  [2024-12-09 04:16:37.651615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.397  [2024-12-09 04:16:37.651644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.397  qpair failed and we were unable to recover it.
00:26:09.397  [2024-12-09 04:16:37.651734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.397  [2024-12-09 04:16:37.651764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.397  qpair failed and we were unable to recover it.
00:26:09.397  [2024-12-09 04:16:37.651887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.397  [2024-12-09 04:16:37.651915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.397  qpair failed and we were unable to recover it.
00:26:09.397  [2024-12-09 04:16:37.651992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.397  [2024-12-09 04:16:37.652018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.397  qpair failed and we were unable to recover it.
00:26:09.397  [2024-12-09 04:16:37.652096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.397  [2024-12-09 04:16:37.652122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.397  qpair failed and we were unable to recover it.
00:26:09.397  [2024-12-09 04:16:37.652199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.397  [2024-12-09 04:16:37.652225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.397  qpair failed and we were unable to recover it.
00:26:09.397  [2024-12-09 04:16:37.652311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.397  [2024-12-09 04:16:37.652337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.397  qpair failed and we were unable to recover it.
00:26:09.397  [2024-12-09 04:16:37.652417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.397  [2024-12-09 04:16:37.652443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.397  qpair failed and we were unable to recover it.
00:26:09.397  [2024-12-09 04:16:37.652547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.397  [2024-12-09 04:16:37.652573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.397  qpair failed and we were unable to recover it.
00:26:09.397  [2024-12-09 04:16:37.652651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.397  [2024-12-09 04:16:37.652677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.397  qpair failed and we were unable to recover it.
00:26:09.397  [2024-12-09 04:16:37.652762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.397  [2024-12-09 04:16:37.652788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.397  qpair failed and we were unable to recover it.
00:26:09.397  [2024-12-09 04:16:37.652906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.397  [2024-12-09 04:16:37.652936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.397  qpair failed and we were unable to recover it.
00:26:09.397  [2024-12-09 04:16:37.653014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.397  [2024-12-09 04:16:37.653041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.397  qpair failed and we were unable to recover it.
00:26:09.397  [2024-12-09 04:16:37.653126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.397  [2024-12-09 04:16:37.653154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.397  qpair failed and we were unable to recover it.
00:26:09.397  [2024-12-09 04:16:37.653238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.397  [2024-12-09 04:16:37.653266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.397  qpair failed and we were unable to recover it.
00:26:09.397  [2024-12-09 04:16:37.653355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.397  [2024-12-09 04:16:37.653387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.397  qpair failed and we were unable to recover it.
00:26:09.397  [2024-12-09 04:16:37.653469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.397  [2024-12-09 04:16:37.653497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.397  qpair failed and we were unable to recover it.
00:26:09.397  [2024-12-09 04:16:37.653575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.397  [2024-12-09 04:16:37.653602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.397  qpair failed and we were unable to recover it.
00:26:09.397  [2024-12-09 04:16:37.653688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.397  [2024-12-09 04:16:37.653715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.397  qpair failed and we were unable to recover it.
00:26:09.397  [2024-12-09 04:16:37.653806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.397  [2024-12-09 04:16:37.653833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.397  qpair failed and we were unable to recover it.
00:26:09.397  [2024-12-09 04:16:37.653920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.397  [2024-12-09 04:16:37.653949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.397  qpair failed and we were unable to recover it.
00:26:09.397  [2024-12-09 04:16:37.654027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.397  [2024-12-09 04:16:37.654054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.397  qpair failed and we were unable to recover it.
00:26:09.397  [2024-12-09 04:16:37.654134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.397  [2024-12-09 04:16:37.654161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.397  qpair failed and we were unable to recover it.
00:26:09.397  [2024-12-09 04:16:37.654246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.397  [2024-12-09 04:16:37.654278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.397  qpair failed and we were unable to recover it.
00:26:09.397  [2024-12-09 04:16:37.654371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.397  [2024-12-09 04:16:37.654397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.397  qpair failed and we were unable to recover it.
00:26:09.397  [2024-12-09 04:16:37.654511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.397  [2024-12-09 04:16:37.654538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.397  qpair failed and we were unable to recover it.
00:26:09.397  [2024-12-09 04:16:37.654650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.397  [2024-12-09 04:16:37.654678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.397  qpair failed and we were unable to recover it.
00:26:09.397-00:26:09.400  [... the same three-line error sequence repeats ~100 more times between 04:16:37.654759 and 04:16:37.667689: posix.c:1054:posix_sock_create connect() failed with errno = 111, then nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error to addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.", cycling through tqpair pointers 0x1818fa0, 0x7fe5b4000b90, 0x7fe5b8000b90, and 0x7fe5c0000b90 ...]
00:26:09.400  qpair failed and we were unable to recover it.
00:26:09.400  [2024-12-09 04:16:37.667799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.400  [2024-12-09 04:16:37.667826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.400  qpair failed and we were unable to recover it.
00:26:09.400  [2024-12-09 04:16:37.667937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.400  [2024-12-09 04:16:37.667964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.400  qpair failed and we were unable to recover it.
00:26:09.400  [2024-12-09 04:16:37.668049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.400  [2024-12-09 04:16:37.668077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.400  qpair failed and we were unable to recover it.
00:26:09.400  [2024-12-09 04:16:37.668161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.400  [2024-12-09 04:16:37.668188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.400  qpair failed and we were unable to recover it.
00:26:09.400  [2024-12-09 04:16:37.668306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.400  [2024-12-09 04:16:37.668335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.400  qpair failed and we were unable to recover it.
00:26:09.400  [2024-12-09 04:16:37.668432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.400  [2024-12-09 04:16:37.668459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.400  qpair failed and we were unable to recover it.
00:26:09.400  [2024-12-09 04:16:37.668559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.400  [2024-12-09 04:16:37.668599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.400  qpair failed and we were unable to recover it.
00:26:09.400  [2024-12-09 04:16:37.668720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.400  [2024-12-09 04:16:37.668749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.400  qpair failed and we were unable to recover it.
00:26:09.400  [2024-12-09 04:16:37.668833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.400  [2024-12-09 04:16:37.668860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.400  qpair failed and we were unable to recover it.
00:26:09.400  [2024-12-09 04:16:37.668936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.400  [2024-12-09 04:16:37.668962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.400  qpair failed and we were unable to recover it.
00:26:09.400  [2024-12-09 04:16:37.669051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.400  [2024-12-09 04:16:37.669077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.400  qpair failed and we were unable to recover it.
00:26:09.400  [2024-12-09 04:16:37.669186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.400  [2024-12-09 04:16:37.669213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.400  qpair failed and we were unable to recover it.
00:26:09.400  [2024-12-09 04:16:37.669328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.400  [2024-12-09 04:16:37.669357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.400  qpair failed and we were unable to recover it.
00:26:09.400  [2024-12-09 04:16:37.669447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.400  [2024-12-09 04:16:37.669474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.400  qpair failed and we were unable to recover it.
00:26:09.400  [2024-12-09 04:16:37.669555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.400  [2024-12-09 04:16:37.669582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.400  qpair failed and we were unable to recover it.
00:26:09.400  [2024-12-09 04:16:37.669693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.401  [2024-12-09 04:16:37.669720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.401  qpair failed and we were unable to recover it.
00:26:09.401  [2024-12-09 04:16:37.669800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.401  [2024-12-09 04:16:37.669827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.401  qpair failed and we were unable to recover it.
00:26:09.401  [2024-12-09 04:16:37.669954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.401  [2024-12-09 04:16:37.669994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.401  qpair failed and we were unable to recover it.
00:26:09.401  [2024-12-09 04:16:37.670084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.401  [2024-12-09 04:16:37.670113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.401  qpair failed and we were unable to recover it.
00:26:09.401  [2024-12-09 04:16:37.670219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.401  [2024-12-09 04:16:37.670265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.401  qpair failed and we were unable to recover it.
00:26:09.401  [2024-12-09 04:16:37.670392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.401  [2024-12-09 04:16:37.670420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.401  qpair failed and we were unable to recover it.
00:26:09.401  [2024-12-09 04:16:37.670504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.401  [2024-12-09 04:16:37.670531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.401  qpair failed and we were unable to recover it.
00:26:09.401  [2024-12-09 04:16:37.670649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.401  [2024-12-09 04:16:37.670675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.401  qpair failed and we were unable to recover it.
00:26:09.401  [2024-12-09 04:16:37.670771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.401  [2024-12-09 04:16:37.670797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.401  qpair failed and we were unable to recover it.
00:26:09.401  [2024-12-09 04:16:37.670903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.401  [2024-12-09 04:16:37.670930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.401  qpair failed and we were unable to recover it.
00:26:09.401  [2024-12-09 04:16:37.671007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.401  [2024-12-09 04:16:37.671033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.401  qpair failed and we were unable to recover it.
00:26:09.401  [2024-12-09 04:16:37.671121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.401  [2024-12-09 04:16:37.671148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.401  qpair failed and we were unable to recover it.
00:26:09.401  [2024-12-09 04:16:37.671233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.401  [2024-12-09 04:16:37.671260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.401  qpair failed and we were unable to recover it.
00:26:09.401  [2024-12-09 04:16:37.671341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.401  [2024-12-09 04:16:37.671368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.401  qpair failed and we were unable to recover it.
00:26:09.401  [2024-12-09 04:16:37.671483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.401  [2024-12-09 04:16:37.671510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.401  qpair failed and we were unable to recover it.
00:26:09.401  [2024-12-09 04:16:37.671586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.401  [2024-12-09 04:16:37.671613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.401  qpair failed and we were unable to recover it.
00:26:09.401  [2024-12-09 04:16:37.671703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.401  [2024-12-09 04:16:37.671729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.401  qpair failed and we were unable to recover it.
00:26:09.401  [2024-12-09 04:16:37.671808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.401  [2024-12-09 04:16:37.671834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.401  qpair failed and we were unable to recover it.
00:26:09.401  [2024-12-09 04:16:37.671963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.401  [2024-12-09 04:16:37.672005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.401  qpair failed and we were unable to recover it.
00:26:09.401  [2024-12-09 04:16:37.672118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.401  [2024-12-09 04:16:37.672158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.401  qpair failed and we were unable to recover it.
00:26:09.401  [2024-12-09 04:16:37.672287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.401  [2024-12-09 04:16:37.672321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.401  qpair failed and we were unable to recover it.
00:26:09.401  [2024-12-09 04:16:37.672441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.401  [2024-12-09 04:16:37.672468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.401  qpair failed and we were unable to recover it.
00:26:09.401  [2024-12-09 04:16:37.672581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.401  [2024-12-09 04:16:37.672608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.401  qpair failed and we were unable to recover it.
00:26:09.401  [2024-12-09 04:16:37.672688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.401  [2024-12-09 04:16:37.672715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.401  qpair failed and we were unable to recover it.
00:26:09.401  [2024-12-09 04:16:37.672822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.401  [2024-12-09 04:16:37.672849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.401  qpair failed and we were unable to recover it.
00:26:09.401  [2024-12-09 04:16:37.672933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.401  [2024-12-09 04:16:37.672960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.401  qpair failed and we were unable to recover it.
00:26:09.401  [2024-12-09 04:16:37.673083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.401  [2024-12-09 04:16:37.673113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.401  qpair failed and we were unable to recover it.
00:26:09.401  [2024-12-09 04:16:37.673223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.401  [2024-12-09 04:16:37.673250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.401  qpair failed and we were unable to recover it.
00:26:09.401  [2024-12-09 04:16:37.673355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.401  [2024-12-09 04:16:37.673382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.401  qpair failed and we were unable to recover it.
00:26:09.401  [2024-12-09 04:16:37.673464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.401  [2024-12-09 04:16:37.673490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.401  qpair failed and we were unable to recover it.
00:26:09.401  [2024-12-09 04:16:37.673564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.401  [2024-12-09 04:16:37.673591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.401  qpair failed and we were unable to recover it.
00:26:09.401  [2024-12-09 04:16:37.673679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.401  [2024-12-09 04:16:37.673711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.401  qpair failed and we were unable to recover it.
00:26:09.401  [2024-12-09 04:16:37.673803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.401  [2024-12-09 04:16:37.673830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.401  qpair failed and we were unable to recover it.
00:26:09.401  [2024-12-09 04:16:37.673934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.401  [2024-12-09 04:16:37.673960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.401  qpair failed and we were unable to recover it.
00:26:09.401  [2024-12-09 04:16:37.674037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.401  [2024-12-09 04:16:37.674064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.401  qpair failed and we were unable to recover it.
00:26:09.401  [2024-12-09 04:16:37.674145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.401  [2024-12-09 04:16:37.674171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.401  qpair failed and we were unable to recover it.
00:26:09.401  [2024-12-09 04:16:37.674253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.401  [2024-12-09 04:16:37.674294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.401  qpair failed and we were unable to recover it.
00:26:09.401  [2024-12-09 04:16:37.674394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.401  [2024-12-09 04:16:37.674434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.401  qpair failed and we were unable to recover it.
00:26:09.401  [2024-12-09 04:16:37.674526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.402  [2024-12-09 04:16:37.674554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.402  qpair failed and we were unable to recover it.
00:26:09.402  [2024-12-09 04:16:37.674638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.402  [2024-12-09 04:16:37.674665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.402  qpair failed and we were unable to recover it.
00:26:09.402  [2024-12-09 04:16:37.674743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.402  [2024-12-09 04:16:37.674769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.402  qpair failed and we were unable to recover it.
00:26:09.402  [2024-12-09 04:16:37.674888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.402  [2024-12-09 04:16:37.674917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.402  qpair failed and we were unable to recover it.
00:26:09.402  [2024-12-09 04:16:37.674996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.402  [2024-12-09 04:16:37.675024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.402  qpair failed and we were unable to recover it.
00:26:09.402  [2024-12-09 04:16:37.675156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.402  [2024-12-09 04:16:37.675196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.402  qpair failed and we were unable to recover it.
00:26:09.402  [2024-12-09 04:16:37.675289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.402  [2024-12-09 04:16:37.675318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.402  qpair failed and we were unable to recover it.
00:26:09.402  [2024-12-09 04:16:37.675425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.402  [2024-12-09 04:16:37.675453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.402  qpair failed and we were unable to recover it.
00:26:09.402  [2024-12-09 04:16:37.675565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.402  [2024-12-09 04:16:37.675592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.402  qpair failed and we were unable to recover it.
00:26:09.402  [2024-12-09 04:16:37.675675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.402  [2024-12-09 04:16:37.675701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.402  qpair failed and we were unable to recover it.
00:26:09.402  [2024-12-09 04:16:37.675785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.402  [2024-12-09 04:16:37.675811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.402  qpair failed and we were unable to recover it.
00:26:09.402  [2024-12-09 04:16:37.675896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.402  [2024-12-09 04:16:37.675923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.402  qpair failed and we were unable to recover it.
00:26:09.402  [2024-12-09 04:16:37.676008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.402  [2024-12-09 04:16:37.676037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.402  qpair failed and we were unable to recover it.
00:26:09.402  [2024-12-09 04:16:37.676124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.402  [2024-12-09 04:16:37.676153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.402  qpair failed and we were unable to recover it.
00:26:09.402  [2024-12-09 04:16:37.676231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.402  [2024-12-09 04:16:37.676259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.402  qpair failed and we were unable to recover it.
00:26:09.402  [2024-12-09 04:16:37.676361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.402  [2024-12-09 04:16:37.676387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.402  qpair failed and we were unable to recover it.
00:26:09.402  [2024-12-09 04:16:37.676466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.402  [2024-12-09 04:16:37.676492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.402  qpair failed and we were unable to recover it.
00:26:09.402  [2024-12-09 04:16:37.676571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.402  [2024-12-09 04:16:37.676597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.402  qpair failed and we were unable to recover it.
00:26:09.402  [2024-12-09 04:16:37.676706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.402  [2024-12-09 04:16:37.676732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.402  qpair failed and we were unable to recover it.
00:26:09.402  [2024-12-09 04:16:37.676827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.402  [2024-12-09 04:16:37.676853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.402  qpair failed and we were unable to recover it.
00:26:09.402  [2024-12-09 04:16:37.676939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.402  [2024-12-09 04:16:37.676972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.402  qpair failed and we were unable to recover it.
00:26:09.402  [2024-12-09 04:16:37.677095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.402  [2024-12-09 04:16:37.677125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.402  qpair failed and we were unable to recover it.
00:26:09.402  [2024-12-09 04:16:37.677218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.402  [2024-12-09 04:16:37.677258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.402  qpair failed and we were unable to recover it.
00:26:09.402  [2024-12-09 04:16:37.677385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.402  [2024-12-09 04:16:37.677413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.402  qpair failed and we were unable to recover it.
00:26:09.402  [2024-12-09 04:16:37.677500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.402  [2024-12-09 04:16:37.677527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.402  qpair failed and we were unable to recover it.
00:26:09.402  [2024-12-09 04:16:37.677613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.402  [2024-12-09 04:16:37.677639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.402  qpair failed and we were unable to recover it.
00:26:09.402  [2024-12-09 04:16:37.677722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.402  [2024-12-09 04:16:37.677749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.402  qpair failed and we were unable to recover it.
00:26:09.402  [2024-12-09 04:16:37.677825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.402  [2024-12-09 04:16:37.677851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.402  qpair failed and we were unable to recover it.
00:26:09.402  [2024-12-09 04:16:37.677926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.402  [2024-12-09 04:16:37.677953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.402  qpair failed and we were unable to recover it.
00:26:09.402  [2024-12-09 04:16:37.678061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.402  [2024-12-09 04:16:37.678087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.402  qpair failed and we were unable to recover it.
00:26:09.402  [2024-12-09 04:16:37.678172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.402  [2024-12-09 04:16:37.678200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.402  qpair failed and we were unable to recover it.
00:26:09.402  [2024-12-09 04:16:37.678296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.402  [2024-12-09 04:16:37.678323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.402  qpair failed and we were unable to recover it.
00:26:09.405  [... the same posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock "sock connection error" / "qpair failed and we were unable to recover it" sequence repeats continuously for tqpairs 0x1818fa0, 0x7fe5b4000b90, 0x7fe5b8000b90, and 0x7fe5c0000b90 (addr=10.0.0.2, port=4420) ...]
00:26:09.405  qpair failed and we were unable to recover it.
00:26:09.405  [2024-12-09 04:16:37.690741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.405  [2024-12-09 04:16:37.690768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.405  qpair failed and we were unable to recover it.
00:26:09.405  [2024-12-09 04:16:37.690850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.405  [2024-12-09 04:16:37.690878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.405  qpair failed and we were unable to recover it.
00:26:09.405  [2024-12-09 04:16:37.690953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.405  [2024-12-09 04:16:37.690979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.405  qpair failed and we were unable to recover it.
00:26:09.405  [2024-12-09 04:16:37.691102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.405  [2024-12-09 04:16:37.691143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.405  qpair failed and we were unable to recover it.
00:26:09.405  [2024-12-09 04:16:37.691230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.405  [2024-12-09 04:16:37.691258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.405  qpair failed and we were unable to recover it.
00:26:09.405  [2024-12-09 04:16:37.691361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.405  [2024-12-09 04:16:37.691402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.405  qpair failed and we were unable to recover it.
00:26:09.405  [2024-12-09 04:16:37.691495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.405  [2024-12-09 04:16:37.691523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.405  qpair failed and we were unable to recover it.
00:26:09.405  [2024-12-09 04:16:37.691606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.405  [2024-12-09 04:16:37.691634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.405  qpair failed and we were unable to recover it.
00:26:09.405  [2024-12-09 04:16:37.691715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.405  [2024-12-09 04:16:37.691743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.405  qpair failed and we were unable to recover it.
00:26:09.405  [2024-12-09 04:16:37.691825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.405  [2024-12-09 04:16:37.691854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.405  qpair failed and we were unable to recover it.
00:26:09.405  [2024-12-09 04:16:37.691996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.405  [2024-12-09 04:16:37.692025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.405  qpair failed and we were unable to recover it.
00:26:09.405  [2024-12-09 04:16:37.692119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.405  [2024-12-09 04:16:37.692148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.405  qpair failed and we were unable to recover it.
00:26:09.405  [2024-12-09 04:16:37.692262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.405  [2024-12-09 04:16:37.692303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.405  qpair failed and we were unable to recover it.
00:26:09.405  [2024-12-09 04:16:37.692383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.405  [2024-12-09 04:16:37.692410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.405  qpair failed and we were unable to recover it.
00:26:09.405  [2024-12-09 04:16:37.692499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.405  [2024-12-09 04:16:37.692525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.405  qpair failed and we were unable to recover it.
00:26:09.405  [2024-12-09 04:16:37.692635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.405  [2024-12-09 04:16:37.692662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.406  qpair failed and we were unable to recover it.
00:26:09.406  [2024-12-09 04:16:37.692741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.406  [2024-12-09 04:16:37.692768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.406  qpair failed and we were unable to recover it.
00:26:09.406  [2024-12-09 04:16:37.692877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.406  [2024-12-09 04:16:37.692903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.406  qpair failed and we were unable to recover it.
00:26:09.406  [2024-12-09 04:16:37.692993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.406  [2024-12-09 04:16:37.693022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.406  qpair failed and we were unable to recover it.
00:26:09.406  [2024-12-09 04:16:37.693099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.406  [2024-12-09 04:16:37.693128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.406  qpair failed and we were unable to recover it.
00:26:09.406  [2024-12-09 04:16:37.693212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.406  [2024-12-09 04:16:37.693239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.406  qpair failed and we were unable to recover it.
00:26:09.406  [2024-12-09 04:16:37.693338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.406  [2024-12-09 04:16:37.693365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.406  qpair failed and we were unable to recover it.
00:26:09.406  [2024-12-09 04:16:37.693447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.406  [2024-12-09 04:16:37.693474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.406  qpair failed and we were unable to recover it.
00:26:09.406  [2024-12-09 04:16:37.693562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.406  [2024-12-09 04:16:37.693590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.406  qpair failed and we were unable to recover it.
00:26:09.406  [2024-12-09 04:16:37.693701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.406  [2024-12-09 04:16:37.693727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.406  qpair failed and we were unable to recover it.
00:26:09.406  [2024-12-09 04:16:37.693817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.406  [2024-12-09 04:16:37.693846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.406  qpair failed and we were unable to recover it.
00:26:09.406  [2024-12-09 04:16:37.693929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.406  [2024-12-09 04:16:37.693957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.406  qpair failed and we were unable to recover it.
00:26:09.406  [2024-12-09 04:16:37.694042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.406  [2024-12-09 04:16:37.694070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.406  qpair failed and we were unable to recover it.
00:26:09.406  [2024-12-09 04:16:37.694157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.406  [2024-12-09 04:16:37.694184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.406  qpair failed and we were unable to recover it.
00:26:09.406  [2024-12-09 04:16:37.694298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.406  [2024-12-09 04:16:37.694325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.406  qpair failed and we were unable to recover it.
00:26:09.406  [2024-12-09 04:16:37.694408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.406  [2024-12-09 04:16:37.694436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.406  qpair failed and we were unable to recover it.
00:26:09.406  [2024-12-09 04:16:37.694519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.406  [2024-12-09 04:16:37.694546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.406  qpair failed and we were unable to recover it.
00:26:09.406  [2024-12-09 04:16:37.694620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.406  [2024-12-09 04:16:37.694647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.406  qpair failed and we were unable to recover it.
00:26:09.406  [2024-12-09 04:16:37.694734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.406  [2024-12-09 04:16:37.694760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.406  qpair failed and we were unable to recover it.
00:26:09.406  [2024-12-09 04:16:37.694844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.406  [2024-12-09 04:16:37.694870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.406  qpair failed and we were unable to recover it.
00:26:09.406  [2024-12-09 04:16:37.694987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.406  [2024-12-09 04:16:37.695016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.406  qpair failed and we were unable to recover it.
00:26:09.406  [2024-12-09 04:16:37.695162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.406  [2024-12-09 04:16:37.695190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.406  qpair failed and we were unable to recover it.
00:26:09.406  [2024-12-09 04:16:37.695287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.406  [2024-12-09 04:16:37.695317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.406  qpair failed and we were unable to recover it.
00:26:09.406  [2024-12-09 04:16:37.695408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.406  [2024-12-09 04:16:37.695434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.406  qpair failed and we were unable to recover it.
00:26:09.406  [2024-12-09 04:16:37.695516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.406  [2024-12-09 04:16:37.695542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.406  qpair failed and we were unable to recover it.
00:26:09.406  [2024-12-09 04:16:37.695623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.406  [2024-12-09 04:16:37.695653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.406  qpair failed and we were unable to recover it.
00:26:09.406  [2024-12-09 04:16:37.695743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.406  [2024-12-09 04:16:37.695769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.406  qpair failed and we were unable to recover it.
00:26:09.406  [2024-12-09 04:16:37.695889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.406  [2024-12-09 04:16:37.695919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.406  qpair failed and we were unable to recover it.
00:26:09.406  [2024-12-09 04:16:37.696010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.406  [2024-12-09 04:16:37.696037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.406  qpair failed and we were unable to recover it.
00:26:09.406  [2024-12-09 04:16:37.696177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.406  [2024-12-09 04:16:37.696204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.406  qpair failed and we were unable to recover it.
00:26:09.406  [2024-12-09 04:16:37.696312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.406  [2024-12-09 04:16:37.696340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.406  qpair failed and we were unable to recover it.
00:26:09.406  [2024-12-09 04:16:37.696425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.406  [2024-12-09 04:16:37.696452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.406  qpair failed and we were unable to recover it.
00:26:09.406  [2024-12-09 04:16:37.696559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.406  [2024-12-09 04:16:37.696586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.406  qpair failed and we were unable to recover it.
00:26:09.406  [2024-12-09 04:16:37.696672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.406  [2024-12-09 04:16:37.696699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.406  qpair failed and we were unable to recover it.
00:26:09.406  [2024-12-09 04:16:37.696775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.406  [2024-12-09 04:16:37.696806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.406  qpair failed and we were unable to recover it.
00:26:09.406  [2024-12-09 04:16:37.696882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.406  [2024-12-09 04:16:37.696908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.406  qpair failed and we were unable to recover it.
00:26:09.406  [2024-12-09 04:16:37.697023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.406  [2024-12-09 04:16:37.697049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.406  qpair failed and we were unable to recover it.
00:26:09.406  [2024-12-09 04:16:37.697147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.406  [2024-12-09 04:16:37.697188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.406  qpair failed and we were unable to recover it.
00:26:09.406  [2024-12-09 04:16:37.697268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.406  [2024-12-09 04:16:37.697305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.407  qpair failed and we were unable to recover it.
00:26:09.407  [2024-12-09 04:16:37.697383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.407  [2024-12-09 04:16:37.697410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.407  qpair failed and we were unable to recover it.
00:26:09.407  [2024-12-09 04:16:37.697501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.407  [2024-12-09 04:16:37.697528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.407  qpair failed and we were unable to recover it.
00:26:09.407  [2024-12-09 04:16:37.697613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.407  [2024-12-09 04:16:37.697640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.407  qpair failed and we were unable to recover it.
00:26:09.407  [2024-12-09 04:16:37.697735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.407  [2024-12-09 04:16:37.697764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.407  qpair failed and we were unable to recover it.
00:26:09.407  [2024-12-09 04:16:37.697881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.407  [2024-12-09 04:16:37.697907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.407  qpair failed and we were unable to recover it.
00:26:09.407  [2024-12-09 04:16:37.698019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.407  [2024-12-09 04:16:37.698047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.407  qpair failed and we were unable to recover it.
00:26:09.407  [2024-12-09 04:16:37.698132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.407  [2024-12-09 04:16:37.698159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.407  qpair failed and we were unable to recover it.
00:26:09.407  [2024-12-09 04:16:37.698237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.407  [2024-12-09 04:16:37.698263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.407  qpair failed and we were unable to recover it.
00:26:09.407  [2024-12-09 04:16:37.698365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.407  [2024-12-09 04:16:37.698395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.407  qpair failed and we were unable to recover it.
00:26:09.407  [2024-12-09 04:16:37.698481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.407  [2024-12-09 04:16:37.698508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.407  qpair failed and we were unable to recover it.
00:26:09.407  [2024-12-09 04:16:37.698583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.407  [2024-12-09 04:16:37.698609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.407  qpair failed and we were unable to recover it.
00:26:09.407  [2024-12-09 04:16:37.698696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.407  [2024-12-09 04:16:37.698722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.407  qpair failed and we were unable to recover it.
00:26:09.407  [2024-12-09 04:16:37.698841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.407  [2024-12-09 04:16:37.698869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.407  qpair failed and we were unable to recover it.
00:26:09.407  [2024-12-09 04:16:37.698958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.407  [2024-12-09 04:16:37.698987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.407  qpair failed and we were unable to recover it.
00:26:09.407  [2024-12-09 04:16:37.699070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.407  [2024-12-09 04:16:37.699100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.407  qpair failed and we were unable to recover it.
00:26:09.407  [2024-12-09 04:16:37.699181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.407  [2024-12-09 04:16:37.699208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.407  qpair failed and we were unable to recover it.
00:26:09.407  [2024-12-09 04:16:37.699303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.407  [2024-12-09 04:16:37.699331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.407  qpair failed and we were unable to recover it.
00:26:09.407  [2024-12-09 04:16:37.699406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.407  [2024-12-09 04:16:37.699433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.407  qpair failed and we were unable to recover it.
00:26:09.407  [2024-12-09 04:16:37.699510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.407  [2024-12-09 04:16:37.699536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.407  qpair failed and we were unable to recover it.
00:26:09.407  [2024-12-09 04:16:37.699615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.407  [2024-12-09 04:16:37.699641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.407  qpair failed and we were unable to recover it.
00:26:09.407  [2024-12-09 04:16:37.699734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.407  [2024-12-09 04:16:37.699760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.407  qpair failed and we were unable to recover it.
00:26:09.407  [2024-12-09 04:16:37.699868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.407  [2024-12-09 04:16:37.699895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.407  qpair failed and we were unable to recover it.
00:26:09.407  [2024-12-09 04:16:37.699977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.407  [2024-12-09 04:16:37.700007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.407  qpair failed and we were unable to recover it.
00:26:09.407  [2024-12-09 04:16:37.700100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.407  [2024-12-09 04:16:37.700126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.407  qpair failed and we were unable to recover it.
00:26:09.407  [2024-12-09 04:16:37.700215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.407  [2024-12-09 04:16:37.700244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.407  qpair failed and we were unable to recover it.
00:26:09.407  [2024-12-09 04:16:37.700355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.407  [2024-12-09 04:16:37.700395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.407  qpair failed and we were unable to recover it.
00:26:09.407  [2024-12-09 04:16:37.700509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.407  [2024-12-09 04:16:37.700537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.407  qpair failed and we were unable to recover it.
00:26:09.407  [... the same connect() failure repeats ~100 times: errno = 111 (ECONNREFUSED) from posix_sock_create, followed by nvme_tcp_qpair_connect_sock reporting a sock connection error to addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it", cycling across tqpairs 0x7fe5c0000b90, 0x7fe5b4000b90, 0x7fe5b8000b90, and 0x1818fa0, from 04:16:37.700 through 04:16:37.713 ...]
00:26:09.410  [2024-12-09 04:16:37.713679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.410  qpair failed and we were unable to recover it.
00:26:09.410  [2024-12-09 04:16:37.713762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.410  [2024-12-09 04:16:37.713791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.410  qpair failed and we were unable to recover it.
00:26:09.410  [2024-12-09 04:16:37.713883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.410  [2024-12-09 04:16:37.713912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.410  qpair failed and we were unable to recover it.
00:26:09.410  [2024-12-09 04:16:37.714030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.410  [2024-12-09 04:16:37.714058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.410  qpair failed and we were unable to recover it.
00:26:09.410  [2024-12-09 04:16:37.714143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.410  [2024-12-09 04:16:37.714171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.410  qpair failed and we were unable to recover it.
00:26:09.410  [2024-12-09 04:16:37.714249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.410  [2024-12-09 04:16:37.714282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.410  qpair failed and we were unable to recover it.
00:26:09.410  [2024-12-09 04:16:37.714404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.410  [2024-12-09 04:16:37.714436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.410  qpair failed and we were unable to recover it.
00:26:09.410  [2024-12-09 04:16:37.714549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.410  [2024-12-09 04:16:37.714580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.410  qpair failed and we were unable to recover it.
00:26:09.410  [2024-12-09 04:16:37.714666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.410  [2024-12-09 04:16:37.714694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.410  qpair failed and we were unable to recover it.
00:26:09.410  [2024-12-09 04:16:37.714779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.410  [2024-12-09 04:16:37.714808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.410  qpair failed and we were unable to recover it.
00:26:09.410  [2024-12-09 04:16:37.714922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.410  [2024-12-09 04:16:37.714950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.410  qpair failed and we were unable to recover it.
00:26:09.410  [2024-12-09 04:16:37.715029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.410  [2024-12-09 04:16:37.715056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.410  qpair failed and we were unable to recover it.
00:26:09.410  [2024-12-09 04:16:37.715136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.410  [2024-12-09 04:16:37.715162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.410  qpair failed and we were unable to recover it.
00:26:09.410  [2024-12-09 04:16:37.715262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.410  [2024-12-09 04:16:37.715297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.410  qpair failed and we were unable to recover it.
00:26:09.410  [2024-12-09 04:16:37.715375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.410  [2024-12-09 04:16:37.715401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.410  qpair failed and we were unable to recover it.
00:26:09.410  [2024-12-09 04:16:37.715482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.410  [2024-12-09 04:16:37.715508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.410  qpair failed and we were unable to recover it.
00:26:09.410  [2024-12-09 04:16:37.715629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.410  [2024-12-09 04:16:37.715655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.410  qpair failed and we were unable to recover it.
00:26:09.410  [2024-12-09 04:16:37.715737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.411  [2024-12-09 04:16:37.715764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.411  qpair failed and we were unable to recover it.
00:26:09.411  [2024-12-09 04:16:37.715852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.411  [2024-12-09 04:16:37.715881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.411  qpair failed and we were unable to recover it.
00:26:09.411  [2024-12-09 04:16:37.715973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.411  [2024-12-09 04:16:37.716000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.411  qpair failed and we were unable to recover it.
00:26:09.411  [2024-12-09 04:16:37.716118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.411  [2024-12-09 04:16:37.716145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.411  qpair failed and we were unable to recover it.
00:26:09.411  [2024-12-09 04:16:37.716234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.411  [2024-12-09 04:16:37.716262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.411  qpair failed and we were unable to recover it.
00:26:09.411  [2024-12-09 04:16:37.716375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.411  [2024-12-09 04:16:37.716403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.411  qpair failed and we were unable to recover it.
00:26:09.411  [2024-12-09 04:16:37.716489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.411  [2024-12-09 04:16:37.716516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.411  qpair failed and we were unable to recover it.
00:26:09.411  [2024-12-09 04:16:37.716600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.411  [2024-12-09 04:16:37.716628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.411  qpair failed and we were unable to recover it.
00:26:09.411  [2024-12-09 04:16:37.716720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.411  [2024-12-09 04:16:37.716747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.411  qpair failed and we were unable to recover it.
00:26:09.411  [2024-12-09 04:16:37.716834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.411  [2024-12-09 04:16:37.716878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.411  qpair failed and we were unable to recover it.
00:26:09.411  [2024-12-09 04:16:37.716952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.411  [2024-12-09 04:16:37.716978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.411  qpair failed and we were unable to recover it.
00:26:09.411  [2024-12-09 04:16:37.717084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.411  [2024-12-09 04:16:37.717111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.411  qpair failed and we were unable to recover it.
00:26:09.411  [2024-12-09 04:16:37.717207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.411  [2024-12-09 04:16:37.717233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.411  qpair failed and we were unable to recover it.
00:26:09.411  [2024-12-09 04:16:37.717334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.411  [2024-12-09 04:16:37.717363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.411  qpair failed and we were unable to recover it.
00:26:09.411  [2024-12-09 04:16:37.717441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.411  [2024-12-09 04:16:37.717468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.411  qpair failed and we were unable to recover it.
00:26:09.411  [2024-12-09 04:16:37.717556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.411  [2024-12-09 04:16:37.717584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.411  qpair failed and we were unable to recover it.
00:26:09.411  [2024-12-09 04:16:37.717668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.411  [2024-12-09 04:16:37.717695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.411  qpair failed and we were unable to recover it.
00:26:09.411  [2024-12-09 04:16:37.717773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.411  [2024-12-09 04:16:37.717799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.411  qpair failed and we were unable to recover it.
00:26:09.411  [2024-12-09 04:16:37.717917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.411  [2024-12-09 04:16:37.717947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.411  qpair failed and we were unable to recover it.
00:26:09.411  [2024-12-09 04:16:37.718064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.411  [2024-12-09 04:16:37.718091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.411  qpair failed and we were unable to recover it.
00:26:09.411  [2024-12-09 04:16:37.718174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.411  [2024-12-09 04:16:37.718206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.411  qpair failed and we were unable to recover it.
00:26:09.411  [2024-12-09 04:16:37.718330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.411  [2024-12-09 04:16:37.718358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.411  qpair failed and we were unable to recover it.
00:26:09.411  [2024-12-09 04:16:37.718451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.411  [2024-12-09 04:16:37.718478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.411  qpair failed and we were unable to recover it.
00:26:09.411  [2024-12-09 04:16:37.718577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.411  [2024-12-09 04:16:37.718605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.411  qpair failed and we were unable to recover it.
00:26:09.411  [2024-12-09 04:16:37.718697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.411  [2024-12-09 04:16:37.718731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.411  qpair failed and we were unable to recover it.
00:26:09.411  [2024-12-09 04:16:37.718811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.411  [2024-12-09 04:16:37.718838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.411  qpair failed and we were unable to recover it.
00:26:09.411  [2024-12-09 04:16:37.718930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.411  [2024-12-09 04:16:37.718957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.411  qpair failed and we were unable to recover it.
00:26:09.411  [2024-12-09 04:16:37.719070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.411  [2024-12-09 04:16:37.719097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.411  qpair failed and we were unable to recover it.
00:26:09.411  [2024-12-09 04:16:37.719187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.411  [2024-12-09 04:16:37.719216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.411  qpair failed and we were unable to recover it.
00:26:09.411  [2024-12-09 04:16:37.719318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.411  [2024-12-09 04:16:37.719347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.411  qpair failed and we were unable to recover it.
00:26:09.411  [2024-12-09 04:16:37.719452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.411  [2024-12-09 04:16:37.719478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.411  qpair failed and we were unable to recover it.
00:26:09.411  [2024-12-09 04:16:37.719594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.411  [2024-12-09 04:16:37.719621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.411  qpair failed and we were unable to recover it.
00:26:09.411  [2024-12-09 04:16:37.719712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.411  [2024-12-09 04:16:37.719739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.411  qpair failed and we were unable to recover it.
00:26:09.411  [2024-12-09 04:16:37.719822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.411  [2024-12-09 04:16:37.719851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.411  qpair failed and we were unable to recover it.
00:26:09.411  [2024-12-09 04:16:37.719962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.411  [2024-12-09 04:16:37.719989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.411  qpair failed and we were unable to recover it.
00:26:09.411  [2024-12-09 04:16:37.720101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.411  [2024-12-09 04:16:37.720128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.411  qpair failed and we were unable to recover it.
00:26:09.411  [2024-12-09 04:16:37.720209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.411  [2024-12-09 04:16:37.720240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.411  qpair failed and we were unable to recover it.
00:26:09.411  [2024-12-09 04:16:37.720353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.411  [2024-12-09 04:16:37.720382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.411  qpair failed and we were unable to recover it.
00:26:09.412  [2024-12-09 04:16:37.720463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.412  [2024-12-09 04:16:37.720490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.412  qpair failed and we were unable to recover it.
00:26:09.412  [2024-12-09 04:16:37.720564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.412  [2024-12-09 04:16:37.720594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.412  qpair failed and we were unable to recover it.
00:26:09.412  [2024-12-09 04:16:37.720682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.412  [2024-12-09 04:16:37.720710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.412  qpair failed and we were unable to recover it.
00:26:09.412  [2024-12-09 04:16:37.720798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.412  [2024-12-09 04:16:37.720828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.412  qpair failed and we were unable to recover it.
00:26:09.412  [2024-12-09 04:16:37.720930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.412  [2024-12-09 04:16:37.720959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.412  qpair failed and we were unable to recover it.
00:26:09.412  [2024-12-09 04:16:37.721056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.412  [2024-12-09 04:16:37.721084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.412  qpair failed and we were unable to recover it.
00:26:09.412  [2024-12-09 04:16:37.721166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.412  [2024-12-09 04:16:37.721192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.412  qpair failed and we were unable to recover it.
00:26:09.412  [2024-12-09 04:16:37.721268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.412  [2024-12-09 04:16:37.721300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.412  qpair failed and we were unable to recover it.
00:26:09.412  [2024-12-09 04:16:37.721391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.412  [2024-12-09 04:16:37.721417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.412  qpair failed and we were unable to recover it.
00:26:09.412  [2024-12-09 04:16:37.721498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.412  [2024-12-09 04:16:37.721524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.412  qpair failed and we were unable to recover it.
00:26:09.412  [2024-12-09 04:16:37.721645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.412  [2024-12-09 04:16:37.721672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.412  qpair failed and we were unable to recover it.
00:26:09.412  [2024-12-09 04:16:37.721753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.412  [2024-12-09 04:16:37.721780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.412  qpair failed and we were unable to recover it.
00:26:09.412  [2024-12-09 04:16:37.721926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.412  [2024-12-09 04:16:37.721955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.412  qpair failed and we were unable to recover it.
00:26:09.412  [2024-12-09 04:16:37.722048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.412  [2024-12-09 04:16:37.722077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.412  qpair failed and we were unable to recover it.
00:26:09.412  [2024-12-09 04:16:37.722162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.412  [2024-12-09 04:16:37.722189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.412  qpair failed and we were unable to recover it.
00:26:09.412  [2024-12-09 04:16:37.722284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.412  [2024-12-09 04:16:37.722323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.412  qpair failed and we were unable to recover it.
00:26:09.412  [2024-12-09 04:16:37.722400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.412  [2024-12-09 04:16:37.722431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.412  qpair failed and we were unable to recover it.
00:26:09.412  [2024-12-09 04:16:37.722546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.412  [2024-12-09 04:16:37.722583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.412  qpair failed and we were unable to recover it.
00:26:09.412  [2024-12-09 04:16:37.722667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.412  [2024-12-09 04:16:37.722695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.412  qpair failed and we were unable to recover it.
00:26:09.412  [2024-12-09 04:16:37.722781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.412  [2024-12-09 04:16:37.722809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.412  qpair failed and we were unable to recover it.
00:26:09.412  [2024-12-09 04:16:37.722888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.412  [2024-12-09 04:16:37.722915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.412  qpair failed and we were unable to recover it.
00:26:09.412  [2024-12-09 04:16:37.723023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.412  [2024-12-09 04:16:37.723050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.412  qpair failed and we were unable to recover it.
00:26:09.412  [2024-12-09 04:16:37.723134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.412  [2024-12-09 04:16:37.723160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.412  qpair failed and we were unable to recover it.
00:26:09.412  [2024-12-09 04:16:37.723241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.412  [2024-12-09 04:16:37.723269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.412  qpair failed and we were unable to recover it.
00:26:09.412  [2024-12-09 04:16:37.723374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.412  [2024-12-09 04:16:37.723403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.412  qpair failed and we were unable to recover it.
00:26:09.412  [2024-12-09 04:16:37.723521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.412  [2024-12-09 04:16:37.723548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.412  qpair failed and we were unable to recover it.
00:26:09.412  [2024-12-09 04:16:37.723643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.412  [2024-12-09 04:16:37.723672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.412  qpair failed and we were unable to recover it.
00:26:09.412  [2024-12-09 04:16:37.723768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.412  [2024-12-09 04:16:37.723796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.412  qpair failed and we were unable to recover it.
00:26:09.412  [2024-12-09 04:16:37.723880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.412  [2024-12-09 04:16:37.723907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.412  qpair failed and we were unable to recover it.
00:26:09.412  [2024-12-09 04:16:37.724021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.412  [2024-12-09 04:16:37.724049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.412  qpair failed and we were unable to recover it.
00:26:09.412  [2024-12-09 04:16:37.724134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.412  [2024-12-09 04:16:37.724160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.412  qpair failed and we were unable to recover it.
00:26:09.412  [2024-12-09 04:16:37.724268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.412  [2024-12-09 04:16:37.724300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.412  qpair failed and we were unable to recover it.
00:26:09.412  [2024-12-09 04:16:37.724377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.412  [2024-12-09 04:16:37.724406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.412  qpair failed and we were unable to recover it.
00:26:09.412  [2024-12-09 04:16:37.724482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.412  [2024-12-09 04:16:37.724509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.412  qpair failed and we were unable to recover it.
00:26:09.412  [2024-12-09 04:16:37.724626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.412  [2024-12-09 04:16:37.724652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.412  qpair failed and we were unable to recover it.
00:26:09.412  [2024-12-09 04:16:37.724735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.412  [2024-12-09 04:16:37.724761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.412  qpair failed and we were unable to recover it.
00:26:09.412  [2024-12-09 04:16:37.724856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.412  [2024-12-09 04:16:37.724886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.412  qpair failed and we were unable to recover it.
00:26:09.412  [2024-12-09 04:16:37.725004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.412  [2024-12-09 04:16:37.725030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.412  qpair failed and we were unable to recover it.
00:26:09.413  [2024-12-09 04:16:37.725143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.413  [2024-12-09 04:16:37.725189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.413  qpair failed and we were unable to recover it.
00:26:09.413  [2024-12-09 04:16:37.725303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.413  [2024-12-09 04:16:37.725343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.413  qpair failed and we were unable to recover it.
00:26:09.413  [2024-12-09 04:16:37.725430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.413  [2024-12-09 04:16:37.725459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.413  qpair failed and we were unable to recover it.
00:26:09.413  [2024-12-09 04:16:37.725539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.413  [2024-12-09 04:16:37.725565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.413  qpair failed and we were unable to recover it.
00:26:09.413  [2024-12-09 04:16:37.725657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.413  [2024-12-09 04:16:37.725684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.413  qpair failed and we were unable to recover it.
00:26:09.413  [2024-12-09 04:16:37.725773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.413  [2024-12-09 04:16:37.725801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.413  qpair failed and we were unable to recover it.
00:26:09.413  [2024-12-09 04:16:37.725879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.413  [2024-12-09 04:16:37.725906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.413  qpair failed and we were unable to recover it.
00:26:09.413  [2024-12-09 04:16:37.725992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.413  [2024-12-09 04:16:37.726020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.413  qpair failed and we were unable to recover it.
00:26:09.413  [2024-12-09 04:16:37.726153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.413  [2024-12-09 04:16:37.726183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.413  qpair failed and we were unable to recover it.
00:26:09.413  [2024-12-09 04:16:37.726267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.413  [2024-12-09 04:16:37.726301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.413  qpair failed and we were unable to recover it.
00:26:09.413  [2024-12-09 04:16:37.726430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.413  [2024-12-09 04:16:37.726458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.413  qpair failed and we were unable to recover it.
00:26:09.413  [2024-12-09 04:16:37.726540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.413  [2024-12-09 04:16:37.726579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.413  qpair failed and we were unable to recover it.
00:26:09.413  [2024-12-09 04:16:37.726664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.413  [2024-12-09 04:16:37.726693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.413  qpair failed and we were unable to recover it.
00:26:09.413  [2024-12-09 04:16:37.726795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.413  [2024-12-09 04:16:37.726822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.413  qpair failed and we were unable to recover it.
00:26:09.413  [2024-12-09 04:16:37.726939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.413  [2024-12-09 04:16:37.726966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.413  qpair failed and we were unable to recover it.
00:26:09.413  [2024-12-09 04:16:37.727073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.413  [2024-12-09 04:16:37.727100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.413  qpair failed and we were unable to recover it.
00:26:09.413  [2024-12-09 04:16:37.727179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.413  [2024-12-09 04:16:37.727206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.413  qpair failed and we were unable to recover it.
00:26:09.413  [2024-12-09 04:16:37.727292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.413  [2024-12-09 04:16:37.727321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.413  qpair failed and we were unable to recover it.
00:26:09.413  [2024-12-09 04:16:37.727400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.413  [2024-12-09 04:16:37.727426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.413  qpair failed and we were unable to recover it.
00:26:09.413  [2024-12-09 04:16:37.727504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.413  [2024-12-09 04:16:37.727530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.413  qpair failed and we were unable to recover it.
00:26:09.413  [2024-12-09 04:16:37.727616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.413  [2024-12-09 04:16:37.727642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.413  qpair failed and we were unable to recover it.
00:26:09.413  [2024-12-09 04:16:37.727749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.413  [2024-12-09 04:16:37.727776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.413  qpair failed and we were unable to recover it.
00:26:09.413  [2024-12-09 04:16:37.727861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.413  [2024-12-09 04:16:37.727890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.413  qpair failed and we were unable to recover it.
00:26:09.413  [2024-12-09 04:16:37.728003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.413  [2024-12-09 04:16:37.728034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.413  qpair failed and we were unable to recover it.
00:26:09.413  [2024-12-09 04:16:37.728118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.413  [2024-12-09 04:16:37.728145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.413  qpair failed and we were unable to recover it.
00:26:09.413  [2024-12-09 04:16:37.728225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.413  [2024-12-09 04:16:37.728252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.413  qpair failed and we were unable to recover it.
00:26:09.413  [2024-12-09 04:16:37.728355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.413  [2024-12-09 04:16:37.728384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.413  qpair failed and we were unable to recover it.
00:26:09.413  [2024-12-09 04:16:37.728478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.413  [2024-12-09 04:16:37.728509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.413  qpair failed and we were unable to recover it.
00:26:09.413  [2024-12-09 04:16:37.728603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.413  [2024-12-09 04:16:37.728630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.413  qpair failed and we were unable to recover it.
00:26:09.413  [2024-12-09 04:16:37.728759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.413  [2024-12-09 04:16:37.728785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.413  qpair failed and we were unable to recover it.
00:26:09.413  [2024-12-09 04:16:37.728903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.413  [2024-12-09 04:16:37.728932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.413  qpair failed and we were unable to recover it.
00:26:09.413  [2024-12-09 04:16:37.729023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.413  [2024-12-09 04:16:37.729051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.413  qpair failed and we were unable to recover it.
00:26:09.413  [2024-12-09 04:16:37.729161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.413  [2024-12-09 04:16:37.729188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.413  qpair failed and we were unable to recover it.
00:26:09.413  [2024-12-09 04:16:37.729267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.414  [2024-12-09 04:16:37.729300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.414  qpair failed and we were unable to recover it.
00:26:09.414  [2024-12-09 04:16:37.729394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.414  [2024-12-09 04:16:37.729421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.414  qpair failed and we were unable to recover it.
00:26:09.414  [2024-12-09 04:16:37.729504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.414  [2024-12-09 04:16:37.729530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.414  qpair failed and we were unable to recover it.
00:26:09.414  [2024-12-09 04:16:37.729621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.414  [2024-12-09 04:16:37.729647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.414  qpair failed and we were unable to recover it.
00:26:09.414  [2024-12-09 04:16:37.729735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.414  [2024-12-09 04:16:37.729763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.414  qpair failed and we were unable to recover it.
00:26:09.414  [2024-12-09 04:16:37.729853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.414  [2024-12-09 04:16:37.729883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.414  qpair failed and we were unable to recover it.
00:26:09.414  [2024-12-09 04:16:37.729967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.414  [2024-12-09 04:16:37.729994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.414  qpair failed and we were unable to recover it.
00:26:09.414  [2024-12-09 04:16:37.730141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.414  [2024-12-09 04:16:37.730168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.414  qpair failed and we were unable to recover it.
00:26:09.414  [2024-12-09 04:16:37.730258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.414  [2024-12-09 04:16:37.730291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.414  qpair failed and we were unable to recover it.
00:26:09.414  [2024-12-09 04:16:37.730386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.414  [2024-12-09 04:16:37.730414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.414  qpair failed and we were unable to recover it.
00:26:09.414  [2024-12-09 04:16:37.730500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.414  [2024-12-09 04:16:37.730527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.414  qpair failed and we were unable to recover it.
00:26:09.414  [2024-12-09 04:16:37.730671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.414  [2024-12-09 04:16:37.730699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.414  qpair failed and we were unable to recover it.
00:26:09.414  [2024-12-09 04:16:37.730778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.414  [2024-12-09 04:16:37.730805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.414  qpair failed and we were unable to recover it.
00:26:09.414  [2024-12-09 04:16:37.730896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.414  [2024-12-09 04:16:37.730923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.414  qpair failed and we were unable to recover it.
00:26:09.414  [2024-12-09 04:16:37.731003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.414  [2024-12-09 04:16:37.731030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.414  qpair failed and we were unable to recover it.
00:26:09.414  [2024-12-09 04:16:37.731109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.414  [2024-12-09 04:16:37.731140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.414  qpair failed and we were unable to recover it.
00:26:09.414  [2024-12-09 04:16:37.731226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.414  [2024-12-09 04:16:37.731253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.414  qpair failed and we were unable to recover it.
00:26:09.414  [2024-12-09 04:16:37.731361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.414  [2024-12-09 04:16:37.731387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.414  qpair failed and we were unable to recover it.
00:26:09.414  [2024-12-09 04:16:37.731475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.414  [2024-12-09 04:16:37.731502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.414  qpair failed and we were unable to recover it.
00:26:09.414  [2024-12-09 04:16:37.731584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.414  [2024-12-09 04:16:37.731610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.414  qpair failed and we were unable to recover it.
00:26:09.414  [2024-12-09 04:16:37.731717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.414  [2024-12-09 04:16:37.731743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.414  qpair failed and we were unable to recover it.
00:26:09.414  [2024-12-09 04:16:37.731827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.414  [2024-12-09 04:16:37.731858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.414  qpair failed and we were unable to recover it.
00:26:09.414  [2024-12-09 04:16:37.731966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.414  [2024-12-09 04:16:37.731992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.414  qpair failed and we were unable to recover it.
00:26:09.414  [2024-12-09 04:16:37.732090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.414  [2024-12-09 04:16:37.732119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.414  qpair failed and we were unable to recover it.
00:26:09.414  [2024-12-09 04:16:37.732206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.414  [2024-12-09 04:16:37.732246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.414  qpair failed and we were unable to recover it.
00:26:09.414  [2024-12-09 04:16:37.732348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.414  [2024-12-09 04:16:37.732377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.414  qpair failed and we were unable to recover it.
00:26:09.414  [2024-12-09 04:16:37.732509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.414  [2024-12-09 04:16:37.732541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.414  qpair failed and we were unable to recover it.
00:26:09.414  [2024-12-09 04:16:37.732651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.414  [2024-12-09 04:16:37.732678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.414  qpair failed and we were unable to recover it.
00:26:09.414  [2024-12-09 04:16:37.732761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.414  [2024-12-09 04:16:37.732789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.414  qpair failed and we were unable to recover it.
00:26:09.414  [2024-12-09 04:16:37.732876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.414  [2024-12-09 04:16:37.732907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.414  qpair failed and we were unable to recover it.
00:26:09.414  [2024-12-09 04:16:37.732988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.414  [2024-12-09 04:16:37.733016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.414  qpair failed and we were unable to recover it.
00:26:09.414  [2024-12-09 04:16:37.733091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.414  [2024-12-09 04:16:37.733118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.414  qpair failed and we were unable to recover it.
00:26:09.414  [2024-12-09 04:16:37.733204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.414  [2024-12-09 04:16:37.733232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.414  qpair failed and we were unable to recover it.
00:26:09.414  [2024-12-09 04:16:37.733342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.414  [2024-12-09 04:16:37.733371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.414  qpair failed and we were unable to recover it.
00:26:09.414  [2024-12-09 04:16:37.733516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.414  [2024-12-09 04:16:37.733547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.414  qpair failed and we were unable to recover it.
00:26:09.414  [2024-12-09 04:16:37.733646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.414  [2024-12-09 04:16:37.733673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.414  qpair failed and we were unable to recover it.
00:26:09.414  [2024-12-09 04:16:37.733747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.414  [2024-12-09 04:16:37.733775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.414  qpair failed and we were unable to recover it.
00:26:09.414  [2024-12-09 04:16:37.733888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.414  [2024-12-09 04:16:37.733914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.414  qpair failed and we were unable to recover it.
00:26:09.414  [2024-12-09 04:16:37.733991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.415  [2024-12-09 04:16:37.734017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.415  qpair failed and we were unable to recover it.
00:26:09.415  [2024-12-09 04:16:37.734108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.415  [2024-12-09 04:16:37.734148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.415  qpair failed and we were unable to recover it.
00:26:09.415  [2024-12-09 04:16:37.734290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.415  [2024-12-09 04:16:37.734328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.415  qpair failed and we were unable to recover it.
00:26:09.415  [2024-12-09 04:16:37.734405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.415  [2024-12-09 04:16:37.734432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.415  qpair failed and we were unable to recover it.
00:26:09.415  [2024-12-09 04:16:37.734517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.415  [2024-12-09 04:16:37.734544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.415  qpair failed and we were unable to recover it.
00:26:09.415  [2024-12-09 04:16:37.734627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.415  [2024-12-09 04:16:37.734655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.415  qpair failed and we were unable to recover it.
00:26:09.415  [2024-12-09 04:16:37.734741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.415  [2024-12-09 04:16:37.734770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.415  qpair failed and we were unable to recover it.
00:26:09.415  [2024-12-09 04:16:37.734860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.415  [2024-12-09 04:16:37.734887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.415  qpair failed and we were unable to recover it.
00:26:09.415  [2024-12-09 04:16:37.734969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.415  [2024-12-09 04:16:37.734996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.415  qpair failed and we were unable to recover it.
00:26:09.415  [2024-12-09 04:16:37.735077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.415  [2024-12-09 04:16:37.735104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.415  qpair failed and we were unable to recover it.
00:26:09.415  [2024-12-09 04:16:37.735220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.415  [2024-12-09 04:16:37.735249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.415  qpair failed and we were unable to recover it.
00:26:09.415  [2024-12-09 04:16:37.735356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.415  [2024-12-09 04:16:37.735384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.415  qpair failed and we were unable to recover it.
00:26:09.415  [2024-12-09 04:16:37.735471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.415  [2024-12-09 04:16:37.735498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.415  qpair failed and we were unable to recover it.
00:26:09.415  [2024-12-09 04:16:37.735585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.415  [2024-12-09 04:16:37.735611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.415  qpair failed and we were unable to recover it.
00:26:09.415  [2024-12-09 04:16:37.735719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.415  [2024-12-09 04:16:37.735745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.415  qpair failed and we were unable to recover it.
00:26:09.415  [2024-12-09 04:16:37.735833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.415  [2024-12-09 04:16:37.735859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.415  qpair failed and we were unable to recover it.
00:26:09.415  [2024-12-09 04:16:37.735975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.415  [2024-12-09 04:16:37.736002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.415  qpair failed and we were unable to recover it.
00:26:09.415  [2024-12-09 04:16:37.736077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.415  [2024-12-09 04:16:37.736103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.415  qpair failed and we were unable to recover it.
00:26:09.415  [2024-12-09 04:16:37.736187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.415  [2024-12-09 04:16:37.736217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.415  qpair failed and we were unable to recover it.
00:26:09.415  [2024-12-09 04:16:37.736334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.415  [2024-12-09 04:16:37.736361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.415  qpair failed and we were unable to recover it.
00:26:09.415  [2024-12-09 04:16:37.736449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.415  [2024-12-09 04:16:37.736475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.415  qpair failed and we were unable to recover it.
00:26:09.415  [2024-12-09 04:16:37.736577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.415  [2024-12-09 04:16:37.736603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.415  qpair failed and we were unable to recover it.
00:26:09.415  [2024-12-09 04:16:37.736686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.415  [2024-12-09 04:16:37.736712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.415  qpair failed and we were unable to recover it.
00:26:09.415  [2024-12-09 04:16:37.736792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.415  [2024-12-09 04:16:37.736829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.415  qpair failed and we were unable to recover it.
00:26:09.415  [2024-12-09 04:16:37.736955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.415  [2024-12-09 04:16:37.736981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.415  qpair failed and we were unable to recover it.
00:26:09.415  [2024-12-09 04:16:37.737065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.415  [2024-12-09 04:16:37.737091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.415  qpair failed and we were unable to recover it.
00:26:09.415  [2024-12-09 04:16:37.737175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.415  [2024-12-09 04:16:37.737211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.415  qpair failed and we were unable to recover it.
00:26:09.415  [2024-12-09 04:16:37.737312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.415  [2024-12-09 04:16:37.737342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.415  qpair failed and we were unable to recover it.
00:26:09.415  [2024-12-09 04:16:37.737431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.415  [2024-12-09 04:16:37.737458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.415  qpair failed and we were unable to recover it.
00:26:09.415  [2024-12-09 04:16:37.737540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.415  [2024-12-09 04:16:37.737579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.415  qpair failed and we were unable to recover it.
00:26:09.415  [2024-12-09 04:16:37.737687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.415  [2024-12-09 04:16:37.737714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.415  qpair failed and we were unable to recover it.
00:26:09.415  [2024-12-09 04:16:37.737832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.415  [2024-12-09 04:16:37.737861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.415  qpair failed and we were unable to recover it.
00:26:09.415  [2024-12-09 04:16:37.737954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.415  [2024-12-09 04:16:37.737993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.415  qpair failed and we were unable to recover it.
00:26:09.415  [2024-12-09 04:16:37.738111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.415  [2024-12-09 04:16:37.738140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.415  qpair failed and we were unable to recover it.
00:26:09.415  [2024-12-09 04:16:37.738222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.415  [2024-12-09 04:16:37.738248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.415  qpair failed and we were unable to recover it.
00:26:09.415  [2024-12-09 04:16:37.738349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.415  [2024-12-09 04:16:37.738377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.415  qpair failed and we were unable to recover it.
00:26:09.415  [2024-12-09 04:16:37.738464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.415  [2024-12-09 04:16:37.738492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.415  qpair failed and we were unable to recover it.
00:26:09.415  [2024-12-09 04:16:37.738588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.415  [2024-12-09 04:16:37.738615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.415  qpair failed and we were unable to recover it.
00:26:09.415  [2024-12-09 04:16:37.738730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.416  [2024-12-09 04:16:37.738757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.416  qpair failed and we were unable to recover it.
00:26:09.416  [2024-12-09 04:16:37.738841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.416  [2024-12-09 04:16:37.738870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.416  qpair failed and we were unable to recover it.
00:26:09.416  [2024-12-09 04:16:37.738949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.416  [2024-12-09 04:16:37.738978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.416  qpair failed and we were unable to recover it.
00:26:09.416  [2024-12-09 04:16:37.739094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.416  [2024-12-09 04:16:37.739122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.416  qpair failed and we were unable to recover it.
00:26:09.416  [2024-12-09 04:16:37.739207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.416  [2024-12-09 04:16:37.739235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.416  qpair failed and we were unable to recover it.
00:26:09.416  [2024-12-09 04:16:37.739375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.416  [2024-12-09 04:16:37.739403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.416  qpair failed and we were unable to recover it.
00:26:09.416  [2024-12-09 04:16:37.739485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.416  [2024-12-09 04:16:37.739512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.416  qpair failed and we were unable to recover it.
00:26:09.416  [2024-12-09 04:16:37.739629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.416  [2024-12-09 04:16:37.739657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.416  qpair failed and we were unable to recover it.
00:26:09.416  [2024-12-09 04:16:37.739762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.416  [2024-12-09 04:16:37.739789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.416  qpair failed and we were unable to recover it.
00:26:09.416  [2024-12-09 04:16:37.739875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.416  [2024-12-09 04:16:37.739902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.416  qpair failed and we were unable to recover it.
00:26:09.416  [2024-12-09 04:16:37.739983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.416  [2024-12-09 04:16:37.740014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.416  qpair failed and we were unable to recover it.
00:26:09.416  [2024-12-09 04:16:37.740099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.416  [2024-12-09 04:16:37.740126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.416  qpair failed and we were unable to recover it.
00:26:09.416  [2024-12-09 04:16:37.740243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.416  [2024-12-09 04:16:37.740288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.416  qpair failed and we were unable to recover it.
00:26:09.416  [2024-12-09 04:16:37.740380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.416  [2024-12-09 04:16:37.740407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.416  qpair failed and we were unable to recover it.
00:26:09.416  [2024-12-09 04:16:37.740494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.416  [2024-12-09 04:16:37.740521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.416  qpair failed and we were unable to recover it.
00:26:09.416  [2024-12-09 04:16:37.740659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.416  [2024-12-09 04:16:37.740692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.416  qpair failed and we were unable to recover it.
00:26:09.416  [2024-12-09 04:16:37.740806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.416  [2024-12-09 04:16:37.740832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.416  qpair failed and we were unable to recover it.
00:26:09.416  [2024-12-09 04:16:37.740986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.416  [2024-12-09 04:16:37.741016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.416  qpair failed and we were unable to recover it.
00:26:09.416  [2024-12-09 04:16:37.741102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.416  [2024-12-09 04:16:37.741129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.416  qpair failed and we were unable to recover it.
00:26:09.416  [2024-12-09 04:16:37.741217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.416  [2024-12-09 04:16:37.741246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.416  qpair failed and we were unable to recover it.
00:26:09.416  [2024-12-09 04:16:37.741356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.416  [2024-12-09 04:16:37.741385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.416  qpair failed and we were unable to recover it.
00:26:09.416  [2024-12-09 04:16:37.741468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.416  [2024-12-09 04:16:37.741495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.416  qpair failed and we were unable to recover it.
00:26:09.416  [2024-12-09 04:16:37.741612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.416  [2024-12-09 04:16:37.741638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.416  qpair failed and we were unable to recover it.
00:26:09.416  [2024-12-09 04:16:37.741746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.416  [2024-12-09 04:16:37.741772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.416  qpair failed and we were unable to recover it.
00:26:09.416  [2024-12-09 04:16:37.741861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.416  [2024-12-09 04:16:37.741892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.416  qpair failed and we were unable to recover it.
00:26:09.416  [2024-12-09 04:16:37.741976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.416  [2024-12-09 04:16:37.742003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.416  qpair failed and we were unable to recover it.
00:26:09.416  [2024-12-09 04:16:37.742135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.416  [2024-12-09 04:16:37.742164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.416  qpair failed and we were unable to recover it.
00:26:09.416  [2024-12-09 04:16:37.742246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.416  [2024-12-09 04:16:37.742281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.416  qpair failed and we were unable to recover it.
00:26:09.416  [2024-12-09 04:16:37.742380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.416  [2024-12-09 04:16:37.742408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.416  qpair failed and we were unable to recover it.
00:26:09.416  [2024-12-09 04:16:37.742492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.416  [2024-12-09 04:16:37.742518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.416  qpair failed and we were unable to recover it.
00:26:09.416  [2024-12-09 04:16:37.742638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.416  [2024-12-09 04:16:37.742664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.416  qpair failed and we were unable to recover it.
00:26:09.416  [2024-12-09 04:16:37.742786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.416  [2024-12-09 04:16:37.742815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.416  qpair failed and we were unable to recover it.
00:26:09.416  [2024-12-09 04:16:37.742895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.416  [2024-12-09 04:16:37.742922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.416  qpair failed and we were unable to recover it.
00:26:09.416  [2024-12-09 04:16:37.743003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.416  [2024-12-09 04:16:37.743029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.416  qpair failed and we were unable to recover it.
00:26:09.416  [2024-12-09 04:16:37.743158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.416  [2024-12-09 04:16:37.743185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.416  qpair failed and we were unable to recover it.
00:26:09.416  [2024-12-09 04:16:37.743297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.416  [2024-12-09 04:16:37.743337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.416  qpair failed and we were unable to recover it.
00:26:09.416  [2024-12-09 04:16:37.743432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.416  [2024-12-09 04:16:37.743461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.416  qpair failed and we were unable to recover it.
00:26:09.416  [2024-12-09 04:16:37.743543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.416  [2024-12-09 04:16:37.743581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.416  qpair failed and we were unable to recover it.
00:26:09.417  [2024-12-09 04:16:37.743663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.417  [2024-12-09 04:16:37.743690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.417  qpair failed and we were unable to recover it.
00:26:09.417  [2024-12-09 04:16:37.743770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.417  [2024-12-09 04:16:37.743797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.417  qpair failed and we were unable to recover it.
00:26:09.417  [2024-12-09 04:16:37.743911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.417  [2024-12-09 04:16:37.743939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.417  qpair failed and we were unable to recover it.
00:26:09.417  [2024-12-09 04:16:37.744060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.417  [2024-12-09 04:16:37.744087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.417  qpair failed and we were unable to recover it.
00:26:09.417  [2024-12-09 04:16:37.744181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.417  [2024-12-09 04:16:37.744222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.417  qpair failed and we were unable to recover it.
00:26:09.417  [2024-12-09 04:16:37.744335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.417  [2024-12-09 04:16:37.744364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.417  qpair failed and we were unable to recover it.
00:26:09.417  [2024-12-09 04:16:37.744449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.417  [2024-12-09 04:16:37.744488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.417  qpair failed and we were unable to recover it.
00:26:09.417  [2024-12-09 04:16:37.744568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.417  [2024-12-09 04:16:37.744597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.417  qpair failed and we were unable to recover it.
00:26:09.417  [2024-12-09 04:16:37.744735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.417  [2024-12-09 04:16:37.744762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.417  qpair failed and we were unable to recover it.
00:26:09.417  [2024-12-09 04:16:37.744842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.417  [2024-12-09 04:16:37.744867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.417  qpair failed and we were unable to recover it.
00:26:09.417  [2024-12-09 04:16:37.745006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.417  [2024-12-09 04:16:37.745035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.417  qpair failed and we were unable to recover it.
00:26:09.417  [2024-12-09 04:16:37.745120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.417  [2024-12-09 04:16:37.745149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.417  qpair failed and we were unable to recover it.
00:26:09.417  [2024-12-09 04:16:37.745265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.417  [2024-12-09 04:16:37.745298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.417  qpair failed and we were unable to recover it.
00:26:09.417  [2024-12-09 04:16:37.745390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.417  [2024-12-09 04:16:37.745417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.417  qpair failed and we were unable to recover it.
00:26:09.417  [2024-12-09 04:16:37.745498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.417  [2024-12-09 04:16:37.745530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.417  qpair failed and we were unable to recover it.
00:26:09.417  [2024-12-09 04:16:37.745616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.417  [2024-12-09 04:16:37.745645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.417  qpair failed and we were unable to recover it.
00:26:09.417  [2024-12-09 04:16:37.745756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.417  [2024-12-09 04:16:37.745783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.417  qpair failed and we were unable to recover it.
00:26:09.417  [2024-12-09 04:16:37.745895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.417  [2024-12-09 04:16:37.745922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.417  qpair failed and we were unable to recover it.
00:26:09.417  [2024-12-09 04:16:37.746034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.417  [2024-12-09 04:16:37.746061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.417  qpair failed and we were unable to recover it.
00:26:09.417  [2024-12-09 04:16:37.746146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.417  [2024-12-09 04:16:37.746174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.417  qpair failed and we were unable to recover it.
00:26:09.417  [2024-12-09 04:16:37.746261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.417  [2024-12-09 04:16:37.746298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.417  qpair failed and we were unable to recover it.
00:26:09.417  [2024-12-09 04:16:37.746388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.417  [2024-12-09 04:16:37.746415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.417  qpair failed and we were unable to recover it.
00:26:09.417  [2024-12-09 04:16:37.746535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.417  [2024-12-09 04:16:37.746563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.417  qpair failed and we were unable to recover it.
00:26:09.417  [2024-12-09 04:16:37.746679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.417  [2024-12-09 04:16:37.746705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.417  qpair failed and we were unable to recover it.
00:26:09.417  [2024-12-09 04:16:37.746794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.417  [2024-12-09 04:16:37.746821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.417  qpair failed and we were unable to recover it.
00:26:09.417  [2024-12-09 04:16:37.746931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.417  [2024-12-09 04:16:37.746963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.417  qpair failed and we were unable to recover it.
00:26:09.417  [2024-12-09 04:16:37.747047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.417  [2024-12-09 04:16:37.747074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.417  qpair failed and we were unable to recover it.
00:26:09.417  [2024-12-09 04:16:37.747156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.417  [2024-12-09 04:16:37.747185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.417  qpair failed and we were unable to recover it.
00:26:09.417  [2024-12-09 04:16:37.747270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.417  [2024-12-09 04:16:37.747314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.417  qpair failed and we were unable to recover it.
00:26:09.417  [2024-12-09 04:16:37.747400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.417  [2024-12-09 04:16:37.747427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.417  qpair failed and we were unable to recover it.
00:26:09.417  [2024-12-09 04:16:37.747505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.417  [2024-12-09 04:16:37.747532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.417  qpair failed and we were unable to recover it.
00:26:09.417  [2024-12-09 04:16:37.747619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.417  [2024-12-09 04:16:37.747645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.417  qpair failed and we were unable to recover it.
00:26:09.417  [2024-12-09 04:16:37.747756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.417  [2024-12-09 04:16:37.747783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.417  qpair failed and we were unable to recover it.
00:26:09.417  [2024-12-09 04:16:37.747856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.417  [2024-12-09 04:16:37.747883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.417  qpair failed and we were unable to recover it.
00:26:09.417  [2024-12-09 04:16:37.747958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.417  [2024-12-09 04:16:37.747986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.417  qpair failed and we were unable to recover it.
00:26:09.417  [2024-12-09 04:16:37.748061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.417  [2024-12-09 04:16:37.748087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.418  qpair failed and we were unable to recover it.
00:26:09.418  [2024-12-09 04:16:37.748178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.418  [2024-12-09 04:16:37.748207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.418  qpair failed and we were unable to recover it.
00:26:09.418  [2024-12-09 04:16:37.748296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.418  [2024-12-09 04:16:37.748341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.418  qpair failed and we were unable to recover it.
00:26:09.418  [2024-12-09 04:16:37.748422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.418  [2024-12-09 04:16:37.748449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.418  qpair failed and we were unable to recover it.
00:26:09.418  [2024-12-09 04:16:37.748562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.418  [2024-12-09 04:16:37.748593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.418  qpair failed and we were unable to recover it.
00:26:09.418  [2024-12-09 04:16:37.748686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.418  [2024-12-09 04:16:37.748714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.418  qpair failed and we were unable to recover it.
00:26:09.418  [2024-12-09 04:16:37.748831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.418  [2024-12-09 04:16:37.748860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.418  qpair failed and we were unable to recover it.
00:26:09.418  [2024-12-09 04:16:37.748960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.418  [2024-12-09 04:16:37.748988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.418  qpair failed and we were unable to recover it.
00:26:09.418  [2024-12-09 04:16:37.749106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.418  [2024-12-09 04:16:37.749132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.418  qpair failed and we were unable to recover it.
00:26:09.418  [2024-12-09 04:16:37.751387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.418  [2024-12-09 04:16:37.751430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.418  qpair failed and we were unable to recover it.
00:26:09.418  [2024-12-09 04:16:37.751533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.418  [2024-12-09 04:16:37.751562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.418  qpair failed and we were unable to recover it.
00:26:09.418  [2024-12-09 04:16:37.751655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.418  [2024-12-09 04:16:37.751682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.418  qpair failed and we were unable to recover it.
00:26:09.418  [2024-12-09 04:16:37.751799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.418  [2024-12-09 04:16:37.751826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.418  qpair failed and we were unable to recover it.
00:26:09.418  [2024-12-09 04:16:37.751914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.418  [2024-12-09 04:16:37.751941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.418  qpair failed and we were unable to recover it.
00:26:09.418  [2024-12-09 04:16:37.752032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.418  [2024-12-09 04:16:37.752059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.418  qpair failed and we were unable to recover it.
00:26:09.418  [2024-12-09 04:16:37.752134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.418  [2024-12-09 04:16:37.752161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.418  qpair failed and we were unable to recover it.
00:26:09.418  [2024-12-09 04:16:37.752234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.418  [2024-12-09 04:16:37.752261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.418  qpair failed and we were unable to recover it.
00:26:09.418  [2024-12-09 04:16:37.752371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.418  [2024-12-09 04:16:37.752398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.418  qpair failed and we were unable to recover it.
00:26:09.418  [2024-12-09 04:16:37.752478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.418  [2024-12-09 04:16:37.752504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.418  qpair failed and we were unable to recover it.
00:26:09.418  [2024-12-09 04:16:37.752621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.418  [2024-12-09 04:16:37.752648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.418  qpair failed and we were unable to recover it.
00:26:09.418  [2024-12-09 04:16:37.752733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.418  [2024-12-09 04:16:37.752759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.418  qpair failed and we were unable to recover it.
00:26:09.418  [2024-12-09 04:16:37.752847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.418  [2024-12-09 04:16:37.752873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.418  qpair failed and we were unable to recover it.
00:26:09.418  [2024-12-09 04:16:37.752983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.418  [2024-12-09 04:16:37.753009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.418  qpair failed and we were unable to recover it.
00:26:09.418  [2024-12-09 04:16:37.753092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.418  [2024-12-09 04:16:37.753118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.418  qpair failed and we were unable to recover it.
00:26:09.418  [2024-12-09 04:16:37.753197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.418  [2024-12-09 04:16:37.753223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.418  qpair failed and we were unable to recover it.
00:26:09.418  [2024-12-09 04:16:37.753318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.418  [2024-12-09 04:16:37.753346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.418  qpair failed and we were unable to recover it.
00:26:09.418  [2024-12-09 04:16:37.753457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.418  [2024-12-09 04:16:37.753484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.418  qpair failed and we were unable to recover it.
00:26:09.418  [2024-12-09 04:16:37.753560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.418  [2024-12-09 04:16:37.753595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.418  qpair failed and we were unable to recover it.
00:26:09.418  [2024-12-09 04:16:37.753677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.418  [2024-12-09 04:16:37.753704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.418  qpair failed and we were unable to recover it.
00:26:09.418  [2024-12-09 04:16:37.753802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.418  [2024-12-09 04:16:37.753842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.418  qpair failed and we were unable to recover it.
00:26:09.418  [2024-12-09 04:16:37.753932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.418  [2024-12-09 04:16:37.753967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.418  qpair failed and we were unable to recover it.
00:26:09.418  [2024-12-09 04:16:37.754080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.418  [2024-12-09 04:16:37.754108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.418  qpair failed and we were unable to recover it.
00:26:09.418  [2024-12-09 04:16:37.754189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.418  [2024-12-09 04:16:37.754216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.418  qpair failed and we were unable to recover it.
00:26:09.418  [2024-12-09 04:16:37.754340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.418  [2024-12-09 04:16:37.754381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.418  qpair failed and we were unable to recover it.
00:26:09.418  [2024-12-09 04:16:37.754468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.418  [2024-12-09 04:16:37.754496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.418  qpair failed and we were unable to recover it.
00:26:09.418  [2024-12-09 04:16:37.754593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.418  [2024-12-09 04:16:37.754620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.418  qpair failed and we were unable to recover it.
00:26:09.418  [2024-12-09 04:16:37.754698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.418  [2024-12-09 04:16:37.754726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.419  qpair failed and we were unable to recover it.
00:26:09.419  [2024-12-09 04:16:37.754832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.419  [2024-12-09 04:16:37.754858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.419  qpair failed and we were unable to recover it.
00:26:09.419  [2024-12-09 04:16:37.754945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.419  [2024-12-09 04:16:37.754975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.419  qpair failed and we were unable to recover it.
00:26:09.419  [2024-12-09 04:16:37.755051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.419  [2024-12-09 04:16:37.755079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.419  qpair failed and we were unable to recover it.
00:26:09.419  [2024-12-09 04:16:37.755159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.419  [2024-12-09 04:16:37.755185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.419  qpair failed and we were unable to recover it.
00:26:09.419  [2024-12-09 04:16:37.755268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.419  [2024-12-09 04:16:37.755306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.419  qpair failed and we were unable to recover it.
00:26:09.419  [2024-12-09 04:16:37.755414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.419  [2024-12-09 04:16:37.755441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.419  qpair failed and we were unable to recover it.
00:26:09.419  [2024-12-09 04:16:37.755527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.419  [2024-12-09 04:16:37.755553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.419  qpair failed and we were unable to recover it.
00:26:09.419  [2024-12-09 04:16:37.755643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.419  [2024-12-09 04:16:37.755671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.419  qpair failed and we were unable to recover it.
00:26:09.419  [2024-12-09 04:16:37.755760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.419  [2024-12-09 04:16:37.755789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.419  qpair failed and we were unable to recover it.
00:26:09.419  [2024-12-09 04:16:37.755887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.419  [2024-12-09 04:16:37.755928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.419  qpair failed and we were unable to recover it.
00:26:09.419  [2024-12-09 04:16:37.756022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.419  [2024-12-09 04:16:37.756051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.419  qpair failed and we were unable to recover it.
00:26:09.419  [2024-12-09 04:16:37.756185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.419  [2024-12-09 04:16:37.756213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.419  qpair failed and we were unable to recover it.
00:26:09.419  [2024-12-09 04:16:37.756314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.419  [2024-12-09 04:16:37.756343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.419  qpair failed and we were unable to recover it.
00:26:09.419  [2024-12-09 04:16:37.756428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.419  [2024-12-09 04:16:37.756456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.419  qpair failed and we were unable to recover it.
00:26:09.419  [2024-12-09 04:16:37.756573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.419  [2024-12-09 04:16:37.756600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.419  qpair failed and we were unable to recover it.
00:26:09.419  [2024-12-09 04:16:37.756682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.419  [2024-12-09 04:16:37.756709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.419  qpair failed and we were unable to recover it.
00:26:09.419  [2024-12-09 04:16:37.756822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.419  [2024-12-09 04:16:37.756850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.419  qpair failed and we were unable to recover it.
00:26:09.419  [2024-12-09 04:16:37.756929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.419  [2024-12-09 04:16:37.756955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.419  qpair failed and we were unable to recover it.
00:26:09.419  [2024-12-09 04:16:37.757039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.419  [2024-12-09 04:16:37.757071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.419  qpair failed and we were unable to recover it.
00:26:09.419  [2024-12-09 04:16:37.757182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.419  [2024-12-09 04:16:37.757209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.419  qpair failed and we were unable to recover it.
00:26:09.419  [2024-12-09 04:16:37.757311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.419  [2024-12-09 04:16:37.757346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.419  qpair failed and we were unable to recover it.
00:26:09.419  [2024-12-09 04:16:37.757464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.419  [2024-12-09 04:16:37.757492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.419  qpair failed and we were unable to recover it.
00:26:09.419  [2024-12-09 04:16:37.757584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.419  [2024-12-09 04:16:37.757613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.419  qpair failed and we were unable to recover it.
00:26:09.419  [2024-12-09 04:16:37.757736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.419  [2024-12-09 04:16:37.757766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.419  qpair failed and we were unable to recover it.
00:26:09.419  [2024-12-09 04:16:37.757848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.419  [2024-12-09 04:16:37.757875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.419  qpair failed and we were unable to recover it.
00:26:09.419  [2024-12-09 04:16:37.757966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.419  [2024-12-09 04:16:37.757993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.419  qpair failed and we were unable to recover it.
00:26:09.419  [2024-12-09 04:16:37.758086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.419  [2024-12-09 04:16:37.758125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.419  qpair failed and we were unable to recover it.
00:26:09.419  [2024-12-09 04:16:37.758210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.419  [2024-12-09 04:16:37.758238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.419  qpair failed and we were unable to recover it.
00:26:09.419  [2024-12-09 04:16:37.758346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.419  [2024-12-09 04:16:37.758376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.419  qpair failed and we were unable to recover it.
00:26:09.419  [2024-12-09 04:16:37.758458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.419  [2024-12-09 04:16:37.758486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.419  qpair failed and we were unable to recover it.
00:26:09.419  [2024-12-09 04:16:37.758589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.419  [2024-12-09 04:16:37.758618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.419  qpair failed and we were unable to recover it.
00:26:09.419  [2024-12-09 04:16:37.758727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.419  [2024-12-09 04:16:37.758755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.419  qpair failed and we were unable to recover it.
00:26:09.419  [2024-12-09 04:16:37.758867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.419  [2024-12-09 04:16:37.758894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.419  qpair failed and we were unable to recover it.
00:26:09.419  [2024-12-09 04:16:37.758976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.419  [2024-12-09 04:16:37.759007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.419  qpair failed and we were unable to recover it.
00:26:09.419  [2024-12-09 04:16:37.759100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.419  [2024-12-09 04:16:37.759141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.419  qpair failed and we were unable to recover it.
00:26:09.419  [2024-12-09 04:16:37.759265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.419  [2024-12-09 04:16:37.759300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.419  qpair failed and we were unable to recover it.
00:26:09.419  [2024-12-09 04:16:37.759421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.419  [2024-12-09 04:16:37.759455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.419  qpair failed and we were unable to recover it.
00:26:09.420  [2024-12-09 04:16:37.759537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.420  [2024-12-09 04:16:37.759569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.420  qpair failed and we were unable to recover it.
00:26:09.420  [2024-12-09 04:16:37.759649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.420  [2024-12-09 04:16:37.759675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.420  qpair failed and we were unable to recover it.
00:26:09.420  [2024-12-09 04:16:37.759751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.420  [2024-12-09 04:16:37.759778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.420  qpair failed and we were unable to recover it.
00:26:09.420  [2024-12-09 04:16:37.759866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.420  [2024-12-09 04:16:37.759897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.420  qpair failed and we were unable to recover it.
00:26:09.420  [2024-12-09 04:16:37.759978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.420  [2024-12-09 04:16:37.760005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.420  qpair failed and we were unable to recover it.
00:26:09.420  [2024-12-09 04:16:37.760089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.420  [2024-12-09 04:16:37.760116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.420  qpair failed and we were unable to recover it.
00:26:09.420  [2024-12-09 04:16:37.760229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.420  [2024-12-09 04:16:37.760257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.420  qpair failed and we were unable to recover it.
00:26:09.420  [2024-12-09 04:16:37.760350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.420  [2024-12-09 04:16:37.760379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.420  qpair failed and we were unable to recover it.
00:26:09.420  [2024-12-09 04:16:37.760480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.420  [2024-12-09 04:16:37.760508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.420  qpair failed and we were unable to recover it.
00:26:09.420  [2024-12-09 04:16:37.760602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.420  [2024-12-09 04:16:37.760629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.420  qpair failed and we were unable to recover it.
00:26:09.420  [2024-12-09 04:16:37.760705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.420  [2024-12-09 04:16:37.760732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.420  qpair failed and we were unable to recover it.
00:26:09.420  [2024-12-09 04:16:37.760861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.420  [2024-12-09 04:16:37.760901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.420  qpair failed and we were unable to recover it.
00:26:09.420  [2024-12-09 04:16:37.760998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.420  [2024-12-09 04:16:37.761026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.420  qpair failed and we were unable to recover it.
00:26:09.420  [2024-12-09 04:16:37.761118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.420  [2024-12-09 04:16:37.761145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.420  qpair failed and we were unable to recover it.
00:26:09.420  [2024-12-09 04:16:37.761224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.420  [2024-12-09 04:16:37.761250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.420  qpair failed and we were unable to recover it.
00:26:09.420  [2024-12-09 04:16:37.761350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.420  [2024-12-09 04:16:37.761379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.420  qpair failed and we were unable to recover it.
00:26:09.420  [2024-12-09 04:16:37.761475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.420  [2024-12-09 04:16:37.761502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.420  qpair failed and we were unable to recover it.
00:26:09.420  [2024-12-09 04:16:37.761627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.420  [2024-12-09 04:16:37.761654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.420  qpair failed and we were unable to recover it.
00:26:09.420  [2024-12-09 04:16:37.761735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.420  [2024-12-09 04:16:37.761761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.420  qpair failed and we were unable to recover it.
00:26:09.420  [2024-12-09 04:16:37.761838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.420  [2024-12-09 04:16:37.761864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.420  qpair failed and we were unable to recover it.
00:26:09.420  [2024-12-09 04:16:37.761948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.420  [2024-12-09 04:16:37.761977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.420  qpair failed and we were unable to recover it.
00:26:09.420  [2024-12-09 04:16:37.762058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.420  [2024-12-09 04:16:37.762086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.420  qpair failed and we were unable to recover it.
00:26:09.420  [2024-12-09 04:16:37.762204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.420  [2024-12-09 04:16:37.762238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.420  qpair failed and we were unable to recover it.
00:26:09.420  [2024-12-09 04:16:37.762337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.420  [2024-12-09 04:16:37.762365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.420  qpair failed and we were unable to recover it.
00:26:09.420  [2024-12-09 04:16:37.762475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.420  [2024-12-09 04:16:37.762503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.420  qpair failed and we were unable to recover it.
00:26:09.420  [2024-12-09 04:16:37.762612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.420  [2024-12-09 04:16:37.762639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.420  qpair failed and we were unable to recover it.
00:26:09.420  [2024-12-09 04:16:37.762737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.420  [2024-12-09 04:16:37.762765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.420  qpair failed and we were unable to recover it.
00:26:09.420  [2024-12-09 04:16:37.762843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.420  [2024-12-09 04:16:37.762871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.420  qpair failed and we were unable to recover it.
00:26:09.420  [2024-12-09 04:16:37.762960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.420  [2024-12-09 04:16:37.762989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.420  qpair failed and we were unable to recover it.
00:26:09.420  [2024-12-09 04:16:37.763075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.420  [2024-12-09 04:16:37.763103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.420  qpair failed and we were unable to recover it.
00:26:09.420  [2024-12-09 04:16:37.763190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.420  [2024-12-09 04:16:37.763217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.420  qpair failed and we were unable to recover it.
00:26:09.420  [2024-12-09 04:16:37.763311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.420  [2024-12-09 04:16:37.763338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.420  qpair failed and we were unable to recover it.
00:26:09.420  [2024-12-09 04:16:37.763413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.420  [2024-12-09 04:16:37.763440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.420  qpair failed and we were unable to recover it.
00:26:09.420  [2024-12-09 04:16:37.763515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.420  [2024-12-09 04:16:37.763541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.420  qpair failed and we were unable to recover it.
00:26:09.420  [2024-12-09 04:16:37.763628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.420  [2024-12-09 04:16:37.763657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.420  qpair failed and we were unable to recover it.
00:26:09.420  [2024-12-09 04:16:37.763749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.420  [2024-12-09 04:16:37.763779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.420  qpair failed and we were unable to recover it.
00:26:09.420  [2024-12-09 04:16:37.763862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.420  [2024-12-09 04:16:37.763891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.420  qpair failed and we were unable to recover it.
00:26:09.420  [2024-12-09 04:16:37.763968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.420  [2024-12-09 04:16:37.763994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.421  qpair failed and we were unable to recover it.
00:26:09.421  [2024-12-09 04:16:37.764106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.421  [2024-12-09 04:16:37.764134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.421  qpair failed and we were unable to recover it.
00:26:09.421  [2024-12-09 04:16:37.764221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.421  [2024-12-09 04:16:37.764248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.421  qpair failed and we were unable to recover it.
00:26:09.421  [2024-12-09 04:16:37.764343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.421  [2024-12-09 04:16:37.764371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.421  qpair failed and we were unable to recover it.
00:26:09.421  [2024-12-09 04:16:37.764493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.421  [2024-12-09 04:16:37.764521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.421  qpair failed and we were unable to recover it.
00:26:09.421  [2024-12-09 04:16:37.764599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.421  [2024-12-09 04:16:37.764625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.421  qpair failed and we were unable to recover it.
00:26:09.421  [2024-12-09 04:16:37.764730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.421  [2024-12-09 04:16:37.764757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.421  qpair failed and we were unable to recover it.
00:26:09.421  [2024-12-09 04:16:37.764841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.421  [2024-12-09 04:16:37.764868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.421  qpair failed and we were unable to recover it.
00:26:09.421  [2024-12-09 04:16:37.764954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.421  [2024-12-09 04:16:37.764981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.421  qpair failed and we were unable to recover it.
00:26:09.421  [2024-12-09 04:16:37.765089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.421  [2024-12-09 04:16:37.765117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.421  qpair failed and we were unable to recover it.
00:26:09.421  [2024-12-09 04:16:37.765217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.421  [2024-12-09 04:16:37.765246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.421  qpair failed and we were unable to recover it.
00:26:09.421  [2024-12-09 04:16:37.765366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.421  [2024-12-09 04:16:37.765396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.421  qpair failed and we were unable to recover it.
00:26:09.421  [2024-12-09 04:16:37.765479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.421  [2024-12-09 04:16:37.765505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.421  qpair failed and we were unable to recover it.
00:26:09.421  [2024-12-09 04:16:37.765821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.421  [2024-12-09 04:16:37.765849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.421  qpair failed and we were unable to recover it.
00:26:09.421  [2024-12-09 04:16:37.765961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.421  [2024-12-09 04:16:37.765988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.421  qpair failed and we were unable to recover it.
00:26:09.421  [2024-12-09 04:16:37.766075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.421  [2024-12-09 04:16:37.766104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.421  qpair failed and we were unable to recover it.
00:26:09.421  [2024-12-09 04:16:37.766203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.421  [2024-12-09 04:16:37.766244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.421  qpair failed and we were unable to recover it.
00:26:09.421  [2024-12-09 04:16:37.766341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.421  [2024-12-09 04:16:37.766370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.421  qpair failed and we were unable to recover it.
00:26:09.421  [2024-12-09 04:16:37.766459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.421  [2024-12-09 04:16:37.766486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.421  qpair failed and we were unable to recover it.
00:26:09.421  [2024-12-09 04:16:37.766581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.421  [2024-12-09 04:16:37.766609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.421  qpair failed and we were unable to recover it.
00:26:09.421  [2024-12-09 04:16:37.766721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.421  [2024-12-09 04:16:37.766748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.421  qpair failed and we were unable to recover it.
00:26:09.421  [2024-12-09 04:16:37.766858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.421  [2024-12-09 04:16:37.766886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.421  qpair failed and we were unable to recover it.
00:26:09.421   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:09.421  [2024-12-09 04:16:37.766973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.421  [2024-12-09 04:16:37.767003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.421  qpair failed and we were unable to recover it.
00:26:09.421  [2024-12-09 04:16:37.767097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.421  [2024-12-09 04:16:37.767128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.421  qpair failed and we were unable to recover it.
00:26:09.421   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:26:09.421  [2024-12-09 04:16:37.767212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.421  [2024-12-09 04:16:37.767239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.421  qpair failed and we were unable to recover it.
00:26:09.421  [2024-12-09 04:16:37.767324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.421  [2024-12-09 04:16:37.767352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.421  qpair failed and we were unable to recover it.
00:26:09.421  [2024-12-09 04:16:37.767442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.421  [2024-12-09 04:16:37.767469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.421  qpair failed and we were unable to recover it.
00:26:09.421   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:26:09.421  [2024-12-09 04:16:37.767559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.421  [2024-12-09 04:16:37.767585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.421  qpair failed and we were unable to recover it.
00:26:09.421  [2024-12-09 04:16:37.767682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.421  [2024-12-09 04:16:37.767711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.421  qpair failed and we were unable to recover it.
00:26:09.421   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:26:09.421  [2024-12-09 04:16:37.767797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.421  [2024-12-09 04:16:37.767826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.421  qpair failed and we were unable to recover it.
00:26:09.421  [2024-12-09 04:16:37.767910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.421  [2024-12-09 04:16:37.767941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.421  qpair failed and we were unable to recover it.
00:26:09.421   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:09.421  [2024-12-09 04:16:37.768049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.421  [2024-12-09 04:16:37.768076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.421  qpair failed and we were unable to recover it.
00:26:09.421  [2024-12-09 04:16:37.768161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.421  [2024-12-09 04:16:37.768187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.421  qpair failed and we were unable to recover it.
00:26:09.421  [2024-12-09 04:16:37.768267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.421  [2024-12-09 04:16:37.768300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.421  qpair failed and we were unable to recover it.
00:26:09.421  [2024-12-09 04:16:37.768410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.421  [2024-12-09 04:16:37.768436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.421  qpair failed and we were unable to recover it.
00:26:09.421  [2024-12-09 04:16:37.769353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.421  [2024-12-09 04:16:37.769387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.421  qpair failed and we were unable to recover it.
00:26:09.421  [2024-12-09 04:16:37.769492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.422  [2024-12-09 04:16:37.769531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.422  qpair failed and we were unable to recover it.
00:26:09.422  [2024-12-09 04:16:37.769616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.422  [2024-12-09 04:16:37.769642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.422  qpair failed and we were unable to recover it.
00:26:09.422  [2024-12-09 04:16:37.769731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.422  [2024-12-09 04:16:37.769758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.422  qpair failed and we were unable to recover it.
00:26:09.422  [2024-12-09 04:16:37.769868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.422  [2024-12-09 04:16:37.769896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.422  qpair failed and we were unable to recover it.
00:26:09.422  [2024-12-09 04:16:37.769989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.422  [2024-12-09 04:16:37.770018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.422  qpair failed and we were unable to recover it.
00:26:09.422  [2024-12-09 04:16:37.770123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.422  [2024-12-09 04:16:37.770150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.422  qpair failed and we were unable to recover it.
00:26:09.422  [2024-12-09 04:16:37.770260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.422  [2024-12-09 04:16:37.770293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.422  qpair failed and we were unable to recover it.
00:26:09.422  [2024-12-09 04:16:37.770368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.422  [2024-12-09 04:16:37.770395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.422  qpair failed and we were unable to recover it.
00:26:09.422  [2024-12-09 04:16:37.770472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.422  [2024-12-09 04:16:37.770498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.422  qpair failed and we were unable to recover it.
00:26:09.422  [2024-12-09 04:16:37.770625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.422  [2024-12-09 04:16:37.770658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.422  qpair failed and we were unable to recover it.
00:26:09.422  [2024-12-09 04:16:37.770790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.422  [2024-12-09 04:16:37.770818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.422  qpair failed and we were unable to recover it.
00:26:09.422  [2024-12-09 04:16:37.770901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.422  [2024-12-09 04:16:37.770928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.422  qpair failed and we were unable to recover it.
00:26:09.422  [2024-12-09 04:16:37.771065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.422  [2024-12-09 04:16:37.771093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.422  qpair failed and we were unable to recover it.
00:26:09.422  [2024-12-09 04:16:37.771176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.422  [2024-12-09 04:16:37.771203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.422  qpair failed and we were unable to recover it.
00:26:09.422  [2024-12-09 04:16:37.771322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.422  [2024-12-09 04:16:37.771350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.422  qpair failed and we were unable to recover it.
00:26:09.422  [2024-12-09 04:16:37.771456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.422  [2024-12-09 04:16:37.771482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.422  qpair failed and we were unable to recover it.
00:26:09.422  [2024-12-09 04:16:37.771597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.422  [2024-12-09 04:16:37.771634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.422  qpair failed and we were unable to recover it.
00:26:09.422  [2024-12-09 04:16:37.771718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.422  [2024-12-09 04:16:37.771745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.422  qpair failed and we were unable to recover it.
00:26:09.422  [2024-12-09 04:16:37.771864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.422  [2024-12-09 04:16:37.771891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.422  qpair failed and we were unable to recover it.
00:26:09.422  [2024-12-09 04:16:37.772004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.422  [2024-12-09 04:16:37.772032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.422  qpair failed and we were unable to recover it.
00:26:09.422  [2024-12-09 04:16:37.772153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.422  [2024-12-09 04:16:37.772184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.422  qpair failed and we were unable to recover it.
00:26:09.422  [2024-12-09 04:16:37.772297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.422  [2024-12-09 04:16:37.772340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.422  qpair failed and we were unable to recover it.
00:26:09.422  [2024-12-09 04:16:37.772423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.422  [2024-12-09 04:16:37.772451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.422  qpair failed and we were unable to recover it.
00:26:09.422  [2024-12-09 04:16:37.772528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.422  [2024-12-09 04:16:37.772555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.422  qpair failed and we were unable to recover it.
00:26:09.422  [2024-12-09 04:16:37.772667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.422  [2024-12-09 04:16:37.772695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.422  qpair failed and we were unable to recover it.
00:26:09.422  [2024-12-09 04:16:37.772775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.422  [2024-12-09 04:16:37.772801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.422  qpair failed and we were unable to recover it.
00:26:09.422  [2024-12-09 04:16:37.772887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.422  [2024-12-09 04:16:37.772913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.422  qpair failed and we were unable to recover it.
00:26:09.422  [2024-12-09 04:16:37.773026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.422  [2024-12-09 04:16:37.773053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.422  qpair failed and we were unable to recover it.
00:26:09.422  [2024-12-09 04:16:37.773146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.422  [2024-12-09 04:16:37.773185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.422  qpair failed and we were unable to recover it.
00:26:09.422  [2024-12-09 04:16:37.773285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.422  [2024-12-09 04:16:37.773314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.422  qpair failed and we were unable to recover it.
00:26:09.422  [2024-12-09 04:16:37.773395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.422  [2024-12-09 04:16:37.773423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.422  qpair failed and we were unable to recover it.
00:26:09.422  [2024-12-09 04:16:37.773502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.422  [2024-12-09 04:16:37.773534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.422  qpair failed and we were unable to recover it.
00:26:09.422  [2024-12-09 04:16:37.773616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.422  [2024-12-09 04:16:37.773643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.422  qpair failed and we were unable to recover it.
00:26:09.422  [2024-12-09 04:16:37.773716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.423  [2024-12-09 04:16:37.773742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.423  qpair failed and we were unable to recover it.
00:26:09.423  [2024-12-09 04:16:37.773826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.423  [2024-12-09 04:16:37.773852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.423  qpair failed and we were unable to recover it.
00:26:09.423  [2024-12-09 04:16:37.773925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.423  [2024-12-09 04:16:37.773951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.423  qpair failed and we were unable to recover it.
00:26:09.423  [2024-12-09 04:16:37.774053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.423  [2024-12-09 04:16:37.774081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.423  qpair failed and we were unable to recover it.
00:26:09.423  [2024-12-09 04:16:37.774158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.423  [2024-12-09 04:16:37.774185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.423  qpair failed and we were unable to recover it.
00:26:09.423  [2024-12-09 04:16:37.774261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.423  [2024-12-09 04:16:37.774294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.423  qpair failed and we were unable to recover it.
00:26:09.423  [2024-12-09 04:16:37.774381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.423  [2024-12-09 04:16:37.774407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.423  qpair failed and we were unable to recover it.
00:26:09.423  [2024-12-09 04:16:37.774486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.423  [2024-12-09 04:16:37.774513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.423  qpair failed and we were unable to recover it.
00:26:09.423  [2024-12-09 04:16:37.774593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.423  [2024-12-09 04:16:37.774620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.423  qpair failed and we were unable to recover it.
00:26:09.423  [2024-12-09 04:16:37.774725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.423  [2024-12-09 04:16:37.774751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.423  qpair failed and we were unable to recover it.
00:26:09.423  [2024-12-09 04:16:37.774825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.423  [2024-12-09 04:16:37.774852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.423  qpair failed and we were unable to recover it.
00:26:09.423  [2024-12-09 04:16:37.774969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.423  [2024-12-09 04:16:37.775000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.423  qpair failed and we were unable to recover it.
00:26:09.423  [2024-12-09 04:16:37.775098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.423  [2024-12-09 04:16:37.775126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.423  qpair failed and we were unable to recover it.
00:26:09.423  [2024-12-09 04:16:37.775212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.423  [2024-12-09 04:16:37.775240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.423  qpair failed and we were unable to recover it.
00:26:09.423  [2024-12-09 04:16:37.775340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.423  [2024-12-09 04:16:37.775374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.423  qpair failed and we were unable to recover it.
00:26:09.423  [2024-12-09 04:16:37.775461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.423  [2024-12-09 04:16:37.775488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.423  qpair failed and we were unable to recover it.
00:26:09.423  [2024-12-09 04:16:37.775569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.423  [2024-12-09 04:16:37.775597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.423  qpair failed and we were unable to recover it.
00:26:09.423  [2024-12-09 04:16:37.775685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.423  [2024-12-09 04:16:37.775721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.423  qpair failed and we were unable to recover it.
00:26:09.423  [2024-12-09 04:16:37.775842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.423  [2024-12-09 04:16:37.775872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.423  qpair failed and we were unable to recover it.
00:26:09.423  [2024-12-09 04:16:37.775951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.423  [2024-12-09 04:16:37.775977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.423  qpair failed and we were unable to recover it.
00:26:09.423  [2024-12-09 04:16:37.776091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.423  [2024-12-09 04:16:37.776119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.423  qpair failed and we were unable to recover it.
00:26:09.423  [2024-12-09 04:16:37.776199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.423  [2024-12-09 04:16:37.776225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.423  qpair failed and we were unable to recover it.
00:26:09.423  [2024-12-09 04:16:37.776302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.423  [2024-12-09 04:16:37.776330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.423  qpair failed and we were unable to recover it.
00:26:09.423  [2024-12-09 04:16:37.776408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.423  [2024-12-09 04:16:37.776434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.423  qpair failed and we were unable to recover it.
00:26:09.423  [2024-12-09 04:16:37.776516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.423  [2024-12-09 04:16:37.776543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.423  qpair failed and we were unable to recover it.
00:26:09.423  [2024-12-09 04:16:37.776650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.423  [2024-12-09 04:16:37.776681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.423  qpair failed and we were unable to recover it.
00:26:09.423  [2024-12-09 04:16:37.776768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.423  [2024-12-09 04:16:37.776794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.423  qpair failed and we were unable to recover it.
00:26:09.423  [2024-12-09 04:16:37.776871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.423  [2024-12-09 04:16:37.776898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.423  qpair failed and we were unable to recover it.
00:26:09.423  [2024-12-09 04:16:37.776982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.423  [2024-12-09 04:16:37.777009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.423  qpair failed and we were unable to recover it.
00:26:09.423  [2024-12-09 04:16:37.777146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.423  [2024-12-09 04:16:37.777173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.423  qpair failed and we were unable to recover it.
00:26:09.423  [2024-12-09 04:16:37.777283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.423  [2024-12-09 04:16:37.777310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.423  qpair failed and we were unable to recover it.
00:26:09.423  [2024-12-09 04:16:37.777387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.423  [2024-12-09 04:16:37.777415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.423  qpair failed and we were unable to recover it.
00:26:09.423  [2024-12-09 04:16:37.777499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.423  [2024-12-09 04:16:37.777525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.423  qpair failed and we were unable to recover it.
00:26:09.423  [2024-12-09 04:16:37.777625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.423  [2024-12-09 04:16:37.777651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.423  qpair failed and we were unable to recover it.
00:26:09.423  [2024-12-09 04:16:37.777725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.423  [2024-12-09 04:16:37.777751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.423  qpair failed and we were unable to recover it.
00:26:09.423  [2024-12-09 04:16:37.777866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.423  [2024-12-09 04:16:37.777892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.423  qpair failed and we were unable to recover it.
00:26:09.423  [2024-12-09 04:16:37.778004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.423  [2024-12-09 04:16:37.778030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.423  qpair failed and we were unable to recover it.
00:26:09.423  [2024-12-09 04:16:37.778116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.423  [2024-12-09 04:16:37.778142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.423  qpair failed and we were unable to recover it.
00:26:09.423  [2024-12-09 04:16:37.778253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.424  [2024-12-09 04:16:37.778293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.424  qpair failed and we were unable to recover it.
00:26:09.424  [2024-12-09 04:16:37.778389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.424  [2024-12-09 04:16:37.778416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.424  qpair failed and we were unable to recover it.
00:26:09.424  [2024-12-09 04:16:37.778502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.424  [2024-12-09 04:16:37.778536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.424  qpair failed and we were unable to recover it.
00:26:09.424  [2024-12-09 04:16:37.778621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.424  [2024-12-09 04:16:37.778648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.424  qpair failed and we were unable to recover it.
00:26:09.424  [2024-12-09 04:16:37.778734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.424  [2024-12-09 04:16:37.778761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.424  qpair failed and we were unable to recover it.
00:26:09.424  [2024-12-09 04:16:37.778877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.424  [2024-12-09 04:16:37.778904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.424  qpair failed and we were unable to recover it.
00:26:09.424  [2024-12-09 04:16:37.778988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.424  [2024-12-09 04:16:37.779016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.424  qpair failed and we were unable to recover it.
00:26:09.424  [2024-12-09 04:16:37.779125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.424  [2024-12-09 04:16:37.779152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.424  qpair failed and we were unable to recover it.
00:26:09.424  [2024-12-09 04:16:37.779239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.424  [2024-12-09 04:16:37.779267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.424  qpair failed and we were unable to recover it.
00:26:09.424  [2024-12-09 04:16:37.779367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.424  [2024-12-09 04:16:37.779396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.424  qpair failed and we were unable to recover it.
00:26:09.424  [2024-12-09 04:16:37.779506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.424  [2024-12-09 04:16:37.779533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.424  qpair failed and we were unable to recover it.
00:26:09.424  [2024-12-09 04:16:37.779616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.424  [2024-12-09 04:16:37.779644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.424  qpair failed and we were unable to recover it.
00:26:09.424  [2024-12-09 04:16:37.779726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.424  [2024-12-09 04:16:37.779754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.424  qpair failed and we were unable to recover it.
00:26:09.424  [2024-12-09 04:16:37.779868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.424  [2024-12-09 04:16:37.779894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.424  qpair failed and we were unable to recover it.
00:26:09.424  [2024-12-09 04:16:37.779983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.424  [2024-12-09 04:16:37.780010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.424  qpair failed and we were unable to recover it.
00:26:09.424  [2024-12-09 04:16:37.780124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.424  [2024-12-09 04:16:37.780151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.424  qpair failed and we were unable to recover it.
00:26:09.424  [2024-12-09 04:16:37.780654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.424  [2024-12-09 04:16:37.780688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.424  qpair failed and we were unable to recover it.
00:26:09.424  [2024-12-09 04:16:37.780818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.424  [2024-12-09 04:16:37.780845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.424  qpair failed and we were unable to recover it.
00:26:09.424  [2024-12-09 04:16:37.780953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.424  [2024-12-09 04:16:37.780983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.424  qpair failed and we were unable to recover it.
00:26:09.424  [2024-12-09 04:16:37.781066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.424  [2024-12-09 04:16:37.781092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.424  qpair failed and we were unable to recover it.
00:26:09.424  [2024-12-09 04:16:37.781177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.424  [2024-12-09 04:16:37.781204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.424  qpair failed and we were unable to recover it.
00:26:09.424  [2024-12-09 04:16:37.781290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.424  [2024-12-09 04:16:37.781317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.424  qpair failed and we were unable to recover it.
00:26:09.424  [2024-12-09 04:16:37.781395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.424  [2024-12-09 04:16:37.781421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.424  qpair failed and we were unable to recover it.
00:26:09.424  [2024-12-09 04:16:37.781506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.424  [2024-12-09 04:16:37.781534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.424  qpair failed and we were unable to recover it.
00:26:09.424  [2024-12-09 04:16:37.781626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.424  [2024-12-09 04:16:37.781653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.424  qpair failed and we were unable to recover it.
00:26:09.424  [2024-12-09 04:16:37.781737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.424  [2024-12-09 04:16:37.781766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.424  qpair failed and we were unable to recover it.
00:26:09.424  [2024-12-09 04:16:37.781851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.424  [2024-12-09 04:16:37.781877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.424  qpair failed and we were unable to recover it.
00:26:09.424  [2024-12-09 04:16:37.781970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.424  [2024-12-09 04:16:37.782001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.424  qpair failed and we were unable to recover it.
00:26:09.424  [2024-12-09 04:16:37.782078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.424  [2024-12-09 04:16:37.782104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.424  qpair failed and we were unable to recover it.
00:26:09.424  [2024-12-09 04:16:37.782253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.424  [2024-12-09 04:16:37.782288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.424  qpair failed and we were unable to recover it.
00:26:09.424  [2024-12-09 04:16:37.782366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.424  [2024-12-09 04:16:37.782392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.424  qpair failed and we were unable to recover it.
00:26:09.424  [2024-12-09 04:16:37.782476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.424  [2024-12-09 04:16:37.782502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.424  qpair failed and we were unable to recover it.
00:26:09.424  [2024-12-09 04:16:37.782595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.424  [2024-12-09 04:16:37.782622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.424  qpair failed and we were unable to recover it.
00:26:09.424  [2024-12-09 04:16:37.782696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.424  [2024-12-09 04:16:37.782722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.424  qpair failed and we were unable to recover it.
00:26:09.424  [2024-12-09 04:16:37.782826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.424  [2024-12-09 04:16:37.782852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.424  qpair failed and we were unable to recover it.
00:26:09.424  [2024-12-09 04:16:37.782966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.424  [2024-12-09 04:16:37.782992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.424  qpair failed and we were unable to recover it.
00:26:09.424  [2024-12-09 04:16:37.783075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.424  [2024-12-09 04:16:37.783101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.424  qpair failed and we were unable to recover it.
00:26:09.424  [2024-12-09 04:16:37.783184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.424  [2024-12-09 04:16:37.783211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.424  qpair failed and we were unable to recover it.
00:26:09.425  [2024-12-09 04:16:37.783296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.425  [2024-12-09 04:16:37.783322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.425  qpair failed and we were unable to recover it.
00:26:09.425  [2024-12-09 04:16:37.783395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.425  [2024-12-09 04:16:37.783424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.425  qpair failed and we were unable to recover it.
00:26:09.425  [2024-12-09 04:16:37.783547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.425  [2024-12-09 04:16:37.783573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.425  qpair failed and we were unable to recover it.
00:26:09.425  [2024-12-09 04:16:37.783682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.425  [2024-12-09 04:16:37.783707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.425  qpair failed and we were unable to recover it.
00:26:09.425  [2024-12-09 04:16:37.783792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.425  [2024-12-09 04:16:37.783818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.425  qpair failed and we were unable to recover it.
00:26:09.425  [2024-12-09 04:16:37.783911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.425  [2024-12-09 04:16:37.783938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.425  qpair failed and we were unable to recover it.
00:26:09.425  [2024-12-09 04:16:37.784074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.425  [2024-12-09 04:16:37.784100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.425  qpair failed and we were unable to recover it.
00:26:09.425  [2024-12-09 04:16:37.784191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.425  [2024-12-09 04:16:37.784218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.425  qpair failed and we were unable to recover it.
00:26:09.425  [2024-12-09 04:16:37.784308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.425  [2024-12-09 04:16:37.784335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.425  qpair failed and we were unable to recover it.
00:26:09.425  [2024-12-09 04:16:37.784421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.425  [2024-12-09 04:16:37.784446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.425  qpair failed and we were unable to recover it.
00:26:09.425  [... preceding three-line error (connect() failed, errno = 111 / sock connection error of tqpair=0x7fe5c0000b90 / "qpair failed and we were unable to recover it.") repeated 15 more times between 04:16:37.784530 and 04:16:37.786184 ...]
00:26:09.425   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:09.425  [2024-12-09 04:16:37.786318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.425  [2024-12-09 04:16:37.786347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.425  qpair failed and we were unable to recover it.
00:26:09.425  [2024-12-09 04:16:37.786424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.425   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:26:09.425  [2024-12-09 04:16:37.786451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.425  qpair failed and we were unable to recover it.
00:26:09.425  [2024-12-09 04:16:37.786543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.425   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:09.425  [2024-12-09 04:16:37.786570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.425  qpair failed and we were unable to recover it.
00:26:09.425   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:09.425  [2024-12-09 04:16:37.786693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.425  [2024-12-09 04:16:37.786723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.425  qpair failed and we were unable to recover it.
00:26:09.426  [... same three-line error repeated 66 more times between 04:16:37.786811 and 04:16:37.794749, cycling through tqpair=0x7fe5c0000b90, 0x7fe5b4000b90, and 0x7fe5b8000b90, all with addr=10.0.0.2, port=4420 ...]
00:26:09.427  [2024-12-09 04:16:37.794847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.427  [2024-12-09 04:16:37.794885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.427  qpair failed and we were unable to recover it.
00:26:09.427  [... same three-line error repeated 14 more times between 04:16:37.794966 and 04:16:37.796559, cycling through tqpair=0x7fe5c0000b90, 0x7fe5b4000b90, and 0x7fe5b8000b90, all with addr=10.0.0.2, port=4420 ...]
00:26:09.428  [2024-12-09 04:16:37.796648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.428  [2024-12-09 04:16:37.796675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.428  qpair failed and we were unable to recover it.
00:26:09.428  [2024-12-09 04:16:37.796774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.428  [2024-12-09 04:16:37.796813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.428  qpair failed and we were unable to recover it.
00:26:09.428  [2024-12-09 04:16:37.796911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.428  [2024-12-09 04:16:37.796941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.428  qpair failed and we were unable to recover it.
00:26:09.428  [2024-12-09 04:16:37.797017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.428  [2024-12-09 04:16:37.797044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.428  qpair failed and we were unable to recover it.
00:26:09.428  [2024-12-09 04:16:37.797137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.428  [2024-12-09 04:16:37.797162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.428  qpair failed and we were unable to recover it.
00:26:09.428  [2024-12-09 04:16:37.797234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.428  [2024-12-09 04:16:37.797260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.428  qpair failed and we were unable to recover it.
00:26:09.428  [2024-12-09 04:16:37.797388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.428  [2024-12-09 04:16:37.797414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.428  qpair failed and we were unable to recover it.
00:26:09.428  [2024-12-09 04:16:37.797501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.428  [2024-12-09 04:16:37.797527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.428  qpair failed and we were unable to recover it.
00:26:09.428  [2024-12-09 04:16:37.797617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.428  [2024-12-09 04:16:37.797647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.428  qpair failed and we were unable to recover it.
00:26:09.428  [2024-12-09 04:16:37.797740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.428  [2024-12-09 04:16:37.797765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.428  qpair failed and we were unable to recover it.
00:26:09.428  [2024-12-09 04:16:37.797905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.428  [2024-12-09 04:16:37.797931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.428  qpair failed and we were unable to recover it.
00:26:09.428  [2024-12-09 04:16:37.798013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.428  [2024-12-09 04:16:37.798042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.428  qpair failed and we were unable to recover it.
00:26:09.428  [2024-12-09 04:16:37.798153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.428  [2024-12-09 04:16:37.798179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.428  qpair failed and we were unable to recover it.
00:26:09.428  [2024-12-09 04:16:37.798296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.428  [2024-12-09 04:16:37.798322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.428  qpair failed and we were unable to recover it.
00:26:09.428  [2024-12-09 04:16:37.798402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.428  [2024-12-09 04:16:37.798428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.428  qpair failed and we were unable to recover it.
00:26:09.428  [2024-12-09 04:16:37.798515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.428  [2024-12-09 04:16:37.798541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.428  qpair failed and we were unable to recover it.
00:26:09.428  [2024-12-09 04:16:37.798634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.428  [2024-12-09 04:16:37.798659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.428  qpair failed and we were unable to recover it.
00:26:09.428  [2024-12-09 04:16:37.798743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.428  [2024-12-09 04:16:37.798769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.428  qpair failed and we were unable to recover it.
00:26:09.428  [2024-12-09 04:16:37.798845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.428  [2024-12-09 04:16:37.798870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.428  qpair failed and we were unable to recover it.
00:26:09.428  [2024-12-09 04:16:37.798956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.428  [2024-12-09 04:16:37.798984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.428  qpair failed and we were unable to recover it.
00:26:09.428  [2024-12-09 04:16:37.799078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.428  [2024-12-09 04:16:37.799105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.428  qpair failed and we were unable to recover it.
00:26:09.428  [2024-12-09 04:16:37.799220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.428  [2024-12-09 04:16:37.799249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.428  qpair failed and we were unable to recover it.
00:26:09.428  [2024-12-09 04:16:37.799379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.428  [2024-12-09 04:16:37.799406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.428  qpair failed and we were unable to recover it.
00:26:09.428  [2024-12-09 04:16:37.799497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.428  [2024-12-09 04:16:37.799527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.428  qpair failed and we were unable to recover it.
00:26:09.428  [2024-12-09 04:16:37.799621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.428  [2024-12-09 04:16:37.799648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.428  qpair failed and we were unable to recover it.
00:26:09.428  [2024-12-09 04:16:37.799755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.428  [2024-12-09 04:16:37.799781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.428  qpair failed and we were unable to recover it.
00:26:09.428  [2024-12-09 04:16:37.799893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.428  [2024-12-09 04:16:37.799931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.428  qpair failed and we were unable to recover it.
00:26:09.428  [2024-12-09 04:16:37.800048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.428  [2024-12-09 04:16:37.800075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.428  qpair failed and we were unable to recover it.
00:26:09.428  [2024-12-09 04:16:37.800153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.428  [2024-12-09 04:16:37.800178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.428  qpair failed and we were unable to recover it.
00:26:09.428  [2024-12-09 04:16:37.800256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.428  [2024-12-09 04:16:37.800289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.428  qpair failed and we were unable to recover it.
00:26:09.428  [2024-12-09 04:16:37.800374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.428  [2024-12-09 04:16:37.800400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.428  qpair failed and we were unable to recover it.
00:26:09.428  [2024-12-09 04:16:37.800486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.428  [2024-12-09 04:16:37.800513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.428  qpair failed and we were unable to recover it.
00:26:09.428  [2024-12-09 04:16:37.800632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.428  [2024-12-09 04:16:37.800657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.428  qpair failed and we were unable to recover it.
00:26:09.428  [2024-12-09 04:16:37.800770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.428  [2024-12-09 04:16:37.800795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.428  qpair failed and we were unable to recover it.
00:26:09.428  [2024-12-09 04:16:37.800879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.428  [2024-12-09 04:16:37.800916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.428  qpair failed and we were unable to recover it.
00:26:09.429  [2024-12-09 04:16:37.801027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.429  [2024-12-09 04:16:37.801053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.429  qpair failed and we were unable to recover it.
00:26:09.429  [2024-12-09 04:16:37.801168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.429  [2024-12-09 04:16:37.801196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.429  qpair failed and we were unable to recover it.
00:26:09.429  [2024-12-09 04:16:37.801312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.429  [2024-12-09 04:16:37.801339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.429  qpair failed and we were unable to recover it.
00:26:09.429  [2024-12-09 04:16:37.801419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.429  [2024-12-09 04:16:37.801450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.429  qpair failed and we were unable to recover it.
00:26:09.429  [2024-12-09 04:16:37.801534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.429  [2024-12-09 04:16:37.801562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.429  qpair failed and we were unable to recover it.
00:26:09.429  [2024-12-09 04:16:37.801651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.429  [2024-12-09 04:16:37.801679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.429  qpair failed and we were unable to recover it.
00:26:09.429  [2024-12-09 04:16:37.801773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.429  [2024-12-09 04:16:37.801799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.429  qpair failed and we were unable to recover it.
00:26:09.429  [2024-12-09 04:16:37.801881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.429  [2024-12-09 04:16:37.801907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.429  qpair failed and we were unable to recover it.
00:26:09.429  [2024-12-09 04:16:37.801997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.429  [2024-12-09 04:16:37.802024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.429  qpair failed and we were unable to recover it.
00:26:09.429  [2024-12-09 04:16:37.802100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.429  [2024-12-09 04:16:37.802129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.429  qpair failed and we were unable to recover it.
00:26:09.429  [2024-12-09 04:16:37.802221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.429  [2024-12-09 04:16:37.802249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.429  qpair failed and we were unable to recover it.
00:26:09.429  [2024-12-09 04:16:37.802337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.429  [2024-12-09 04:16:37.802364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.429  qpair failed and we were unable to recover it.
00:26:09.429  [2024-12-09 04:16:37.802449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.429  [2024-12-09 04:16:37.802475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.429  qpair failed and we were unable to recover it.
00:26:09.429  [2024-12-09 04:16:37.802597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.429  [2024-12-09 04:16:37.802624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.429  qpair failed and we were unable to recover it.
00:26:09.429  [2024-12-09 04:16:37.802704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.429  [2024-12-09 04:16:37.802734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.429  qpair failed and we were unable to recover it.
00:26:09.429  [2024-12-09 04:16:37.802845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.429  [2024-12-09 04:16:37.802880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.429  qpair failed and we were unable to recover it.
00:26:09.429  [2024-12-09 04:16:37.802974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.429  [2024-12-09 04:16:37.803011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.429  qpair failed and we were unable to recover it.
00:26:09.429  [2024-12-09 04:16:37.803103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.429  [2024-12-09 04:16:37.803129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.429  qpair failed and we were unable to recover it.
00:26:09.429  [2024-12-09 04:16:37.803207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.429  [2024-12-09 04:16:37.803233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.429  qpair failed and we were unable to recover it.
00:26:09.429  [2024-12-09 04:16:37.803462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.429  [2024-12-09 04:16:37.803489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.429  qpair failed and we were unable to recover it.
00:26:09.429  [2024-12-09 04:16:37.803591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.429  [2024-12-09 04:16:37.803618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.429  qpair failed and we were unable to recover it.
00:26:09.429  [2024-12-09 04:16:37.803728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.429  [2024-12-09 04:16:37.803754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.429  qpair failed and we were unable to recover it.
00:26:09.429  [2024-12-09 04:16:37.803869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.429  [2024-12-09 04:16:37.803895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.429  qpair failed and we were unable to recover it.
00:26:09.429  [2024-12-09 04:16:37.803972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.429  [2024-12-09 04:16:37.803997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.429  qpair failed and we were unable to recover it.
00:26:09.429  [2024-12-09 04:16:37.804078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.429  [2024-12-09 04:16:37.804106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.429  qpair failed and we were unable to recover it.
00:26:09.429  [2024-12-09 04:16:37.804207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.429  [2024-12-09 04:16:37.804245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.429  qpair failed and we were unable to recover it.
00:26:09.429  [2024-12-09 04:16:37.804344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.429  [2024-12-09 04:16:37.804373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.429  qpair failed and we were unable to recover it.
00:26:09.429  [2024-12-09 04:16:37.804468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.429  [2024-12-09 04:16:37.804495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.429  qpair failed and we were unable to recover it.
00:26:09.429  [2024-12-09 04:16:37.804583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.429  [2024-12-09 04:16:37.804609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.429  qpair failed and we were unable to recover it.
00:26:09.429  [2024-12-09 04:16:37.804702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.429  [2024-12-09 04:16:37.804734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.429  qpair failed and we were unable to recover it.
00:26:09.429  [2024-12-09 04:16:37.804818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.429  [2024-12-09 04:16:37.804844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.429  qpair failed and we were unable to recover it.
00:26:09.429  [2024-12-09 04:16:37.804924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.429  [2024-12-09 04:16:37.804979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.429  qpair failed and we were unable to recover it.
00:26:09.429  [2024-12-09 04:16:37.805097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.429  [2024-12-09 04:16:37.805125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.429  qpair failed and we were unable to recover it.
00:26:09.429  [2024-12-09 04:16:37.805202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.429  [2024-12-09 04:16:37.805229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.429  qpair failed and we were unable to recover it.
00:26:09.429  [2024-12-09 04:16:37.805324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.429  [2024-12-09 04:16:37.805351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.429  qpair failed and we were unable to recover it.
00:26:09.429  [2024-12-09 04:16:37.805466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.429  [2024-12-09 04:16:37.805491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.429  qpair failed and we were unable to recover it.
00:26:09.429  [2024-12-09 04:16:37.805578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.429  [2024-12-09 04:16:37.805614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.429  qpair failed and we were unable to recover it.
00:26:09.429  [2024-12-09 04:16:37.805724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.429  [2024-12-09 04:16:37.805753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.429  qpair failed and we were unable to recover it.
00:26:09.430  [2024-12-09 04:16:37.805836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.430  [2024-12-09 04:16:37.805867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.430  qpair failed and we were unable to recover it.
00:26:09.430  [2024-12-09 04:16:37.805981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.430  [2024-12-09 04:16:37.806007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.430  qpair failed and we were unable to recover it.
00:26:09.430  [2024-12-09 04:16:37.806084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.430  [2024-12-09 04:16:37.806113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.430  qpair failed and we were unable to recover it.
00:26:09.430  [2024-12-09 04:16:37.806224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.430  [2024-12-09 04:16:37.806249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.430  qpair failed and we were unable to recover it.
00:26:09.430  [2024-12-09 04:16:37.806352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.430  [2024-12-09 04:16:37.806380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.430  qpair failed and we were unable to recover it.
00:26:09.430  [2024-12-09 04:16:37.806464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.430  [2024-12-09 04:16:37.806490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.430  qpair failed and we were unable to recover it.
00:26:09.430  [2024-12-09 04:16:37.806601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.430  [2024-12-09 04:16:37.806626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.430  qpair failed and we were unable to recover it.
00:26:09.430  [2024-12-09 04:16:37.806708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.430  [2024-12-09 04:16:37.806734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.430  qpair failed and we were unable to recover it.
00:26:09.430  [2024-12-09 04:16:37.806809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.430  [2024-12-09 04:16:37.806834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.430  qpair failed and we were unable to recover it.
00:26:09.430  [... the connect() errno=111 / qpair-failure sequence above repeats continuously through 04:16:37.819 for tqpair addresses 0x1818fa0, 0x7fe5b4000b90, 0x7fe5b8000b90, and 0x7fe5c0000b90 ...]
00:26:09.433  [2024-12-09 04:16:37.819509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.433  qpair failed and we were unable to recover it.
00:26:09.433  [2024-12-09 04:16:37.819598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433  [2024-12-09 04:16:37.819624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.433  qpair failed and we were unable to recover it.
00:26:09.433  [2024-12-09 04:16:37.819710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433  [2024-12-09 04:16:37.819738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.433  qpair failed and we were unable to recover it.
00:26:09.433  [2024-12-09 04:16:37.819831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433  [2024-12-09 04:16:37.819858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.433  qpair failed and we were unable to recover it.
00:26:09.433  [2024-12-09 04:16:37.819935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433  [2024-12-09 04:16:37.819962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.433  qpair failed and we were unable to recover it.
00:26:09.433  [2024-12-09 04:16:37.820051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433  [2024-12-09 04:16:37.820077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.433  qpair failed and we were unable to recover it.
00:26:09.433  [2024-12-09 04:16:37.820158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433  [2024-12-09 04:16:37.820184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.433  qpair failed and we were unable to recover it.
00:26:09.433  [2024-12-09 04:16:37.820282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433  [2024-12-09 04:16:37.820308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.433  qpair failed and we were unable to recover it.
00:26:09.433  [2024-12-09 04:16:37.820390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433  [2024-12-09 04:16:37.820416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.433  qpair failed and we were unable to recover it.
00:26:09.433  [2024-12-09 04:16:37.820496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433  [2024-12-09 04:16:37.820522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.433  qpair failed and we were unable to recover it.
00:26:09.433  [2024-12-09 04:16:37.820613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433  [2024-12-09 04:16:37.820650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.433  qpair failed and we were unable to recover it.
00:26:09.433  [2024-12-09 04:16:37.820735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433  [2024-12-09 04:16:37.820761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.433  qpair failed and we were unable to recover it.
00:26:09.433  [2024-12-09 04:16:37.820845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433  [2024-12-09 04:16:37.820874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.433  qpair failed and we were unable to recover it.
00:26:09.433  [2024-12-09 04:16:37.820956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433  [2024-12-09 04:16:37.820983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.433  qpair failed and we were unable to recover it.
00:26:09.433  [2024-12-09 04:16:37.821072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433  [2024-12-09 04:16:37.821104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.433  qpair failed and we were unable to recover it.
00:26:09.433  [2024-12-09 04:16:37.821193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433  [2024-12-09 04:16:37.821220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.433  qpair failed and we were unable to recover it.
00:26:09.433  [2024-12-09 04:16:37.821313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433  [2024-12-09 04:16:37.821340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.433  qpair failed and we were unable to recover it.
00:26:09.433  [2024-12-09 04:16:37.821425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433  [2024-12-09 04:16:37.821459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.433  qpair failed and we were unable to recover it.
00:26:09.433  [2024-12-09 04:16:37.821542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433  [2024-12-09 04:16:37.821569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.433  qpair failed and we were unable to recover it.
00:26:09.433  [2024-12-09 04:16:37.821656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433  [2024-12-09 04:16:37.821682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.433  qpair failed and we were unable to recover it.
00:26:09.433  [2024-12-09 04:16:37.821796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433  [2024-12-09 04:16:37.821822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.433  qpair failed and we were unable to recover it.
00:26:09.433  [2024-12-09 04:16:37.821937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433  [2024-12-09 04:16:37.821965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.433  qpair failed and we were unable to recover it.
00:26:09.433  [2024-12-09 04:16:37.822046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433  [2024-12-09 04:16:37.822076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.433  qpair failed and we were unable to recover it.
00:26:09.433  [2024-12-09 04:16:37.822158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433  [2024-12-09 04:16:37.822193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.433  qpair failed and we were unable to recover it.
00:26:09.433  [2024-12-09 04:16:37.822325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433  [2024-12-09 04:16:37.822355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.433  qpair failed and we were unable to recover it.
00:26:09.433  [2024-12-09 04:16:37.822432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433  [2024-12-09 04:16:37.822458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.433  qpair failed and we were unable to recover it.
00:26:09.433  [2024-12-09 04:16:37.822538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433  [2024-12-09 04:16:37.822568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.433  qpair failed and we were unable to recover it.
00:26:09.433  [2024-12-09 04:16:37.822655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433  [2024-12-09 04:16:37.822684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.433  qpair failed and we were unable to recover it.
00:26:09.433  [2024-12-09 04:16:37.822771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433  [2024-12-09 04:16:37.822799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.433  qpair failed and we were unable to recover it.
00:26:09.433  [2024-12-09 04:16:37.822886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433  [2024-12-09 04:16:37.822913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.433  qpair failed and we were unable to recover it.
00:26:09.433  [2024-12-09 04:16:37.823004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433  [2024-12-09 04:16:37.823030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.433  qpair failed and we were unable to recover it.
00:26:09.433  [2024-12-09 04:16:37.823129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433  [2024-12-09 04:16:37.823156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.433  qpair failed and we were unable to recover it.
00:26:09.433  [2024-12-09 04:16:37.823242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433  [2024-12-09 04:16:37.823269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.433  qpair failed and we were unable to recover it.
00:26:09.433  [2024-12-09 04:16:37.823374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433  [2024-12-09 04:16:37.823401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.433  qpair failed and we were unable to recover it.
00:26:09.433  [2024-12-09 04:16:37.823482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434  [2024-12-09 04:16:37.823510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.434  qpair failed and we were unable to recover it.
00:26:09.434  [2024-12-09 04:16:37.823625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434  [2024-12-09 04:16:37.823652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.434  qpair failed and we were unable to recover it.
00:26:09.434  [2024-12-09 04:16:37.823742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434  [2024-12-09 04:16:37.823770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.434  qpair failed and we were unable to recover it.
00:26:09.434  [2024-12-09 04:16:37.823856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434  [2024-12-09 04:16:37.823886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.434  qpair failed and we were unable to recover it.
00:26:09.434  [2024-12-09 04:16:37.823997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434  [2024-12-09 04:16:37.824026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.434  qpair failed and we were unable to recover it.
00:26:09.434  [2024-12-09 04:16:37.824131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434  [2024-12-09 04:16:37.824170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.434  qpair failed and we were unable to recover it.
00:26:09.434  [2024-12-09 04:16:37.824257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434  [2024-12-09 04:16:37.824303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.434  qpair failed and we were unable to recover it.
00:26:09.434  [2024-12-09 04:16:37.824402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434  [2024-12-09 04:16:37.824430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.434  qpair failed and we were unable to recover it.
00:26:09.434  [2024-12-09 04:16:37.824514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434  [2024-12-09 04:16:37.824539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.434  qpair failed and we were unable to recover it.
00:26:09.434  [2024-12-09 04:16:37.824621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434  [2024-12-09 04:16:37.824647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.434  qpair failed and we were unable to recover it.
00:26:09.434  [2024-12-09 04:16:37.824737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434  [2024-12-09 04:16:37.824765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.434  qpair failed and we were unable to recover it.
00:26:09.434  [2024-12-09 04:16:37.824866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434  [2024-12-09 04:16:37.824893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.434  qpair failed and we were unable to recover it.
00:26:09.434  [2024-12-09 04:16:37.824976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434  [2024-12-09 04:16:37.825030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.434  qpair failed and we were unable to recover it.
00:26:09.434  [2024-12-09 04:16:37.825110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434  [2024-12-09 04:16:37.825136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.434  qpair failed and we were unable to recover it.
00:26:09.434  [2024-12-09 04:16:37.825251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434  [2024-12-09 04:16:37.825294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.434  qpair failed and we were unable to recover it.
00:26:09.434  [2024-12-09 04:16:37.825386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434  [2024-12-09 04:16:37.825412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.434  qpair failed and we were unable to recover it.
00:26:09.434  [2024-12-09 04:16:37.825489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434  [2024-12-09 04:16:37.825531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.434  qpair failed and we were unable to recover it.
00:26:09.434  [2024-12-09 04:16:37.825617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434  [2024-12-09 04:16:37.825644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.434  qpair failed and we were unable to recover it.
00:26:09.434  [2024-12-09 04:16:37.825764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434  [2024-12-09 04:16:37.825790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.434  qpair failed and we were unable to recover it.
00:26:09.434  [2024-12-09 04:16:37.825898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434  [2024-12-09 04:16:37.825925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.434  qpair failed and we were unable to recover it.
00:26:09.434  [2024-12-09 04:16:37.826025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434  [2024-12-09 04:16:37.826052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.434  qpair failed and we were unable to recover it.
00:26:09.434  [2024-12-09 04:16:37.826134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434  [2024-12-09 04:16:37.826162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.434  qpair failed and we were unable to recover it.
00:26:09.434  [2024-12-09 04:16:37.826252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434  [2024-12-09 04:16:37.826291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.434  qpair failed and we were unable to recover it.
00:26:09.434  [2024-12-09 04:16:37.826382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434  [2024-12-09 04:16:37.826410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.434  qpair failed and we were unable to recover it.
00:26:09.434  [2024-12-09 04:16:37.826504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434  [2024-12-09 04:16:37.826546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.434  qpair failed and we were unable to recover it.
00:26:09.434  [2024-12-09 04:16:37.826663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434  [2024-12-09 04:16:37.826690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.434  qpair failed and we were unable to recover it.
00:26:09.434  [2024-12-09 04:16:37.826776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434  [2024-12-09 04:16:37.826802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.434  qpair failed and we were unable to recover it.
00:26:09.434  [2024-12-09 04:16:37.826893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434  [2024-12-09 04:16:37.826921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.434  qpair failed and we were unable to recover it.
00:26:09.434  [2024-12-09 04:16:37.827014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434  [2024-12-09 04:16:37.827045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.434  qpair failed and we were unable to recover it.
00:26:09.434  [2024-12-09 04:16:37.827122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434  [2024-12-09 04:16:37.827149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.434  qpair failed and we were unable to recover it.
00:26:09.434  [2024-12-09 04:16:37.827235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434  [2024-12-09 04:16:37.827269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.434  qpair failed and we were unable to recover it.
00:26:09.434  [2024-12-09 04:16:37.827361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434  [2024-12-09 04:16:37.827391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.434  qpair failed and we were unable to recover it.
00:26:09.434  [2024-12-09 04:16:37.827474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434  [2024-12-09 04:16:37.827499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.434  qpair failed and we were unable to recover it.
00:26:09.434  [2024-12-09 04:16:37.827595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434  [2024-12-09 04:16:37.827628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.434  qpair failed and we were unable to recover it.
00:26:09.434  [2024-12-09 04:16:37.827715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434  [2024-12-09 04:16:37.827740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.434  qpair failed and we were unable to recover it.
00:26:09.434  [2024-12-09 04:16:37.827819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434  [2024-12-09 04:16:37.827845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.434  qpair failed and we were unable to recover it.
00:26:09.434  [2024-12-09 04:16:37.827926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434  [2024-12-09 04:16:37.827951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.434  qpair failed and we were unable to recover it.
00:26:09.434  [2024-12-09 04:16:37.828044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434  [2024-12-09 04:16:37.828082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.434  qpair failed and we were unable to recover it.
00:26:09.434  [2024-12-09 04:16:37.828173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434  [2024-12-09 04:16:37.828201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.434  qpair failed and we were unable to recover it.
00:26:09.434  [2024-12-09 04:16:37.828327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434  [2024-12-09 04:16:37.828356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.434  qpair failed and we were unable to recover it.
00:26:09.434  [2024-12-09 04:16:37.828442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.435  [2024-12-09 04:16:37.828467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.435  qpair failed and we were unable to recover it.
00:26:09.435  [2024-12-09 04:16:37.828544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.435  [2024-12-09 04:16:37.828569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.435  qpair failed and we were unable to recover it.
00:26:09.435  [2024-12-09 04:16:37.828683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.435  [2024-12-09 04:16:37.828709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.435  qpair failed and we were unable to recover it.
00:26:09.435  [2024-12-09 04:16:37.828792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.435  [2024-12-09 04:16:37.828827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.435  qpair failed and we were unable to recover it.
00:26:09.435  [2024-12-09 04:16:37.828907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.435  [2024-12-09 04:16:37.828933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.435  qpair failed and we were unable to recover it.
00:26:09.435  [2024-12-09 04:16:37.829014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.435  [2024-12-09 04:16:37.829040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.435  qpair failed and we were unable to recover it.
00:26:09.435  [2024-12-09 04:16:37.829146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.435  [2024-12-09 04:16:37.829171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.435  qpair failed and we were unable to recover it.
00:26:09.435  [2024-12-09 04:16:37.829280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.435  [2024-12-09 04:16:37.829310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.435  qpair failed and we were unable to recover it.
00:26:09.435  [2024-12-09 04:16:37.829390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.435  [2024-12-09 04:16:37.829417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.435  qpair failed and we were unable to recover it.
00:26:09.435  [2024-12-09 04:16:37.829497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.435  [2024-12-09 04:16:37.829523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.435  qpair failed and we were unable to recover it.
00:26:09.435  [2024-12-09 04:16:37.829622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.435  [2024-12-09 04:16:37.829648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.435  qpair failed and we were unable to recover it.
00:26:09.435  [2024-12-09 04:16:37.829760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.435  [2024-12-09 04:16:37.829786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.435  qpair failed and we were unable to recover it.
00:26:09.435  [2024-12-09 04:16:37.829879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.435  [2024-12-09 04:16:37.829905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.435  qpair failed and we were unable to recover it.
00:26:09.435  [2024-12-09 04:16:37.829984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.435  [2024-12-09 04:16:37.830010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.435  qpair failed and we were unable to recover it.
00:26:09.435  [2024-12-09 04:16:37.830086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.435  [2024-12-09 04:16:37.830111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.435  qpair failed and we were unable to recover it.
00:26:09.435  [2024-12-09 04:16:37.830201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.435  [2024-12-09 04:16:37.830228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.435  qpair failed and we were unable to recover it.
00:26:09.435  [2024-12-09 04:16:37.830347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.435  [2024-12-09 04:16:37.830374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.435  qpair failed and we were unable to recover it.
00:26:09.435  [2024-12-09 04:16:37.830459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.435  [2024-12-09 04:16:37.830485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.435  qpair failed and we were unable to recover it.
00:26:09.435  [2024-12-09 04:16:37.830601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.435  [2024-12-09 04:16:37.830627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.435  qpair failed and we were unable to recover it.
00:26:09.435  [2024-12-09 04:16:37.830704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.435  [2024-12-09 04:16:37.830733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.435  qpair failed and we were unable to recover it.
00:26:09.435  [2024-12-09 04:16:37.830817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.435  [2024-12-09 04:16:37.830847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.435  qpair failed and we were unable to recover it.
00:26:09.435  Malloc0
00:26:09.435  [2024-12-09 04:16:37.830935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.435  [2024-12-09 04:16:37.830960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.435  qpair failed and we were unable to recover it.
00:26:09.435  [2024-12-09 04:16:37.831037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.435  [2024-12-09 04:16:37.831062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.435  qpair failed and we were unable to recover it.
00:26:09.435  [2024-12-09 04:16:37.831176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.435  [2024-12-09 04:16:37.831202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.435  qpair failed and we were unable to recover it.
00:26:09.435   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:09.435  [2024-12-09 04:16:37.831295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.435  [2024-12-09 04:16:37.831321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.435  qpair failed and we were unable to recover it.
00:26:09.435  [2024-12-09 04:16:37.831398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.435  [2024-12-09 04:16:37.831423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.435  qpair failed and we were unable to recover it.
00:26:09.435   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:26:09.435  [2024-12-09 04:16:37.831510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.435  [2024-12-09 04:16:37.831542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.435  qpair failed and we were unable to recover it.
00:26:09.435   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:09.435  [2024-12-09 04:16:37.831667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.435  [2024-12-09 04:16:37.831694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.435   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:09.435  qpair failed and we were unable to recover it.
00:26:09.435  [2024-12-09 04:16:37.831787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.435  [2024-12-09 04:16:37.831815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.435  qpair failed and we were unable to recover it.
00:26:09.435  [2024-12-09 04:16:37.831903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.435  [2024-12-09 04:16:37.831929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.435  qpair failed and we were unable to recover it.
00:26:09.435  [2024-12-09 04:16:37.832023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.435  [2024-12-09 04:16:37.832049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.435  qpair failed and we were unable to recover it.
00:26:09.435  [2024-12-09 04:16:37.832138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.435  [2024-12-09 04:16:37.832163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.435  qpair failed and we were unable to recover it.
00:26:09.435  [2024-12-09 04:16:37.832248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.435  [2024-12-09 04:16:37.832279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.435  qpair failed and we were unable to recover it.
00:26:09.435  [2024-12-09 04:16:37.832377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.435  [2024-12-09 04:16:37.832403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.435  qpair failed and we were unable to recover it.
00:26:09.435  [2024-12-09 04:16:37.832489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.435  [2024-12-09 04:16:37.832516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.435  qpair failed and we were unable to recover it.
00:26:09.435  [2024-12-09 04:16:37.832623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.435  [2024-12-09 04:16:37.832650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.435  qpair failed and we were unable to recover it.
00:26:09.435  [2024-12-09 04:16:37.832740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.435  [2024-12-09 04:16:37.832782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.435  qpair failed and we were unable to recover it.
00:26:09.435  [2024-12-09 04:16:37.832875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.435  [2024-12-09 04:16:37.832903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.435  qpair failed and we were unable to recover it.
00:26:09.435  [2024-12-09 04:16:37.833011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.435  [2024-12-09 04:16:37.833037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.435  qpair failed and we were unable to recover it.
00:26:09.435  [2024-12-09 04:16:37.833127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.435  [2024-12-09 04:16:37.833157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.436  qpair failed and we were unable to recover it.
00:26:09.436  [2024-12-09 04:16:37.833240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.436  [2024-12-09 04:16:37.833278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.436  qpair failed and we were unable to recover it.
00:26:09.436  [2024-12-09 04:16:37.833369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.436  [2024-12-09 04:16:37.833406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.436  qpair failed and we were unable to recover it.
00:26:09.436  [2024-12-09 04:16:37.833491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.436  [2024-12-09 04:16:37.833517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.436  qpair failed and we were unable to recover it.
00:26:09.436  [2024-12-09 04:16:37.833604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.436  [2024-12-09 04:16:37.833629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.436  qpair failed and we were unable to recover it.
00:26:09.436  [2024-12-09 04:16:37.833700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.436  [2024-12-09 04:16:37.833726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.436  qpair failed and we were unable to recover it.
00:26:09.436  [2024-12-09 04:16:37.833822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.436  [2024-12-09 04:16:37.833853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.436  qpair failed and we were unable to recover it.
00:26:09.436  [2024-12-09 04:16:37.833936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.436  [2024-12-09 04:16:37.833963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.436  qpair failed and we were unable to recover it.
00:26:09.436  [2024-12-09 04:16:37.834047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.436  [2024-12-09 04:16:37.834073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.436  qpair failed and we were unable to recover it.
00:26:09.436  [2024-12-09 04:16:37.834186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.436  [2024-12-09 04:16:37.834213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.436  qpair failed and we were unable to recover it.
00:26:09.436  [2024-12-09 04:16:37.834310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.436  [2024-12-09 04:16:37.834336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.436  qpair failed and we were unable to recover it.
00:26:09.436  [2024-12-09 04:16:37.834415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.436  [2024-12-09 04:16:37.834441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.436  qpair failed and we were unable to recover it.
00:26:09.436  [2024-12-09 04:16:37.834516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.436  [2024-12-09 04:16:37.834541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.436  qpair failed and we were unable to recover it.
00:26:09.436  [2024-12-09 04:16:37.834625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.436  [2024-12-09 04:16:37.834621] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:09.436  [2024-12-09 04:16:37.834651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.436  qpair failed and we were unable to recover it.
00:26:09.436  [2024-12-09 04:16:37.834738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.436  [2024-12-09 04:16:37.834762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.436  qpair failed and we were unable to recover it.
00:26:09.436  [2024-12-09 04:16:37.834852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.436  [2024-12-09 04:16:37.834880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.436  qpair failed and we were unable to recover it.
00:26:09.436  [2024-12-09 04:16:37.834979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.436  [2024-12-09 04:16:37.835007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.436  qpair failed and we were unable to recover it.
00:26:09.436  [2024-12-09 04:16:37.835087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.436  [2024-12-09 04:16:37.835116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.436  qpair failed and we were unable to recover it.
00:26:09.436  [2024-12-09 04:16:37.835198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.436  [2024-12-09 04:16:37.835225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.436  qpair failed and we were unable to recover it.
00:26:09.436  [2024-12-09 04:16:37.835356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.436  [2024-12-09 04:16:37.835387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.436  qpair failed and we were unable to recover it.
00:26:09.436  [2024-12-09 04:16:37.835473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.436  [2024-12-09 04:16:37.835499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.436  qpair failed and we were unable to recover it.
00:26:09.436  [2024-12-09 04:16:37.835620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.436  [2024-12-09 04:16:37.835647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.436  qpair failed and we were unable to recover it.
00:26:09.436  [2024-12-09 04:16:37.835732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.436  [2024-12-09 04:16:37.835757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.436  qpair failed and we were unable to recover it.
00:26:09.436  [2024-12-09 04:16:37.835847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.436  [2024-12-09 04:16:37.835874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.436  qpair failed and we were unable to recover it.
00:26:09.436  [2024-12-09 04:16:37.835947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.436  [2024-12-09 04:16:37.835973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.436  qpair failed and we were unable to recover it.
00:26:09.436  [2024-12-09 04:16:37.836054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.436  [2024-12-09 04:16:37.836082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.436  qpair failed and we were unable to recover it.
00:26:09.436  [2024-12-09 04:16:37.836170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.436  [2024-12-09 04:16:37.836196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.436  qpair failed and we were unable to recover it.
00:26:09.436  [2024-12-09 04:16:37.836283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.436  [2024-12-09 04:16:37.836311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.436  qpair failed and we were unable to recover it.
00:26:09.436  [2024-12-09 04:16:37.836399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.436  [2024-12-09 04:16:37.836425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.436  qpair failed and we were unable to recover it.
00:26:09.436  [2024-12-09 04:16:37.836519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.437  [2024-12-09 04:16:37.836550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.437  qpair failed and we were unable to recover it.
00:26:09.437  [2024-12-09 04:16:37.836632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.437  [2024-12-09 04:16:37.836658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.437  qpair failed and we were unable to recover it.
00:26:09.437  [2024-12-09 04:16:37.836775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.437  [2024-12-09 04:16:37.836801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.437  qpair failed and we were unable to recover it.
00:26:09.437  [2024-12-09 04:16:37.836894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.437  [2024-12-09 04:16:37.836926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.437  qpair failed and we were unable to recover it.
00:26:09.437  [2024-12-09 04:16:37.837015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.437  [2024-12-09 04:16:37.837041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.437  qpair failed and we were unable to recover it.
00:26:09.437  [2024-12-09 04:16:37.837129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.437  [2024-12-09 04:16:37.837157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.437  qpair failed and we were unable to recover it.
00:26:09.437  [2024-12-09 04:16:37.837251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.437  [2024-12-09 04:16:37.837289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.437  qpair failed and we were unable to recover it.
00:26:09.437  [2024-12-09 04:16:37.837369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.437  [2024-12-09 04:16:37.837395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.437  qpair failed and we were unable to recover it.
00:26:09.437  [2024-12-09 04:16:37.837467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.437  [2024-12-09 04:16:37.837492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.437  qpair failed and we were unable to recover it.
00:26:09.437  [2024-12-09 04:16:37.837576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.437  [2024-12-09 04:16:37.837602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.437  qpair failed and we were unable to recover it.
00:26:09.437  [2024-12-09 04:16:37.837690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.437  [2024-12-09 04:16:37.837715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.437  qpair failed and we were unable to recover it.
00:26:09.437  [2024-12-09 04:16:37.837795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.437  [2024-12-09 04:16:37.837825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.437  qpair failed and we were unable to recover it.
00:26:09.437  [2024-12-09 04:16:37.837912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.437  [2024-12-09 04:16:37.837949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.437  qpair failed and we were unable to recover it.
00:26:09.437  [2024-12-09 04:16:37.838030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.437  [2024-12-09 04:16:37.838062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.437  qpair failed and we were unable to recover it.
00:26:09.437  [2024-12-09 04:16:37.838144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.437  [2024-12-09 04:16:37.838169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.437  qpair failed and we were unable to recover it.
00:26:09.437  [2024-12-09 04:16:37.838298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.437  [2024-12-09 04:16:37.838326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.437  qpair failed and we were unable to recover it.
00:26:09.437  [2024-12-09 04:16:37.838435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.437  [2024-12-09 04:16:37.838462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.437  qpair failed and we were unable to recover it.
00:26:09.437  [2024-12-09 04:16:37.838550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.437  [2024-12-09 04:16:37.838585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.437  qpair failed and we were unable to recover it.
00:26:09.437  [2024-12-09 04:16:37.838668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.437  [2024-12-09 04:16:37.838694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.437  qpair failed and we were unable to recover it.
00:26:09.437  [2024-12-09 04:16:37.838786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.437  [2024-12-09 04:16:37.838812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.437  qpair failed and we were unable to recover it.
00:26:09.437  [2024-12-09 04:16:37.838899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.437  [2024-12-09 04:16:37.838926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.437  qpair failed and we were unable to recover it.
00:26:09.437  [2024-12-09 04:16:37.839012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.437  [2024-12-09 04:16:37.839037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.437  qpair failed and we were unable to recover it.
00:26:09.437  [2024-12-09 04:16:37.839120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.437  [2024-12-09 04:16:37.839145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.437  qpair failed and we were unable to recover it.
00:26:09.437  [2024-12-09 04:16:37.839224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.437  [2024-12-09 04:16:37.839249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.437  qpair failed and we were unable to recover it.
00:26:09.437  [2024-12-09 04:16:37.839355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.437  [2024-12-09 04:16:37.839383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.437  qpair failed and we were unable to recover it.
00:26:09.437  [2024-12-09 04:16:37.839473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.437  [2024-12-09 04:16:37.839499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.437  qpair failed and we were unable to recover it.
00:26:09.437  [2024-12-09 04:16:37.839597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.437  [2024-12-09 04:16:37.839624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.437  qpair failed and we were unable to recover it.
00:26:09.437  [2024-12-09 04:16:37.839706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.437  [2024-12-09 04:16:37.839736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.437  qpair failed and we were unable to recover it.
00:26:09.437  [2024-12-09 04:16:37.839833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.437  [2024-12-09 04:16:37.839880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.437  qpair failed and we were unable to recover it.
00:26:09.437  [2024-12-09 04:16:37.839972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.437  [2024-12-09 04:16:37.839998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.437  qpair failed and we were unable to recover it.
00:26:09.437  [2024-12-09 04:16:37.840096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.437  [2024-12-09 04:16:37.840123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.437  qpair failed and we were unable to recover it.
00:26:09.437  [2024-12-09 04:16:37.840213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.437  [2024-12-09 04:16:37.840238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.437  qpair failed and we were unable to recover it.
00:26:09.438  [2024-12-09 04:16:37.840325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.438  [2024-12-09 04:16:37.840352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.438  qpair failed and we were unable to recover it.
00:26:09.438  [2024-12-09 04:16:37.840445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.438  [2024-12-09 04:16:37.840471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.438  qpair failed and we were unable to recover it.
00:26:09.438  [2024-12-09 04:16:37.840559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.438  [2024-12-09 04:16:37.840597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.438  qpair failed and we were unable to recover it.
00:26:09.438  [2024-12-09 04:16:37.840679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.438  [2024-12-09 04:16:37.840711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.438  qpair failed and we were unable to recover it.
00:26:09.438  [2024-12-09 04:16:37.840834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.438  [2024-12-09 04:16:37.840862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.438  qpair failed and we were unable to recover it.
00:26:09.438  [2024-12-09 04:16:37.840943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.438  [2024-12-09 04:16:37.840969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.438  qpair failed and we were unable to recover it.
00:26:09.438  [2024-12-09 04:16:37.841052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.438  [2024-12-09 04:16:37.841078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.438  qpair failed and we were unable to recover it.
00:26:09.438  [2024-12-09 04:16:37.841150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.438  [2024-12-09 04:16:37.841176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.438  qpair failed and we were unable to recover it.
00:26:09.438  [2024-12-09 04:16:37.841264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.438  [2024-12-09 04:16:37.841294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.438  qpair failed and we were unable to recover it.
00:26:09.438  [2024-12-09 04:16:37.841385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.438  [2024-12-09 04:16:37.841410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.438  qpair failed and we were unable to recover it.
00:26:09.438  [2024-12-09 04:16:37.841493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.438  [2024-12-09 04:16:37.841519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.438  qpair failed and we were unable to recover it.
00:26:09.438  [2024-12-09 04:16:37.841605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.438  [2024-12-09 04:16:37.841630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.438  qpair failed and we were unable to recover it.
00:26:09.438  [2024-12-09 04:16:37.841718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.438  [2024-12-09 04:16:37.841747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.438  qpair failed and we were unable to recover it.
00:26:09.438  [2024-12-09 04:16:37.841834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.438  [2024-12-09 04:16:37.841861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.438  qpair failed and we were unable to recover it.
00:26:09.438  [2024-12-09 04:16:37.841974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.438  [2024-12-09 04:16:37.842013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.438  qpair failed and we were unable to recover it.
00:26:09.438  [2024-12-09 04:16:37.842100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.438  [2024-12-09 04:16:37.842127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.438  qpair failed and we were unable to recover it.
00:26:09.438  [2024-12-09 04:16:37.842243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.438  [2024-12-09 04:16:37.842269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.438  qpair failed and we were unable to recover it.
00:26:09.438  [2024-12-09 04:16:37.842364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.438  [2024-12-09 04:16:37.842390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.438  qpair failed and we were unable to recover it.
00:26:09.438  [2024-12-09 04:16:37.842481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.438  [2024-12-09 04:16:37.842508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.438  qpair failed and we were unable to recover it.
00:26:09.438  [2024-12-09 04:16:37.842589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.438  [2024-12-09 04:16:37.842616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.438  qpair failed and we were unable to recover it.
00:26:09.438  [2024-12-09 04:16:37.842709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.438  [2024-12-09 04:16:37.842737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.438  qpair failed and we were unable to recover it.
00:26:09.438  [2024-12-09 04:16:37.842828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.438  [2024-12-09 04:16:37.842856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.438   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:09.438  qpair failed and we were unable to recover it.
00:26:09.438  [2024-12-09 04:16:37.842973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.438  [2024-12-09 04:16:37.843009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.438   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:26:09.438  qpair failed and we were unable to recover it.
00:26:09.438  [2024-12-09 04:16:37.843103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.438  [2024-12-09 04:16:37.843130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.438   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:09.438  qpair failed and we were unable to recover it.
00:26:09.438  [2024-12-09 04:16:37.843214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.438  [2024-12-09 04:16:37.843241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.438   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:09.438  qpair failed and we were unable to recover it.
00:26:09.438  [2024-12-09 04:16:37.843334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.438  [2024-12-09 04:16:37.843361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.438  qpair failed and we were unable to recover it.
00:26:09.438  [2024-12-09 04:16:37.843445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.438  [2024-12-09 04:16:37.843471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.438  qpair failed and we were unable to recover it.
00:26:09.438  [2024-12-09 04:16:37.843567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.438  [2024-12-09 04:16:37.843593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.438  qpair failed and we were unable to recover it.
00:26:09.438  [2024-12-09 04:16:37.843676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.438  [2024-12-09 04:16:37.843702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.438  qpair failed and we were unable to recover it.
00:26:09.438  [2024-12-09 04:16:37.843790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.438  [2024-12-09 04:16:37.843816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.438  qpair failed and we were unable to recover it.
00:26:09.438  [2024-12-09 04:16:37.843901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.438  [2024-12-09 04:16:37.843929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.438  qpair failed and we were unable to recover it.
00:26:09.438  [2024-12-09 04:16:37.844017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.438  [2024-12-09 04:16:37.844043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.438  qpair failed and we were unable to recover it.
00:26:09.438  [2024-12-09 04:16:37.844128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.438  [2024-12-09 04:16:37.844170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.438  qpair failed and we were unable to recover it.
00:26:09.438  [2024-12-09 04:16:37.844268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.438  [2024-12-09 04:16:37.844306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.438  qpair failed and we were unable to recover it.
00:26:09.438  [2024-12-09 04:16:37.844398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.438  [2024-12-09 04:16:37.844424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.438  qpair failed and we were unable to recover it.
00:26:09.438  [2024-12-09 04:16:37.844515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.438  [2024-12-09 04:16:37.844542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.438  qpair failed and we were unable to recover it.
00:26:09.438  [2024-12-09 04:16:37.844653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.439  [2024-12-09 04:16:37.844679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.439  qpair failed and we were unable to recover it.
00:26:09.439  [2024-12-09 04:16:37.844763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.439  [2024-12-09 04:16:37.844789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.439  qpair failed and we were unable to recover it.
00:26:09.439  [2024-12-09 04:16:37.844867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.439  [2024-12-09 04:16:37.844895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.439  qpair failed and we were unable to recover it.
00:26:09.439  [2024-12-09 04:16:37.845005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.439  [2024-12-09 04:16:37.845033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.439  qpair failed and we were unable to recover it.
00:26:09.439  [2024-12-09 04:16:37.845137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.439  [2024-12-09 04:16:37.845175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.439  qpair failed and we were unable to recover it.
00:26:09.439  [2024-12-09 04:16:37.845297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.439  [2024-12-09 04:16:37.845325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.439  qpair failed and we were unable to recover it.
00:26:09.439  [2024-12-09 04:16:37.845409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.439  [2024-12-09 04:16:37.845435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.439  qpair failed and we were unable to recover it.
00:26:09.439  [2024-12-09 04:16:37.845514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.439  [2024-12-09 04:16:37.845540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.439  qpair failed and we were unable to recover it.
00:26:09.439  [2024-12-09 04:16:37.845617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.439  [2024-12-09 04:16:37.845643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.439  qpair failed and we were unable to recover it.
00:26:09.439  [2024-12-09 04:16:37.845718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.439  [2024-12-09 04:16:37.845744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.439  qpair failed and we were unable to recover it.
00:26:09.439  [2024-12-09 04:16:37.845863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.439  [2024-12-09 04:16:37.845888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.439  qpair failed and we were unable to recover it.
00:26:09.439  [2024-12-09 04:16:37.845973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.439  [2024-12-09 04:16:37.845998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.439  qpair failed and we were unable to recover it.
00:26:09.439  [2024-12-09 04:16:37.846109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.439  [2024-12-09 04:16:37.846137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.439  qpair failed and we were unable to recover it.
00:26:09.439  [2024-12-09 04:16:37.846245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.439  [2024-12-09 04:16:37.846277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.439  qpair failed and we were unable to recover it.
00:26:09.439  [2024-12-09 04:16:37.846360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.439  [2024-12-09 04:16:37.846391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.439  qpair failed and we were unable to recover it.
00:26:09.439  [2024-12-09 04:16:37.846472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.439  [2024-12-09 04:16:37.846498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.439  qpair failed and we were unable to recover it.
00:26:09.439  [2024-12-09 04:16:37.846583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.439  [2024-12-09 04:16:37.846610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.439  qpair failed and we were unable to recover it.
00:26:09.439  [2024-12-09 04:16:37.846697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.439  [2024-12-09 04:16:37.846723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.439  qpair failed and we were unable to recover it.
00:26:09.439  [2024-12-09 04:16:37.846800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.439  [2024-12-09 04:16:37.846834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.439  qpair failed and we were unable to recover it.
00:26:09.439  [2024-12-09 04:16:37.846914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.439  [2024-12-09 04:16:37.846939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.439  qpair failed and we were unable to recover it.
00:26:09.439  [2024-12-09 04:16:37.847023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.439  [2024-12-09 04:16:37.847052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.439  qpair failed and we were unable to recover it.
00:26:09.439  [2024-12-09 04:16:37.847136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.439  [2024-12-09 04:16:37.847163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.439  qpair failed and we were unable to recover it.
00:26:09.439  [2024-12-09 04:16:37.847246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.439  [2024-12-09 04:16:37.847277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.439  qpair failed and we were unable to recover it.
00:26:09.439  [2024-12-09 04:16:37.847379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.439  [2024-12-09 04:16:37.847406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.439  qpair failed and we were unable to recover it.
00:26:09.439  [2024-12-09 04:16:37.847490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.439  [2024-12-09 04:16:37.847516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.439  qpair failed and we were unable to recover it.
00:26:09.439  [2024-12-09 04:16:37.847592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.439  [2024-12-09 04:16:37.847617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.439  qpair failed and we were unable to recover it.
00:26:09.439  [2024-12-09 04:16:37.847705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.439  [2024-12-09 04:16:37.847732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.439  qpair failed and we were unable to recover it.
00:26:09.439  [2024-12-09 04:16:37.847816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.439  [2024-12-09 04:16:37.847859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.439  qpair failed and we were unable to recover it.
00:26:09.439  [2024-12-09 04:16:37.847950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.439  [2024-12-09 04:16:37.847976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.439  qpair failed and we were unable to recover it.
00:26:09.439  [2024-12-09 04:16:37.848058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.439  [2024-12-09 04:16:37.848084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.439  qpair failed and we were unable to recover it.
00:26:09.439  [2024-12-09 04:16:37.848158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.439  [2024-12-09 04:16:37.848184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.439  qpair failed and we were unable to recover it.
00:26:09.439  [2024-12-09 04:16:37.848262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.439  [2024-12-09 04:16:37.848293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.439  qpair failed and we were unable to recover it.
00:26:09.439  [2024-12-09 04:16:37.848385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.439  [2024-12-09 04:16:37.848413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.439  qpair failed and we were unable to recover it.
00:26:09.439  [2024-12-09 04:16:37.848500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.439  [2024-12-09 04:16:37.848525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.439  qpair failed and we were unable to recover it.
00:26:09.439  [2024-12-09 04:16:37.848598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.439  [2024-12-09 04:16:37.848623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.439  qpair failed and we were unable to recover it.
00:26:09.439  [2024-12-09 04:16:37.848757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.439  [2024-12-09 04:16:37.848783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.439  qpair failed and we were unable to recover it.
00:26:09.439  [2024-12-09 04:16:37.848863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.439  [2024-12-09 04:16:37.848889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.439  qpair failed and we were unable to recover it.
00:26:09.439  [2024-12-09 04:16:37.848972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.439  [2024-12-09 04:16:37.848998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.439  qpair failed and we were unable to recover it.
00:26:09.440  [2024-12-09 04:16:37.849106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.440  [2024-12-09 04:16:37.849131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.440  qpair failed and we were unable to recover it.
00:26:09.440  [2024-12-09 04:16:37.849212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.440  [2024-12-09 04:16:37.849238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.440  qpair failed and we were unable to recover it.
00:26:09.440  [2024-12-09 04:16:37.849333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.440  [2024-12-09 04:16:37.849359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.440  qpair failed and we were unable to recover it.
00:26:09.440  [2024-12-09 04:16:37.849438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.440  [2024-12-09 04:16:37.849477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.440  qpair failed and we were unable to recover it.
00:26:09.440  [2024-12-09 04:16:37.849558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.440  [2024-12-09 04:16:37.849583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.440  qpair failed and we were unable to recover it.
00:26:09.440  [2024-12-09 04:16:37.849663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.440  [2024-12-09 04:16:37.849692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.440  qpair failed and we were unable to recover it.
00:26:09.440  [2024-12-09 04:16:37.849781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.440  [2024-12-09 04:16:37.849809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.440  qpair failed and we were unable to recover it.
00:26:09.440  [2024-12-09 04:16:37.849893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.440  [2024-12-09 04:16:37.849926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.440  qpair failed and we were unable to recover it.
00:26:09.440  [2024-12-09 04:16:37.850041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.440  [2024-12-09 04:16:37.850067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.440  qpair failed and we were unable to recover it.
00:26:09.440  [2024-12-09 04:16:37.850177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.440  [2024-12-09 04:16:37.850205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.440  qpair failed and we were unable to recover it.
00:26:09.440  [2024-12-09 04:16:37.850335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.440  [2024-12-09 04:16:37.850374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.440  qpair failed and we were unable to recover it.
00:26:09.440  [2024-12-09 04:16:37.850456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.440  [2024-12-09 04:16:37.850483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.440  qpair failed and we were unable to recover it.
00:26:09.440  [2024-12-09 04:16:37.850573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.440  [2024-12-09 04:16:37.850600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.440  qpair failed and we were unable to recover it.
00:26:09.440  [2024-12-09 04:16:37.850675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.440  [2024-12-09 04:16:37.850701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.440  qpair failed and we were unable to recover it.
00:26:09.440  [2024-12-09 04:16:37.850777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.440  [2024-12-09 04:16:37.850803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.440  qpair failed and we were unable to recover it.
00:26:09.440   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:09.440  [2024-12-09 04:16:37.850893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.440  [2024-12-09 04:16:37.850920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.440  qpair failed and we were unable to recover it.
00:26:09.440   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:26:09.440  [2024-12-09 04:16:37.851012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.440  [2024-12-09 04:16:37.851040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.440  qpair failed and we were unable to recover it.
00:26:09.440  [2024-12-09 04:16:37.851122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.440   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:09.440  [2024-12-09 04:16:37.851148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.440  qpair failed and we were unable to recover it.
00:26:09.440  [2024-12-09 04:16:37.851233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.440   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:09.440  [2024-12-09 04:16:37.851259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.440  qpair failed and we were unable to recover it.
00:26:09.440  [2024-12-09 04:16:37.851365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.440  [2024-12-09 04:16:37.851393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.440  qpair failed and we were unable to recover it.
00:26:09.440  [2024-12-09 04:16:37.851469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.440  [2024-12-09 04:16:37.851510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.440  qpair failed and we were unable to recover it.
00:26:09.440  [2024-12-09 04:16:37.851635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.440  [2024-12-09 04:16:37.851662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.440  qpair failed and we were unable to recover it.
00:26:09.440  [2024-12-09 04:16:37.851771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.440  [2024-12-09 04:16:37.851796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.440  qpair failed and we were unable to recover it.
00:26:09.440  [2024-12-09 04:16:37.851869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.440  [2024-12-09 04:16:37.851895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.440  qpair failed and we were unable to recover it.
00:26:09.440  [2024-12-09 04:16:37.852016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.440  [2024-12-09 04:16:37.852043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.440  qpair failed and we were unable to recover it.
00:26:09.440  [2024-12-09 04:16:37.852124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.440  [2024-12-09 04:16:37.852149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.440  qpair failed and we were unable to recover it.
00:26:09.440  [2024-12-09 04:16:37.852225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.440  [2024-12-09 04:16:37.852251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.440  qpair failed and we were unable to recover it.
00:26:09.440  [2024-12-09 04:16:37.852345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.440  [2024-12-09 04:16:37.852372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.440  qpair failed and we were unable to recover it.
00:26:09.440  [2024-12-09 04:16:37.852458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.440  [2024-12-09 04:16:37.852491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.440  qpair failed and we were unable to recover it.
00:26:09.440  [2024-12-09 04:16:37.852586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.440  [2024-12-09 04:16:37.852624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.440  qpair failed and we were unable to recover it.
00:26:09.440  [2024-12-09 04:16:37.852705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.440  [2024-12-09 04:16:37.852732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.440  qpair failed and we were unable to recover it.
00:26:09.440  [2024-12-09 04:16:37.852818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.440  [2024-12-09 04:16:37.852845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.440  qpair failed and we were unable to recover it.
00:26:09.440  [2024-12-09 04:16:37.852931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.440  [2024-12-09 04:16:37.852970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.440  qpair failed and we were unable to recover it.
00:26:09.440  [2024-12-09 04:16:37.853052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.440  [2024-12-09 04:16:37.853079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.440  qpair failed and we were unable to recover it.
00:26:09.440  [2024-12-09 04:16:37.853168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.440  [2024-12-09 04:16:37.853195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.440  qpair failed and we were unable to recover it.
00:26:09.440  [2024-12-09 04:16:37.853307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.440  [2024-12-09 04:16:37.853334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.440  qpair failed and we were unable to recover it.
00:26:09.441  [2024-12-09 04:16:37.853417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.441  [2024-12-09 04:16:37.853443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.441  qpair failed and we were unable to recover it.
00:26:09.441  [2024-12-09 04:16:37.853526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.441  [2024-12-09 04:16:37.853551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.441  qpair failed and we were unable to recover it.
00:26:09.441  [2024-12-09 04:16:37.853628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.441  [2024-12-09 04:16:37.853654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.441  qpair failed and we were unable to recover it.
00:26:09.441  [2024-12-09 04:16:37.853730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.441  [2024-12-09 04:16:37.853756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.441  qpair failed and we were unable to recover it.
00:26:09.441  [2024-12-09 04:16:37.853832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.441  [2024-12-09 04:16:37.853857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.441  qpair failed and we were unable to recover it.
00:26:09.441  [2024-12-09 04:16:37.853941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.441  [2024-12-09 04:16:37.853967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.441  qpair failed and we were unable to recover it.
00:26:09.441  [2024-12-09 04:16:37.854065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.441  [2024-12-09 04:16:37.854095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.441  qpair failed and we were unable to recover it.
00:26:09.441  [2024-12-09 04:16:37.854196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.441  [2024-12-09 04:16:37.854225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.441  qpair failed and we were unable to recover it.
00:26:09.441  [2024-12-09 04:16:37.854329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.441  [2024-12-09 04:16:37.854363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.441  qpair failed and we were unable to recover it.
00:26:09.441  [2024-12-09 04:16:37.854447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.441  [2024-12-09 04:16:37.854473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.441  qpair failed and we were unable to recover it.
00:26:09.441  [2024-12-09 04:16:37.854561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.441  [2024-12-09 04:16:37.854587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.441  qpair failed and we were unable to recover it.
00:26:09.441  [2024-12-09 04:16:37.854671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.441  [2024-12-09 04:16:37.854697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.441  qpair failed and we were unable to recover it.
00:26:09.441  [2024-12-09 04:16:37.854813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.441  [2024-12-09 04:16:37.854841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.441  qpair failed and we were unable to recover it.
00:26:09.441  [2024-12-09 04:16:37.854931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.441  [2024-12-09 04:16:37.854959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.441  qpair failed and we were unable to recover it.
00:26:09.441  [2024-12-09 04:16:37.855073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.441  [2024-12-09 04:16:37.855099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.441  qpair failed and we were unable to recover it.
00:26:09.441  [2024-12-09 04:16:37.855189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.441  [2024-12-09 04:16:37.855215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.441  qpair failed and we were unable to recover it.
00:26:09.441  [2024-12-09 04:16:37.855301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.441  [2024-12-09 04:16:37.855328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.441  qpair failed and we were unable to recover it.
00:26:09.441  [2024-12-09 04:16:37.855407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.441  [2024-12-09 04:16:37.855432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.441  qpair failed and we were unable to recover it.
00:26:09.441  [2024-12-09 04:16:37.855515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.441  [2024-12-09 04:16:37.855542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.441  qpair failed and we were unable to recover it.
00:26:09.441  [2024-12-09 04:16:37.855637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.441  [2024-12-09 04:16:37.855664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.441  qpair failed and we were unable to recover it.
00:26:09.441  [2024-12-09 04:16:37.855772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.441  [2024-12-09 04:16:37.855798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.441  qpair failed and we were unable to recover it.
00:26:09.441  [2024-12-09 04:16:37.855880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.441  [2024-12-09 04:16:37.855905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.441  qpair failed and we were unable to recover it.
00:26:09.441  [2024-12-09 04:16:37.855985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.441  [2024-12-09 04:16:37.856010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.441  qpair failed and we were unable to recover it.
00:26:09.441  [2024-12-09 04:16:37.856108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.441  [2024-12-09 04:16:37.856133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.441  qpair failed and we were unable to recover it.
00:26:09.441  [2024-12-09 04:16:37.856220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.441  [2024-12-09 04:16:37.856246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.441  qpair failed and we were unable to recover it.
00:26:09.441  [2024-12-09 04:16:37.856331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.441  [2024-12-09 04:16:37.856357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.441  qpair failed and we were unable to recover it.
00:26:09.441  [2024-12-09 04:16:37.856434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.441  [2024-12-09 04:16:37.856459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.441  qpair failed and we were unable to recover it.
00:26:09.441  [2024-12-09 04:16:37.856539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.441  [2024-12-09 04:16:37.856565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.441  qpair failed and we were unable to recover it.
00:26:09.441  [2024-12-09 04:16:37.856650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.441  [2024-12-09 04:16:37.856677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.441  qpair failed and we were unable to recover it.
00:26:09.441  [2024-12-09 04:16:37.856754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.441  [2024-12-09 04:16:37.856780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.441  qpair failed and we were unable to recover it.
00:26:09.441  [2024-12-09 04:16:37.856868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.441  [2024-12-09 04:16:37.856895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.441  qpair failed and we were unable to recover it.
00:26:09.441  [2024-12-09 04:16:37.856970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.441  [2024-12-09 04:16:37.856996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.441  qpair failed and we were unable to recover it.
00:26:09.442  [2024-12-09 04:16:37.857107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.442  [2024-12-09 04:16:37.857133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.442  qpair failed and we were unable to recover it.
00:26:09.442  [2024-12-09 04:16:37.857225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.442  [2024-12-09 04:16:37.857254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.442  qpair failed and we were unable to recover it.
00:26:09.442  [2024-12-09 04:16:37.857350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.442  [2024-12-09 04:16:37.857377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.442  qpair failed and we were unable to recover it.
00:26:09.442  [2024-12-09 04:16:37.857453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.442  [2024-12-09 04:16:37.857480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.442  qpair failed and we were unable to recover it.
00:26:09.442  [2024-12-09 04:16:37.857557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.442  [2024-12-09 04:16:37.857582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.442  qpair failed and we were unable to recover it.
00:26:09.442  [2024-12-09 04:16:37.857667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.442  [2024-12-09 04:16:37.857692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.442  qpair failed and we were unable to recover it.
00:26:09.442  [2024-12-09 04:16:37.857767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.442  [2024-12-09 04:16:37.857792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.442  qpair failed and we were unable to recover it.
00:26:09.442  [2024-12-09 04:16:37.857884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.442  [2024-12-09 04:16:37.857911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.442  qpair failed and we were unable to recover it.
00:26:09.442  [2024-12-09 04:16:37.857996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.442  [2024-12-09 04:16:37.858024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.442  qpair failed and we were unable to recover it.
00:26:09.442  [2024-12-09 04:16:37.858112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.442  [2024-12-09 04:16:37.858141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.442  qpair failed and we were unable to recover it.
00:26:09.442  [2024-12-09 04:16:37.858231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.442  [2024-12-09 04:16:37.858259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.442  qpair failed and we were unable to recover it.
00:26:09.442  [2024-12-09 04:16:37.858351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.442  [2024-12-09 04:16:37.858385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.442  qpair failed and we were unable to recover it.
00:26:09.442  [2024-12-09 04:16:37.858496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.442  [2024-12-09 04:16:37.858523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.442  qpair failed and we were unable to recover it.
00:26:09.442  [2024-12-09 04:16:37.858609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.442  [2024-12-09 04:16:37.858636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.442  qpair failed and we were unable to recover it.
00:26:09.442  [2024-12-09 04:16:37.858738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.442  [2024-12-09 04:16:37.858767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.442  qpair failed and we were unable to recover it.
00:26:09.442  [2024-12-09 04:16:37.858852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.442   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:09.442  [2024-12-09 04:16:37.858888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.442  qpair failed and we were unable to recover it.
00:26:09.442  [2024-12-09 04:16:37.858965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.442  [2024-12-09 04:16:37.858992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.442   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:09.442  qpair failed and we were unable to recover it.
00:26:09.442  [2024-12-09 04:16:37.859089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.442  [2024-12-09 04:16:37.859116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.442  qpair failed and we were unable to recover it.
00:26:09.442   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:09.442  [2024-12-09 04:16:37.859232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.442   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:09.442  [2024-12-09 04:16:37.859257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.442  qpair failed and we were unable to recover it.
00:26:09.442  [2024-12-09 04:16:37.859367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.442  [2024-12-09 04:16:37.859393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.442  qpair failed and we were unable to recover it.
00:26:09.442  [2024-12-09 04:16:37.859479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.442  [2024-12-09 04:16:37.859505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.442  qpair failed and we were unable to recover it.
00:26:09.442  [2024-12-09 04:16:37.859583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.442  [2024-12-09 04:16:37.859609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.442  qpair failed and we were unable to recover it.
00:26:09.442  [2024-12-09 04:16:37.859688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.442  [2024-12-09 04:16:37.859713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.442  qpair failed and we were unable to recover it.
00:26:09.442  [2024-12-09 04:16:37.859821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.442  [2024-12-09 04:16:37.859846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.442  qpair failed and we were unable to recover it.
00:26:09.442  [2024-12-09 04:16:37.859943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.442  [2024-12-09 04:16:37.859968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.442  qpair failed and we were unable to recover it.
00:26:09.442  [2024-12-09 04:16:37.860051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.442  [2024-12-09 04:16:37.860078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.442  qpair failed and we were unable to recover it.
00:26:09.442  [2024-12-09 04:16:37.860164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.442  [2024-12-09 04:16:37.860189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.442  qpair failed and we were unable to recover it.
00:26:09.442  [2024-12-09 04:16:37.860281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.442  [2024-12-09 04:16:37.860309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.442  qpair failed and we were unable to recover it.
00:26:09.442  [2024-12-09 04:16:37.860389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.442  [2024-12-09 04:16:37.860416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.442  qpair failed and we were unable to recover it.
00:26:09.442  [2024-12-09 04:16:37.860498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.442  [2024-12-09 04:16:37.860525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.442  qpair failed and we were unable to recover it.
00:26:09.442  [2024-12-09 04:16:37.860614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.442  [2024-12-09 04:16:37.860641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.442  qpair failed and we were unable to recover it.
00:26:09.442  [2024-12-09 04:16:37.860722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.442  [2024-12-09 04:16:37.860752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.442  qpair failed and we were unable to recover it.
00:26:09.442  [2024-12-09 04:16:37.860840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.442  [2024-12-09 04:16:37.860868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.442  qpair failed and we were unable to recover it.
00:26:09.442  [2024-12-09 04:16:37.860953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.442  [2024-12-09 04:16:37.860982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.442  qpair failed and we were unable to recover it.
00:26:09.442  [2024-12-09 04:16:37.861068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.442  [2024-12-09 04:16:37.861093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.442  qpair failed and we were unable to recover it.
00:26:09.442  [2024-12-09 04:16:37.861169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.442  [2024-12-09 04:16:37.861194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.442  qpair failed and we were unable to recover it.
00:26:09.443  [2024-12-09 04:16:37.861305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.443  [2024-12-09 04:16:37.861331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.443  qpair failed and we were unable to recover it.
00:26:09.443  [2024-12-09 04:16:37.861412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.443  [2024-12-09 04:16:37.861437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:09.443  qpair failed and we were unable to recover it.
00:26:09.443  [2024-12-09 04:16:37.861522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.443  [2024-12-09 04:16:37.861549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.443  qpair failed and we were unable to recover it.
00:26:09.443  [2024-12-09 04:16:37.861633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.443  [2024-12-09 04:16:37.861661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.443  qpair failed and we were unable to recover it.
00:26:09.443  [2024-12-09 04:16:37.861746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.443  [2024-12-09 04:16:37.861772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.443  qpair failed and we were unable to recover it.
00:26:09.443  [2024-12-09 04:16:37.861857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.443  [2024-12-09 04:16:37.861884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.443  qpair failed and we were unable to recover it.
00:26:09.443  [2024-12-09 04:16:37.861966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.443  [2024-12-09 04:16:37.861991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:09.443  qpair failed and we were unable to recover it.
00:26:09.443  [2024-12-09 04:16:37.862074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.443  [2024-12-09 04:16:37.862102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.443  qpair failed and we were unable to recover it.
00:26:09.443  [2024-12-09 04:16:37.862211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.443  [2024-12-09 04:16:37.862237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:09.443  qpair failed and we were unable to recover it.
00:26:09.443  [2024-12-09 04:16:37.862325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.443  [2024-12-09 04:16:37.862355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.443  qpair failed and we were unable to recover it.
00:26:09.443  [2024-12-09 04:16:37.862440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.443  [2024-12-09 04:16:37.862467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.443  qpair failed and we were unable to recover it.
00:26:09.443  [2024-12-09 04:16:37.862548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.443  [2024-12-09 04:16:37.862574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.443  qpair failed and we were unable to recover it.
00:26:09.443  [2024-12-09 04:16:37.862657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.443  [2024-12-09 04:16:37.862683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:09.443  qpair failed and we were unable to recover it.
00:26:09.443  [2024-12-09 04:16:37.863129] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:09.443  [2024-12-09 04:16:37.865462] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.443  [2024-12-09 04:16:37.865587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.443  [2024-12-09 04:16:37.865615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.443  [2024-12-09 04:16:37.865631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.443  [2024-12-09 04:16:37.865643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.443  [2024-12-09 04:16:37.865678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.443  qpair failed and we were unable to recover it.
00:26:09.443   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:09.443   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:26:09.443   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:09.443   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:09.443   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:09.443   04:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 345231
00:26:09.443  [2024-12-09 04:16:37.875250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.443  [2024-12-09 04:16:37.875352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.443  [2024-12-09 04:16:37.875387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.443  [2024-12-09 04:16:37.875402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.443  [2024-12-09 04:16:37.875414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.443  [2024-12-09 04:16:37.875445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.443  qpair failed and we were unable to recover it.
00:26:09.443  [2024-12-09 04:16:37.885267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.443  [2024-12-09 04:16:37.885364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.443  [2024-12-09 04:16:37.885391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.443  [2024-12-09 04:16:37.885406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.443  [2024-12-09 04:16:37.885418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.443  [2024-12-09 04:16:37.885449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.443  qpair failed and we were unable to recover it.
00:26:09.443  [2024-12-09 04:16:37.895299] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.443  [2024-12-09 04:16:37.895395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.443  [2024-12-09 04:16:37.895420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.443  [2024-12-09 04:16:37.895434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.443  [2024-12-09 04:16:37.895446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.443  [2024-12-09 04:16:37.895476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.443  qpair failed and we were unable to recover it.
00:26:09.443  [2024-12-09 04:16:37.905250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.443  [2024-12-09 04:16:37.905340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.443  [2024-12-09 04:16:37.905365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.443  [2024-12-09 04:16:37.905385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.443  [2024-12-09 04:16:37.905398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.443  [2024-12-09 04:16:37.905429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.443  qpair failed and we were unable to recover it.
00:26:09.443  [2024-12-09 04:16:37.915255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.443  [2024-12-09 04:16:37.915346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.443  [2024-12-09 04:16:37.915375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.443  [2024-12-09 04:16:37.915389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.443  [2024-12-09 04:16:37.915402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.443  [2024-12-09 04:16:37.915432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.443  qpair failed and we were unable to recover it.
00:26:09.443  [2024-12-09 04:16:37.925368] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.443  [2024-12-09 04:16:37.925450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.443  [2024-12-09 04:16:37.925476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.443  [2024-12-09 04:16:37.925491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.443  [2024-12-09 04:16:37.925503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.443  [2024-12-09 04:16:37.925533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.443  qpair failed and we were unable to recover it.
00:26:09.443  [2024-12-09 04:16:37.935329] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.443  [2024-12-09 04:16:37.935430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.443  [2024-12-09 04:16:37.935456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.444  [2024-12-09 04:16:37.935471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.444  [2024-12-09 04:16:37.935484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.444  [2024-12-09 04:16:37.935514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.444  qpair failed and we were unable to recover it.
00:26:09.703  [2024-12-09 04:16:37.945411] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.703  [2024-12-09 04:16:37.945497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.703  [2024-12-09 04:16:37.945521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.703  [2024-12-09 04:16:37.945535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.703  [2024-12-09 04:16:37.945548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.703  [2024-12-09 04:16:37.945584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.703  qpair failed and we were unable to recover it.
00:26:09.703  [2024-12-09 04:16:37.955497] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.703  [2024-12-09 04:16:37.955585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.703  [2024-12-09 04:16:37.955610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.703  [2024-12-09 04:16:37.955624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.703  [2024-12-09 04:16:37.955635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.703  [2024-12-09 04:16:37.955665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.703  qpair failed and we were unable to recover it.
00:26:09.703  [2024-12-09 04:16:37.965443] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.703  [2024-12-09 04:16:37.965529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.703  [2024-12-09 04:16:37.965559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.703  [2024-12-09 04:16:37.965575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.703  [2024-12-09 04:16:37.965587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.703  [2024-12-09 04:16:37.965617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.703  qpair failed and we were unable to recover it.
00:26:09.703  [2024-12-09 04:16:37.975436] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.703  [2024-12-09 04:16:37.975528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.703  [2024-12-09 04:16:37.975563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.703  [2024-12-09 04:16:37.975578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.703  [2024-12-09 04:16:37.975591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.703  [2024-12-09 04:16:37.975621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.703  qpair failed and we were unable to recover it.
00:26:09.703  [2024-12-09 04:16:37.985435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.703  [2024-12-09 04:16:37.985519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.703  [2024-12-09 04:16:37.985543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.703  [2024-12-09 04:16:37.985558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.703  [2024-12-09 04:16:37.985573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.703  [2024-12-09 04:16:37.985603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.703  qpair failed and we were unable to recover it.
00:26:09.703  [2024-12-09 04:16:37.995470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.703  [2024-12-09 04:16:37.995562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.703  [2024-12-09 04:16:37.995586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.703  [2024-12-09 04:16:37.995600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.703  [2024-12-09 04:16:37.995613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.703  [2024-12-09 04:16:37.995642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.703  qpair failed and we were unable to recover it.
00:26:09.703  [2024-12-09 04:16:38.005607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.703  [2024-12-09 04:16:38.005697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.703  [2024-12-09 04:16:38.005722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.703  [2024-12-09 04:16:38.005736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.703  [2024-12-09 04:16:38.005749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.704  [2024-12-09 04:16:38.005779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.704  qpair failed and we were unable to recover it.
00:26:09.704  [2024-12-09 04:16:38.015550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.704  [2024-12-09 04:16:38.015640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.704  [2024-12-09 04:16:38.015667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.704  [2024-12-09 04:16:38.015681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.704  [2024-12-09 04:16:38.015694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.704  [2024-12-09 04:16:38.015724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.704  qpair failed and we were unable to recover it.
00:26:09.704  [2024-12-09 04:16:38.025556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.704  [2024-12-09 04:16:38.025650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.704  [2024-12-09 04:16:38.025680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.704  [2024-12-09 04:16:38.025695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.704  [2024-12-09 04:16:38.025707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.704  [2024-12-09 04:16:38.025737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.704  qpair failed and we were unable to recover it.
00:26:09.704  [2024-12-09 04:16:38.035606] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.704  [2024-12-09 04:16:38.035700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.704  [2024-12-09 04:16:38.035731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.704  [2024-12-09 04:16:38.035747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.704  [2024-12-09 04:16:38.035759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.704  [2024-12-09 04:16:38.035789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.704  qpair failed and we were unable to recover it.
00:26:09.704  [2024-12-09 04:16:38.045636] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.704  [2024-12-09 04:16:38.045722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.704  [2024-12-09 04:16:38.045747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.704  [2024-12-09 04:16:38.045761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.704  [2024-12-09 04:16:38.045772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.704  [2024-12-09 04:16:38.045802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.704  qpair failed and we were unable to recover it.
00:26:09.704  [2024-12-09 04:16:38.055678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.704  [2024-12-09 04:16:38.055772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.704  [2024-12-09 04:16:38.055800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.704  [2024-12-09 04:16:38.055814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.704  [2024-12-09 04:16:38.055827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.704  [2024-12-09 04:16:38.055856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.704  qpair failed and we were unable to recover it.
00:26:09.704  [2024-12-09 04:16:38.065694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.704  [2024-12-09 04:16:38.065785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.704  [2024-12-09 04:16:38.065815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.704  [2024-12-09 04:16:38.065829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.704  [2024-12-09 04:16:38.065842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.704  [2024-12-09 04:16:38.065872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.704  qpair failed and we were unable to recover it.
00:26:09.704  [2024-12-09 04:16:38.075731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.704  [2024-12-09 04:16:38.075817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.704  [2024-12-09 04:16:38.075841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.704  [2024-12-09 04:16:38.075855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.704  [2024-12-09 04:16:38.075873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.704  [2024-12-09 04:16:38.075903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.704  qpair failed and we were unable to recover it.
00:26:09.704  [2024-12-09 04:16:38.085712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.704  [2024-12-09 04:16:38.085799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.704  [2024-12-09 04:16:38.085824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.704  [2024-12-09 04:16:38.085839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.704  [2024-12-09 04:16:38.085851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.704  [2024-12-09 04:16:38.085881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.704  qpair failed and we were unable to recover it.
00:26:09.704  [2024-12-09 04:16:38.095785] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.704  [2024-12-09 04:16:38.095875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.704  [2024-12-09 04:16:38.095905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.704  [2024-12-09 04:16:38.095920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.704  [2024-12-09 04:16:38.095932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.704  [2024-12-09 04:16:38.095961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.704  qpair failed and we were unable to recover it.
00:26:09.704  [2024-12-09 04:16:38.105770] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.704  [2024-12-09 04:16:38.105859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.704  [2024-12-09 04:16:38.105885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.704  [2024-12-09 04:16:38.105900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.704  [2024-12-09 04:16:38.105912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.704  [2024-12-09 04:16:38.105941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.704  qpair failed and we were unable to recover it.
00:26:09.704  [2024-12-09 04:16:38.115828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.704  [2024-12-09 04:16:38.115916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.704  [2024-12-09 04:16:38.115941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.704  [2024-12-09 04:16:38.115956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.704  [2024-12-09 04:16:38.115968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.704  [2024-12-09 04:16:38.115997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.704  qpair failed and we were unable to recover it.
00:26:09.704  [2024-12-09 04:16:38.125915] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.704  [2024-12-09 04:16:38.125996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.704  [2024-12-09 04:16:38.126021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.704  [2024-12-09 04:16:38.126035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.704  [2024-12-09 04:16:38.126047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.704  [2024-12-09 04:16:38.126077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.704  qpair failed and we were unable to recover it.
00:26:09.704  [2024-12-09 04:16:38.135860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.704  [2024-12-09 04:16:38.135945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.704  [2024-12-09 04:16:38.135971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.704  [2024-12-09 04:16:38.135986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.704  [2024-12-09 04:16:38.135998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.704  [2024-12-09 04:16:38.136028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.704  qpair failed and we were unable to recover it.
00:26:09.704  [2024-12-09 04:16:38.145884] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.704  [2024-12-09 04:16:38.145972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.704  [2024-12-09 04:16:38.145997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.704  [2024-12-09 04:16:38.146012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.704  [2024-12-09 04:16:38.146024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.704  [2024-12-09 04:16:38.146055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.704  qpair failed and we were unable to recover it.
00:26:09.704  [2024-12-09 04:16:38.155898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.704  [2024-12-09 04:16:38.155979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.704  [2024-12-09 04:16:38.156004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.704  [2024-12-09 04:16:38.156018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.704  [2024-12-09 04:16:38.156030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.704  [2024-12-09 04:16:38.156060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.704  qpair failed and we were unable to recover it.
00:26:09.704  [2024-12-09 04:16:38.165952] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.704  [2024-12-09 04:16:38.166037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.704  [2024-12-09 04:16:38.166068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.704  [2024-12-09 04:16:38.166083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.704  [2024-12-09 04:16:38.166096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.704  [2024-12-09 04:16:38.166126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.704  qpair failed and we were unable to recover it.
00:26:09.704  [2024-12-09 04:16:38.175975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.705  [2024-12-09 04:16:38.176066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.705  [2024-12-09 04:16:38.176090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.705  [2024-12-09 04:16:38.176105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.705  [2024-12-09 04:16:38.176117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.705  [2024-12-09 04:16:38.176146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.705  qpair failed and we were unable to recover it.
00:26:09.705  [2024-12-09 04:16:38.186018] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.705  [2024-12-09 04:16:38.186103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.705  [2024-12-09 04:16:38.186128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.705  [2024-12-09 04:16:38.186142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.705  [2024-12-09 04:16:38.186154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.705  [2024-12-09 04:16:38.186184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.705  qpair failed and we were unable to recover it.
00:26:09.705  [2024-12-09 04:16:38.196012] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.705  [2024-12-09 04:16:38.196087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.705  [2024-12-09 04:16:38.196112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.705  [2024-12-09 04:16:38.196126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.705  [2024-12-09 04:16:38.196138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.705  [2024-12-09 04:16:38.196167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.705  qpair failed and we were unable to recover it.
00:26:09.705  [2024-12-09 04:16:38.206048] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.705  [2024-12-09 04:16:38.206126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.705  [2024-12-09 04:16:38.206157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.705  [2024-12-09 04:16:38.206172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.705  [2024-12-09 04:16:38.206189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.705  [2024-12-09 04:16:38.206220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.705  qpair failed and we were unable to recover it.
00:26:09.705  [2024-12-09 04:16:38.216115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.705  [2024-12-09 04:16:38.216210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.705  [2024-12-09 04:16:38.216240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.705  [2024-12-09 04:16:38.216254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.705  [2024-12-09 04:16:38.216267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.705  [2024-12-09 04:16:38.216306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.705  qpair failed and we were unable to recover it.
00:26:09.705  [2024-12-09 04:16:38.226206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.705  [2024-12-09 04:16:38.226310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.705  [2024-12-09 04:16:38.226336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.705  [2024-12-09 04:16:38.226351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.705  [2024-12-09 04:16:38.226363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.705  [2024-12-09 04:16:38.226393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.705  qpair failed and we were unable to recover it.
00:26:09.705  [2024-12-09 04:16:38.236150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.705  [2024-12-09 04:16:38.236236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.705  [2024-12-09 04:16:38.236262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.705  [2024-12-09 04:16:38.236287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.705  [2024-12-09 04:16:38.236302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.705  [2024-12-09 04:16:38.236332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.705  qpair failed and we were unable to recover it.
00:26:09.705  [2024-12-09 04:16:38.246197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.705  [2024-12-09 04:16:38.246286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.705  [2024-12-09 04:16:38.246312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.705  [2024-12-09 04:16:38.246326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.705  [2024-12-09 04:16:38.246337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.705  [2024-12-09 04:16:38.246367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.705  qpair failed and we were unable to recover it.
00:26:09.705  [2024-12-09 04:16:38.256312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.705  [2024-12-09 04:16:38.256399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.705  [2024-12-09 04:16:38.256424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.705  [2024-12-09 04:16:38.256437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.705  [2024-12-09 04:16:38.256450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.705  [2024-12-09 04:16:38.256480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.705  qpair failed and we were unable to recover it.
00:26:09.705  [2024-12-09 04:16:38.266226] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.705  [2024-12-09 04:16:38.266323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.705  [2024-12-09 04:16:38.266349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.705  [2024-12-09 04:16:38.266364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.705  [2024-12-09 04:16:38.266377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.705  [2024-12-09 04:16:38.266406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.705  qpair failed and we were unable to recover it.
00:26:09.705  [2024-12-09 04:16:38.276262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.705  [2024-12-09 04:16:38.276357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.705  [2024-12-09 04:16:38.276382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.705  [2024-12-09 04:16:38.276396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.705  [2024-12-09 04:16:38.276408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.705  [2024-12-09 04:16:38.276438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.705  qpair failed and we were unable to recover it.
00:26:09.964  [2024-12-09 04:16:38.286266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.964  [2024-12-09 04:16:38.286358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.964  [2024-12-09 04:16:38.286383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.964  [2024-12-09 04:16:38.286397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.964  [2024-12-09 04:16:38.286409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.964  [2024-12-09 04:16:38.286440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.964  qpair failed and we were unable to recover it.
00:26:09.964  [2024-12-09 04:16:38.296334] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.964  [2024-12-09 04:16:38.296428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.964  [2024-12-09 04:16:38.296452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.964  [2024-12-09 04:16:38.296466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.964  [2024-12-09 04:16:38.296478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.964  [2024-12-09 04:16:38.296508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.964  qpair failed and we were unable to recover it.
00:26:09.964  [2024-12-09 04:16:38.306338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.964  [2024-12-09 04:16:38.306426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.964  [2024-12-09 04:16:38.306452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.964  [2024-12-09 04:16:38.306467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.964  [2024-12-09 04:16:38.306479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.964  [2024-12-09 04:16:38.306509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.964  qpair failed and we were unable to recover it.
00:26:09.964  [2024-12-09 04:16:38.316369] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.964  [2024-12-09 04:16:38.316453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.964  [2024-12-09 04:16:38.316478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.964  [2024-12-09 04:16:38.316493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.964  [2024-12-09 04:16:38.316505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.964  [2024-12-09 04:16:38.316535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.964  qpair failed and we were unable to recover it.
00:26:09.964  [2024-12-09 04:16:38.326396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.964  [2024-12-09 04:16:38.326489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.964  [2024-12-09 04:16:38.326515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.964  [2024-12-09 04:16:38.326529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.964  [2024-12-09 04:16:38.326542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.964  [2024-12-09 04:16:38.326571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.964  qpair failed and we were unable to recover it.
00:26:09.964  [2024-12-09 04:16:38.336428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.964  [2024-12-09 04:16:38.336533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.964  [2024-12-09 04:16:38.336558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.964  [2024-12-09 04:16:38.336581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.964  [2024-12-09 04:16:38.336595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.965  [2024-12-09 04:16:38.336624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.965  qpair failed and we were unable to recover it.
00:26:09.965  [2024-12-09 04:16:38.346468] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.965  [2024-12-09 04:16:38.346560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.965  [2024-12-09 04:16:38.346585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.965  [2024-12-09 04:16:38.346599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.965  [2024-12-09 04:16:38.346612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.965  [2024-12-09 04:16:38.346641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.965  qpair failed and we were unable to recover it.
00:26:09.965  [2024-12-09 04:16:38.356490] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.965  [2024-12-09 04:16:38.356573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.965  [2024-12-09 04:16:38.356599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.965  [2024-12-09 04:16:38.356613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.965  [2024-12-09 04:16:38.356625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.965  [2024-12-09 04:16:38.356655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.965  qpair failed and we were unable to recover it.
00:26:09.965  [2024-12-09 04:16:38.366542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.965  [2024-12-09 04:16:38.366626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.965  [2024-12-09 04:16:38.366651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.965  [2024-12-09 04:16:38.366665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.965  [2024-12-09 04:16:38.366677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.965  [2024-12-09 04:16:38.366707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.965  qpair failed and we were unable to recover it.
00:26:09.965  [2024-12-09 04:16:38.376546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.965  [2024-12-09 04:16:38.376632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.965  [2024-12-09 04:16:38.376656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.965  [2024-12-09 04:16:38.376670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.965  [2024-12-09 04:16:38.376682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.965  [2024-12-09 04:16:38.376717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.965  qpair failed and we were unable to recover it.
00:26:09.965  [2024-12-09 04:16:38.386640] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.965  [2024-12-09 04:16:38.386751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.965  [2024-12-09 04:16:38.386776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.965  [2024-12-09 04:16:38.386791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.965  [2024-12-09 04:16:38.386803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.965  [2024-12-09 04:16:38.386832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.965  qpair failed and we were unable to recover it.
00:26:09.965  [2024-12-09 04:16:38.396679] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.965  [2024-12-09 04:16:38.396780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.965  [2024-12-09 04:16:38.396806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.965  [2024-12-09 04:16:38.396821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.965  [2024-12-09 04:16:38.396833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.965  [2024-12-09 04:16:38.396863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.965  qpair failed and we were unable to recover it.
00:26:09.965  [2024-12-09 04:16:38.406713] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.965  [2024-12-09 04:16:38.406810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.965  [2024-12-09 04:16:38.406836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.965  [2024-12-09 04:16:38.406850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.965  [2024-12-09 04:16:38.406863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.965  [2024-12-09 04:16:38.406893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.965  qpair failed and we were unable to recover it.
00:26:09.965  [2024-12-09 04:16:38.416677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.965  [2024-12-09 04:16:38.416768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.965  [2024-12-09 04:16:38.416793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.965  [2024-12-09 04:16:38.416806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.965  [2024-12-09 04:16:38.416818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.965  [2024-12-09 04:16:38.416848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.965  qpair failed and we were unable to recover it.
00:26:09.965  [2024-12-09 04:16:38.426692] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.965  [2024-12-09 04:16:38.426782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.965  [2024-12-09 04:16:38.426806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.965  [2024-12-09 04:16:38.426820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.965  [2024-12-09 04:16:38.426833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:09.965  [2024-12-09 04:16:38.426862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:09.965  qpair failed and we were unable to recover it.
00:26:10.227  (the seven-line CONNECT failure sequence above repeated 32 more times between 04:16:38.436731 and 04:16:38.747793, each with identical errors: Unknown controller ID 0x1, Connect command failed rc -5 with sct 1, sc 130, and CQ transport error -6 on tqpair=0x7fe5b4000b90, qpair id 4)
00:26:10.227  [2024-12-09 04:16:38.757623] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.227  [2024-12-09 04:16:38.757724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.227  [2024-12-09 04:16:38.757749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.227  [2024-12-09 04:16:38.757764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.227  [2024-12-09 04:16:38.757777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:10.227  [2024-12-09 04:16:38.757808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:10.227  qpair failed and we were unable to recover it.
00:26:10.227  [2024-12-09 04:16:38.767690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.227  [2024-12-09 04:16:38.767793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.227  [2024-12-09 04:16:38.767819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.227  [2024-12-09 04:16:38.767833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.227  [2024-12-09 04:16:38.767853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:10.227  [2024-12-09 04:16:38.767885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:10.227  qpair failed and we were unable to recover it.
00:26:10.227  [2024-12-09 04:16:38.777672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.227  [2024-12-09 04:16:38.777792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.227  [2024-12-09 04:16:38.777818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.227  [2024-12-09 04:16:38.777833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.227  [2024-12-09 04:16:38.777847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:10.227  [2024-12-09 04:16:38.777878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:10.227  qpair failed and we were unable to recover it.
00:26:10.227  [2024-12-09 04:16:38.787781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.227  [2024-12-09 04:16:38.787863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.227  [2024-12-09 04:16:38.787890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.227  [2024-12-09 04:16:38.787905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.227  [2024-12-09 04:16:38.787918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:10.227  [2024-12-09 04:16:38.787951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:10.227  qpair failed and we were unable to recover it.
00:26:10.227  [2024-12-09 04:16:38.797759] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.227  [2024-12-09 04:16:38.797853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.227  [2024-12-09 04:16:38.797879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.227  [2024-12-09 04:16:38.797894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.228  [2024-12-09 04:16:38.797907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:10.228  [2024-12-09 04:16:38.797938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:10.228  qpair failed and we were unable to recover it.
00:26:10.487  [2024-12-09 04:16:38.807772] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.487  [2024-12-09 04:16:38.807861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.487  [2024-12-09 04:16:38.807887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.487  [2024-12-09 04:16:38.807903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.487  [2024-12-09 04:16:38.807915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:10.487  [2024-12-09 04:16:38.807946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:10.487  qpair failed and we were unable to recover it.
00:26:10.487  [2024-12-09 04:16:38.817824] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.487  [2024-12-09 04:16:38.817914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.487  [2024-12-09 04:16:38.817940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.487  [2024-12-09 04:16:38.817956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.487  [2024-12-09 04:16:38.817969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:10.487  [2024-12-09 04:16:38.817999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:10.487  qpair failed and we were unable to recover it.
00:26:10.487  [2024-12-09 04:16:38.827820] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.487  [2024-12-09 04:16:38.827910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.487  [2024-12-09 04:16:38.827936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.487  [2024-12-09 04:16:38.827951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.487  [2024-12-09 04:16:38.827964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:10.487  [2024-12-09 04:16:38.827994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:10.487  qpair failed and we were unable to recover it.
00:26:10.487  [2024-12-09 04:16:38.837878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.487  [2024-12-09 04:16:38.837963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.487  [2024-12-09 04:16:38.837988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.487  [2024-12-09 04:16:38.838004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.487  [2024-12-09 04:16:38.838017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:10.487  [2024-12-09 04:16:38.838048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:10.487  qpair failed and we were unable to recover it.
00:26:10.487  [2024-12-09 04:16:38.847938] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.487  [2024-12-09 04:16:38.848048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.487  [2024-12-09 04:16:38.848073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.487  [2024-12-09 04:16:38.848088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.487  [2024-12-09 04:16:38.848101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:10.487  [2024-12-09 04:16:38.848132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:10.487  qpair failed and we were unable to recover it.
00:26:10.487  [2024-12-09 04:16:38.858055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.487  [2024-12-09 04:16:38.858154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.487  [2024-12-09 04:16:38.858181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.487  [2024-12-09 04:16:38.858196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.487  [2024-12-09 04:16:38.858209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:10.487  [2024-12-09 04:16:38.858241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:10.487  qpair failed and we were unable to recover it.
00:26:10.487  [2024-12-09 04:16:38.867946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.487  [2024-12-09 04:16:38.868070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.487  [2024-12-09 04:16:38.868096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.487  [2024-12-09 04:16:38.868112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.487  [2024-12-09 04:16:38.868125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:10.487  [2024-12-09 04:16:38.868156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:10.487  qpair failed and we were unable to recover it.
00:26:10.487  [2024-12-09 04:16:38.877989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.487  [2024-12-09 04:16:38.878072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.487  [2024-12-09 04:16:38.878098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.487  [2024-12-09 04:16:38.878114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.487  [2024-12-09 04:16:38.878126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:10.487  [2024-12-09 04:16:38.878157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:10.487  qpair failed and we were unable to recover it.
00:26:10.487  [2024-12-09 04:16:38.887988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.487  [2024-12-09 04:16:38.888074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.487  [2024-12-09 04:16:38.888101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.488  [2024-12-09 04:16:38.888116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.488  [2024-12-09 04:16:38.888129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:10.488  [2024-12-09 04:16:38.888160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:10.488  qpair failed and we were unable to recover it.
00:26:10.488  [2024-12-09 04:16:38.898138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.488  [2024-12-09 04:16:38.898276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.488  [2024-12-09 04:16:38.898303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.488  [2024-12-09 04:16:38.898332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.488  [2024-12-09 04:16:38.898346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:10.488  [2024-12-09 04:16:38.898378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:10.488  qpair failed and we were unable to recover it.
00:26:10.488  [2024-12-09 04:16:38.908072] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.488  [2024-12-09 04:16:38.908170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.488  [2024-12-09 04:16:38.908196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.488  [2024-12-09 04:16:38.908210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.488  [2024-12-09 04:16:38.908224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:10.488  [2024-12-09 04:16:38.908255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:10.488  qpair failed and we were unable to recover it.
00:26:10.488  [2024-12-09 04:16:38.918079] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.488  [2024-12-09 04:16:38.918165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.488  [2024-12-09 04:16:38.918192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.488  [2024-12-09 04:16:38.918207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.488  [2024-12-09 04:16:38.918220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:10.488  [2024-12-09 04:16:38.918250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:10.488  qpair failed and we were unable to recover it.
00:26:10.488  [2024-12-09 04:16:38.928112] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.488  [2024-12-09 04:16:38.928238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.488  [2024-12-09 04:16:38.928264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.488  [2024-12-09 04:16:38.928287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.488  [2024-12-09 04:16:38.928302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:10.488  [2024-12-09 04:16:38.928334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:10.488  qpair failed and we were unable to recover it.
00:26:10.488  [2024-12-09 04:16:38.938132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.488  [2024-12-09 04:16:38.938226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.488  [2024-12-09 04:16:38.938252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.488  [2024-12-09 04:16:38.938266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.488  [2024-12-09 04:16:38.938289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:10.488  [2024-12-09 04:16:38.938326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:10.488  qpair failed and we were unable to recover it.
00:26:10.488  [2024-12-09 04:16:38.948232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.488  [2024-12-09 04:16:38.948331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.488  [2024-12-09 04:16:38.948357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.488  [2024-12-09 04:16:38.948372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.488  [2024-12-09 04:16:38.948386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:10.488  [2024-12-09 04:16:38.948417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:10.488  qpair failed and we were unable to recover it.
00:26:10.488  [2024-12-09 04:16:38.958181] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.488  [2024-12-09 04:16:38.958282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.488  [2024-12-09 04:16:38.958309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.488  [2024-12-09 04:16:38.958324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.488  [2024-12-09 04:16:38.958337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:10.488  [2024-12-09 04:16:38.958369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:10.488  qpair failed and we were unable to recover it.
00:26:10.488  [2024-12-09 04:16:38.968192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.488  [2024-12-09 04:16:38.968323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.488  [2024-12-09 04:16:38.968349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.488  [2024-12-09 04:16:38.968365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.488  [2024-12-09 04:16:38.968378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:10.488  [2024-12-09 04:16:38.968409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:10.488  qpair failed and we were unable to recover it.
00:26:10.488  [2024-12-09 04:16:38.978261] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.488  [2024-12-09 04:16:38.978383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.488  [2024-12-09 04:16:38.978409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.488  [2024-12-09 04:16:38.978424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.488  [2024-12-09 04:16:38.978438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:10.488  [2024-12-09 04:16:38.978469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:10.488  qpair failed and we were unable to recover it.
00:26:10.488  [2024-12-09 04:16:38.988296] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.488  [2024-12-09 04:16:38.988390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.488  [2024-12-09 04:16:38.988416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.488  [2024-12-09 04:16:38.988431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.488  [2024-12-09 04:16:38.988444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:10.488  [2024-12-09 04:16:38.988476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:10.488  qpair failed and we were unable to recover it.
00:26:10.488  [2024-12-09 04:16:38.998337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.488  [2024-12-09 04:16:38.998453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.488  [2024-12-09 04:16:38.998479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.488  [2024-12-09 04:16:38.998494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.488  [2024-12-09 04:16:38.998508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:10.488  [2024-12-09 04:16:38.998539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:10.488  qpair failed and we were unable to recover it.
00:26:10.488  [2024-12-09 04:16:39.008323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.488  [2024-12-09 04:16:39.008408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.488  [2024-12-09 04:16:39.008434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.488  [2024-12-09 04:16:39.008450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.488  [2024-12-09 04:16:39.008463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:10.488  [2024-12-09 04:16:39.008494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:10.488  qpair failed and we were unable to recover it.
00:26:10.488  [2024-12-09 04:16:39.018398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.488  [2024-12-09 04:16:39.018505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.489  [2024-12-09 04:16:39.018531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.489  [2024-12-09 04:16:39.018546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.489  [2024-12-09 04:16:39.018559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:10.489  [2024-12-09 04:16:39.018591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:10.489  qpair failed and we were unable to recover it.
00:26:10.489  [2024-12-09 04:16:39.028373] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.489  [2024-12-09 04:16:39.028458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.489  [2024-12-09 04:16:39.028489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.489  [2024-12-09 04:16:39.028505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.489  [2024-12-09 04:16:39.028518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:10.489  [2024-12-09 04:16:39.028549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:10.489  qpair failed and we were unable to recover it.
00:26:10.489  [2024-12-09 04:16:39.038455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.489  [2024-12-09 04:16:39.038536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.489  [2024-12-09 04:16:39.038562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.489  [2024-12-09 04:16:39.038578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.489  [2024-12-09 04:16:39.038591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:10.489  [2024-12-09 04:16:39.038622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:10.489  qpair failed and we were unable to recover it.
00:26:10.489  [2024-12-09 04:16:39.048479 .. 04:16:39.359662] (the seven-line CONNECT failure sequence above repeats 32 more times at ~10 ms intervals with identical content: Unknown controller ID 0x1, Connect command rc -5 sct 1 sc 130, CQ transport error -6 on qpair id 4, tqpair=0x7fe5b4000b90, qpair unrecoverable)
00:26:11.010  [2024-12-09 04:16:39.369354] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.010  [2024-12-09 04:16:39.369447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.010  [2024-12-09 04:16:39.369472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.010  [2024-12-09 04:16:39.369487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.010  [2024-12-09 04:16:39.369500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.010  [2024-12-09 04:16:39.369531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.010  qpair failed and we were unable to recover it.
00:26:11.010  [2024-12-09 04:16:39.379441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.010  [2024-12-09 04:16:39.379549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.010  [2024-12-09 04:16:39.379575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.010  [2024-12-09 04:16:39.379590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.010  [2024-12-09 04:16:39.379603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.010  [2024-12-09 04:16:39.379634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.010  qpair failed and we were unable to recover it.
00:26:11.010  [2024-12-09 04:16:39.389402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.010  [2024-12-09 04:16:39.389506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.010  [2024-12-09 04:16:39.389532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.010  [2024-12-09 04:16:39.389547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.010  [2024-12-09 04:16:39.389561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.010  [2024-12-09 04:16:39.389592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.010  qpair failed and we were unable to recover it.
00:26:11.010  [2024-12-09 04:16:39.399522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.010  [2024-12-09 04:16:39.399620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.010  [2024-12-09 04:16:39.399646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.010  [2024-12-09 04:16:39.399661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.010  [2024-12-09 04:16:39.399675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.010  [2024-12-09 04:16:39.399706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.010  qpair failed and we were unable to recover it.
00:26:11.010  [2024-12-09 04:16:39.409456] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.010  [2024-12-09 04:16:39.409545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.010  [2024-12-09 04:16:39.409571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.010  [2024-12-09 04:16:39.409586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.010  [2024-12-09 04:16:39.409600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.010  [2024-12-09 04:16:39.409631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.010  qpair failed and we were unable to recover it.
00:26:11.010  [2024-12-09 04:16:39.419504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.010  [2024-12-09 04:16:39.419604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.010  [2024-12-09 04:16:39.419631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.010  [2024-12-09 04:16:39.419646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.010  [2024-12-09 04:16:39.419659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.010  [2024-12-09 04:16:39.419690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.010  qpair failed and we were unable to recover it.
00:26:11.010  [2024-12-09 04:16:39.429555] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.010  [2024-12-09 04:16:39.429647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.010  [2024-12-09 04:16:39.429673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.010  [2024-12-09 04:16:39.429688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.010  [2024-12-09 04:16:39.429701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.010  [2024-12-09 04:16:39.429732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.010  qpair failed and we were unable to recover it.
00:26:11.010  [2024-12-09 04:16:39.439552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.010  [2024-12-09 04:16:39.439636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.010  [2024-12-09 04:16:39.439662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.010  [2024-12-09 04:16:39.439678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.010  [2024-12-09 04:16:39.439691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.010  [2024-12-09 04:16:39.439721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.010  qpair failed and we were unable to recover it.
00:26:11.010  [2024-12-09 04:16:39.449585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.010  [2024-12-09 04:16:39.449705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.010  [2024-12-09 04:16:39.449731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.010  [2024-12-09 04:16:39.449746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.010  [2024-12-09 04:16:39.449760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.010  [2024-12-09 04:16:39.449790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.010  qpair failed and we were unable to recover it.
00:26:11.010  [2024-12-09 04:16:39.459741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.010  [2024-12-09 04:16:39.459873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.010  [2024-12-09 04:16:39.459899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.010  [2024-12-09 04:16:39.459920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.010  [2024-12-09 04:16:39.459935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.010  [2024-12-09 04:16:39.459967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.010  qpair failed and we were unable to recover it.
00:26:11.010  [2024-12-09 04:16:39.469681] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.010  [2024-12-09 04:16:39.469780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.010  [2024-12-09 04:16:39.469807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.010  [2024-12-09 04:16:39.469822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.010  [2024-12-09 04:16:39.469835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.011  [2024-12-09 04:16:39.469866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.011  qpair failed and we were unable to recover it.
00:26:11.011  [2024-12-09 04:16:39.479660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.011  [2024-12-09 04:16:39.479746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.011  [2024-12-09 04:16:39.479772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.011  [2024-12-09 04:16:39.479788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.011  [2024-12-09 04:16:39.479801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.011  [2024-12-09 04:16:39.479832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.011  qpair failed and we were unable to recover it.
00:26:11.011  [2024-12-09 04:16:39.489678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.011  [2024-12-09 04:16:39.489794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.011  [2024-12-09 04:16:39.489821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.011  [2024-12-09 04:16:39.489836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.011  [2024-12-09 04:16:39.489849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.011  [2024-12-09 04:16:39.489880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.011  qpair failed and we were unable to recover it.
00:26:11.011  [2024-12-09 04:16:39.499811] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.011  [2024-12-09 04:16:39.499916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.011  [2024-12-09 04:16:39.499963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.011  [2024-12-09 04:16:39.499980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.011  [2024-12-09 04:16:39.499994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.011  [2024-12-09 04:16:39.500046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.011  qpair failed and we were unable to recover it.
00:26:11.011  [2024-12-09 04:16:39.509723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.011  [2024-12-09 04:16:39.509807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.011  [2024-12-09 04:16:39.509834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.011  [2024-12-09 04:16:39.509850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.011  [2024-12-09 04:16:39.509863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.011  [2024-12-09 04:16:39.509894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.011  qpair failed and we were unable to recover it.
00:26:11.011  [2024-12-09 04:16:39.519785] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.011  [2024-12-09 04:16:39.519874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.011  [2024-12-09 04:16:39.519901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.011  [2024-12-09 04:16:39.519916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.011  [2024-12-09 04:16:39.519930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.011  [2024-12-09 04:16:39.519961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.011  qpair failed and we were unable to recover it.
00:26:11.011  [2024-12-09 04:16:39.529780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.011  [2024-12-09 04:16:39.529910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.011  [2024-12-09 04:16:39.529935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.011  [2024-12-09 04:16:39.529951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.011  [2024-12-09 04:16:39.529964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.011  [2024-12-09 04:16:39.529995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.011  qpair failed and we were unable to recover it.
00:26:11.011  [2024-12-09 04:16:39.539913] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.011  [2024-12-09 04:16:39.540008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.011  [2024-12-09 04:16:39.540034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.011  [2024-12-09 04:16:39.540049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.011  [2024-12-09 04:16:39.540063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.011  [2024-12-09 04:16:39.540096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.011  qpair failed and we were unable to recover it.
00:26:11.011  [2024-12-09 04:16:39.549978] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.011  [2024-12-09 04:16:39.550109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.011  [2024-12-09 04:16:39.550135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.011  [2024-12-09 04:16:39.550151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.011  [2024-12-09 04:16:39.550165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.011  [2024-12-09 04:16:39.550198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.011  qpair failed and we were unable to recover it.
00:26:11.011  [2024-12-09 04:16:39.559928] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.011  [2024-12-09 04:16:39.560016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.011  [2024-12-09 04:16:39.560042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.011  [2024-12-09 04:16:39.560057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.011  [2024-12-09 04:16:39.560071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.011  [2024-12-09 04:16:39.560103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.011  qpair failed and we were unable to recover it.
00:26:11.011  [2024-12-09 04:16:39.569890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.011  [2024-12-09 04:16:39.569973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.011  [2024-12-09 04:16:39.569999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.011  [2024-12-09 04:16:39.570014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.011  [2024-12-09 04:16:39.570026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.011  [2024-12-09 04:16:39.570057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.011  qpair failed and we were unable to recover it.
00:26:11.011  [2024-12-09 04:16:39.579918] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.011  [2024-12-09 04:16:39.580010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.011  [2024-12-09 04:16:39.580035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.011  [2024-12-09 04:16:39.580050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.011  [2024-12-09 04:16:39.580063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.011  [2024-12-09 04:16:39.580094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.011  qpair failed and we were unable to recover it.
00:26:11.271  [2024-12-09 04:16:39.589962] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.271  [2024-12-09 04:16:39.590052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.271  [2024-12-09 04:16:39.590082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.271  [2024-12-09 04:16:39.590098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.271  [2024-12-09 04:16:39.590112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.271  [2024-12-09 04:16:39.590144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.271  qpair failed and we were unable to recover it.
00:26:11.271  [2024-12-09 04:16:39.600069] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.271  [2024-12-09 04:16:39.600158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.271  [2024-12-09 04:16:39.600183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.271  [2024-12-09 04:16:39.600199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.271  [2024-12-09 04:16:39.600211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.271  [2024-12-09 04:16:39.600244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.271  qpair failed and we were unable to recover it.
00:26:11.271  [2024-12-09 04:16:39.610029] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.271  [2024-12-09 04:16:39.610118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.271  [2024-12-09 04:16:39.610144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.271  [2024-12-09 04:16:39.610159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.271  [2024-12-09 04:16:39.610171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.271  [2024-12-09 04:16:39.610202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.271  qpair failed and we were unable to recover it.
00:26:11.271  [2024-12-09 04:16:39.620165] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.271  [2024-12-09 04:16:39.620263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.271  [2024-12-09 04:16:39.620299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.271  [2024-12-09 04:16:39.620316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.271  [2024-12-09 04:16:39.620329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.271  [2024-12-09 04:16:39.620360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.271  qpair failed and we were unable to recover it.
00:26:11.271  [2024-12-09 04:16:39.630172] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.271  [2024-12-09 04:16:39.630263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.271  [2024-12-09 04:16:39.630297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.271  [2024-12-09 04:16:39.630313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.271  [2024-12-09 04:16:39.630326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.271  [2024-12-09 04:16:39.630365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.271  qpair failed and we were unable to recover it.
00:26:11.271  [2024-12-09 04:16:39.640113] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.271  [2024-12-09 04:16:39.640215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.271  [2024-12-09 04:16:39.640243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.271  [2024-12-09 04:16:39.640258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.271  [2024-12-09 04:16:39.640278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.271  [2024-12-09 04:16:39.640313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.271  qpair failed and we were unable to recover it.
00:26:11.271  [2024-12-09 04:16:39.650173] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.271  [2024-12-09 04:16:39.650257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.271  [2024-12-09 04:16:39.650294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.271  [2024-12-09 04:16:39.650310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.271  [2024-12-09 04:16:39.650323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.271  [2024-12-09 04:16:39.650354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.271  qpair failed and we were unable to recover it.
00:26:11.271  [2024-12-09 04:16:39.660168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.271  [2024-12-09 04:16:39.660257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.271  [2024-12-09 04:16:39.660294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.271  [2024-12-09 04:16:39.660309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.271  [2024-12-09 04:16:39.660323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.271  [2024-12-09 04:16:39.660353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.271  qpair failed and we were unable to recover it.
00:26:11.271  [2024-12-09 04:16:39.670200] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.271  [2024-12-09 04:16:39.670306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.271  [2024-12-09 04:16:39.670332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.271  [2024-12-09 04:16:39.670347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.271  [2024-12-09 04:16:39.670360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.272  [2024-12-09 04:16:39.670390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.272  qpair failed and we were unable to recover it.
00:26:11.272  [2024-12-09 04:16:39.680203] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.272  [2024-12-09 04:16:39.680298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.272  [2024-12-09 04:16:39.680326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.272  [2024-12-09 04:16:39.680342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.272  [2024-12-09 04:16:39.680355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.272  [2024-12-09 04:16:39.680385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.272  qpair failed and we were unable to recover it.
00:26:11.272  [2024-12-09 04:16:39.690265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.272  [2024-12-09 04:16:39.690362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.272  [2024-12-09 04:16:39.690388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.272  [2024-12-09 04:16:39.690403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.272  [2024-12-09 04:16:39.690417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.272  [2024-12-09 04:16:39.690447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.272  qpair failed and we were unable to recover it.
00:26:11.272  [2024-12-09 04:16:39.700311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.272  [2024-12-09 04:16:39.700410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.272  [2024-12-09 04:16:39.700436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.272  [2024-12-09 04:16:39.700450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.272  [2024-12-09 04:16:39.700463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.272  [2024-12-09 04:16:39.700494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.272  qpair failed and we were unable to recover it.
00:26:11.272  [2024-12-09 04:16:39.710315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.272  [2024-12-09 04:16:39.710427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.272  [2024-12-09 04:16:39.710453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.272  [2024-12-09 04:16:39.710467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.272  [2024-12-09 04:16:39.710480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.272  [2024-12-09 04:16:39.710512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.272  qpair failed and we were unable to recover it.
00:26:11.272  [2024-12-09 04:16:39.720342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.272  [2024-12-09 04:16:39.720475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.272  [2024-12-09 04:16:39.720508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.272  [2024-12-09 04:16:39.720525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.272  [2024-12-09 04:16:39.720539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.272  [2024-12-09 04:16:39.720570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.272  qpair failed and we were unable to recover it.
00:26:11.272  [2024-12-09 04:16:39.730372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.272  [2024-12-09 04:16:39.730493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.272  [2024-12-09 04:16:39.730520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.272  [2024-12-09 04:16:39.730536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.272  [2024-12-09 04:16:39.730548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.272  [2024-12-09 04:16:39.730579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.272  qpair failed and we were unable to recover it.
00:26:11.272  [2024-12-09 04:16:39.740411] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.272  [2024-12-09 04:16:39.740508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.272  [2024-12-09 04:16:39.740532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.272  [2024-12-09 04:16:39.740547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.272  [2024-12-09 04:16:39.740559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.272  [2024-12-09 04:16:39.740590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.272  qpair failed and we were unable to recover it.
00:26:11.272  [2024-12-09 04:16:39.750442] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.272  [2024-12-09 04:16:39.750544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.272  [2024-12-09 04:16:39.750569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.272  [2024-12-09 04:16:39.750584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.272  [2024-12-09 04:16:39.750598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.272  [2024-12-09 04:16:39.750628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.272  qpair failed and we were unable to recover it.
00:26:11.272  [2024-12-09 04:16:39.760454] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.272  [2024-12-09 04:16:39.760545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.272  [2024-12-09 04:16:39.760570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.272  [2024-12-09 04:16:39.760585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.272  [2024-12-09 04:16:39.760603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.272  [2024-12-09 04:16:39.760634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.272  qpair failed and we were unable to recover it.
00:26:11.272  [2024-12-09 04:16:39.770540] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.272  [2024-12-09 04:16:39.770625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.272  [2024-12-09 04:16:39.770650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.272  [2024-12-09 04:16:39.770665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.272  [2024-12-09 04:16:39.770678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.272  [2024-12-09 04:16:39.770708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.272  qpair failed and we were unable to recover it.
00:26:11.272  [2024-12-09 04:16:39.780534] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.272  [2024-12-09 04:16:39.780627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.272  [2024-12-09 04:16:39.780651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.272  [2024-12-09 04:16:39.780666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.272  [2024-12-09 04:16:39.780679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.272  [2024-12-09 04:16:39.780709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.272  qpair failed and we were unable to recover it.
00:26:11.272  [2024-12-09 04:16:39.790551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.272  [2024-12-09 04:16:39.790636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.272  [2024-12-09 04:16:39.790661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.272  [2024-12-09 04:16:39.790675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.272  [2024-12-09 04:16:39.790688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.272  [2024-12-09 04:16:39.790718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.272  qpair failed and we were unable to recover it.
00:26:11.272  [2024-12-09 04:16:39.800557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.272  [2024-12-09 04:16:39.800648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.272  [2024-12-09 04:16:39.800673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.272  [2024-12-09 04:16:39.800688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.272  [2024-12-09 04:16:39.800701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.273  [2024-12-09 04:16:39.800731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.273  qpair failed and we were unable to recover it.
00:26:11.273  [2024-12-09 04:16:39.810616] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.273  [2024-12-09 04:16:39.810703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.273  [2024-12-09 04:16:39.810729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.273  [2024-12-09 04:16:39.810744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.273  [2024-12-09 04:16:39.810757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.273  [2024-12-09 04:16:39.810788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.273  qpair failed and we were unable to recover it.
00:26:11.273  [2024-12-09 04:16:39.820670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.273  [2024-12-09 04:16:39.820762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.273  [2024-12-09 04:16:39.820787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.273  [2024-12-09 04:16:39.820803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.273  [2024-12-09 04:16:39.820816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.273  [2024-12-09 04:16:39.820847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.273  qpair failed and we were unable to recover it.
00:26:11.273  [2024-12-09 04:16:39.830665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.273  [2024-12-09 04:16:39.830752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.273  [2024-12-09 04:16:39.830777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.273  [2024-12-09 04:16:39.830792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.273  [2024-12-09 04:16:39.830805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.273  [2024-12-09 04:16:39.830836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.273  qpair failed and we were unable to recover it.
00:26:11.273  [2024-12-09 04:16:39.840699] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.273  [2024-12-09 04:16:39.840826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.273  [2024-12-09 04:16:39.840854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.273  [2024-12-09 04:16:39.840869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.273  [2024-12-09 04:16:39.840882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.273  [2024-12-09 04:16:39.840913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.273  qpair failed and we were unable to recover it.
00:26:11.532  [2024-12-09 04:16:39.850753] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.532  [2024-12-09 04:16:39.850848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.532  [2024-12-09 04:16:39.850878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.532  [2024-12-09 04:16:39.850894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.532  [2024-12-09 04:16:39.850907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.532  [2024-12-09 04:16:39.850939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.532  qpair failed and we were unable to recover it.
00:26:11.532  [2024-12-09 04:16:39.860771] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.532  [2024-12-09 04:16:39.860865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.532  [2024-12-09 04:16:39.860890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.532  [2024-12-09 04:16:39.860905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.532  [2024-12-09 04:16:39.860918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.533  [2024-12-09 04:16:39.860948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.533  qpair failed and we were unable to recover it.
00:26:11.533  [2024-12-09 04:16:39.870782] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.533  [2024-12-09 04:16:39.870912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.533  [2024-12-09 04:16:39.870939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.533  [2024-12-09 04:16:39.870954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.533  [2024-12-09 04:16:39.870967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.533  [2024-12-09 04:16:39.870998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.533  qpair failed and we were unable to recover it.
00:26:11.533  [2024-12-09 04:16:39.880858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.533  [2024-12-09 04:16:39.880947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.533  [2024-12-09 04:16:39.880971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.533  [2024-12-09 04:16:39.880987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.533  [2024-12-09 04:16:39.881000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.533  [2024-12-09 04:16:39.881030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.533  qpair failed and we were unable to recover it.
00:26:11.533  [2024-12-09 04:16:39.890919] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.533  [2024-12-09 04:16:39.891057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.533  [2024-12-09 04:16:39.891084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.533  [2024-12-09 04:16:39.891105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.533  [2024-12-09 04:16:39.891119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.533  [2024-12-09 04:16:39.891150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.533  qpair failed and we were unable to recover it.
00:26:11.533  [2024-12-09 04:16:39.900962] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.533  [2024-12-09 04:16:39.901055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.533  [2024-12-09 04:16:39.901080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.533  [2024-12-09 04:16:39.901094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.533  [2024-12-09 04:16:39.901107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.533  [2024-12-09 04:16:39.901140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.533  qpair failed and we were unable to recover it.
00:26:11.533  [2024-12-09 04:16:39.910942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.533  [2024-12-09 04:16:39.911045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.533  [2024-12-09 04:16:39.911072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.533  [2024-12-09 04:16:39.911087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.533  [2024-12-09 04:16:39.911100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.533  [2024-12-09 04:16:39.911131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.533  qpair failed and we were unable to recover it.
00:26:11.533  [2024-12-09 04:16:39.920956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.533  [2024-12-09 04:16:39.921075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.533  [2024-12-09 04:16:39.921103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.533  [2024-12-09 04:16:39.921118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.533  [2024-12-09 04:16:39.921131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.533  [2024-12-09 04:16:39.921161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.533  qpair failed and we were unable to recover it.
00:26:11.533  [2024-12-09 04:16:39.930936] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.533  [2024-12-09 04:16:39.931034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.533  [2024-12-09 04:16:39.931059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.533  [2024-12-09 04:16:39.931074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.533  [2024-12-09 04:16:39.931087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.533  [2024-12-09 04:16:39.931117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.533  qpair failed and we were unable to recover it.
00:26:11.533  [2024-12-09 04:16:39.941019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.533  [2024-12-09 04:16:39.941110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.533  [2024-12-09 04:16:39.941136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.533  [2024-12-09 04:16:39.941151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.533  [2024-12-09 04:16:39.941164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.533  [2024-12-09 04:16:39.941194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.533  qpair failed and we were unable to recover it.
00:26:11.533  [2024-12-09 04:16:39.950991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.533  [2024-12-09 04:16:39.951103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.533  [2024-12-09 04:16:39.951130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.533  [2024-12-09 04:16:39.951145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.533  [2024-12-09 04:16:39.951158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.533  [2024-12-09 04:16:39.951189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.533  qpair failed and we were unable to recover it.
00:26:11.533  [2024-12-09 04:16:39.961020] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.533  [2024-12-09 04:16:39.961104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.533  [2024-12-09 04:16:39.961129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.533  [2024-12-09 04:16:39.961143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.533  [2024-12-09 04:16:39.961156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.533  [2024-12-09 04:16:39.961187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.533  qpair failed and we were unable to recover it.
00:26:11.533  [2024-12-09 04:16:39.971048] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.533  [2024-12-09 04:16:39.971133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.533  [2024-12-09 04:16:39.971159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.533  [2024-12-09 04:16:39.971174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.533  [2024-12-09 04:16:39.971188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.533  [2024-12-09 04:16:39.971218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.533  qpair failed and we were unable to recover it.
00:26:11.533  [2024-12-09 04:16:39.981098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.533  [2024-12-09 04:16:39.981224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.533  [2024-12-09 04:16:39.981252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.533  [2024-12-09 04:16:39.981268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.533  [2024-12-09 04:16:39.981289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.533  [2024-12-09 04:16:39.981321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.533  qpair failed and we were unable to recover it.
00:26:11.533  [2024-12-09 04:16:39.991144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.533  [2024-12-09 04:16:39.991288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.533  [2024-12-09 04:16:39.991316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.533  [2024-12-09 04:16:39.991332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.533  [2024-12-09 04:16:39.991345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.534  [2024-12-09 04:16:39.991376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.534  qpair failed and we were unable to recover it.
00:26:11.534  [2024-12-09 04:16:40.001142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.534  [2024-12-09 04:16:40.001238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.534  [2024-12-09 04:16:40.001263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.534  [2024-12-09 04:16:40.001287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.534  [2024-12-09 04:16:40.001302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.534  [2024-12-09 04:16:40.001332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.534  qpair failed and we were unable to recover it.
00:26:11.534  [2024-12-09 04:16:40.011194] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.534  [2024-12-09 04:16:40.011289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.534  [2024-12-09 04:16:40.011316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.534  [2024-12-09 04:16:40.011331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.534  [2024-12-09 04:16:40.011344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.534  [2024-12-09 04:16:40.011375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.534  qpair failed and we were unable to recover it.
00:26:11.534  [2024-12-09 04:16:40.021344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.534  [2024-12-09 04:16:40.021456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.534  [2024-12-09 04:16:40.021485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.534  [2024-12-09 04:16:40.021507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.534  [2024-12-09 04:16:40.021520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.534  [2024-12-09 04:16:40.021553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.534  qpair failed and we were unable to recover it.
00:26:11.534  [2024-12-09 04:16:40.031391] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.534  [2024-12-09 04:16:40.031528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.534  [2024-12-09 04:16:40.031556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.534  [2024-12-09 04:16:40.031571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.534  [2024-12-09 04:16:40.031584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.534  [2024-12-09 04:16:40.031618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.534  qpair failed and we were unable to recover it.
00:26:11.534  [2024-12-09 04:16:40.041340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.534  [2024-12-09 04:16:40.041436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.534  [2024-12-09 04:16:40.041461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.534  [2024-12-09 04:16:40.041477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.534  [2024-12-09 04:16:40.041491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.534  [2024-12-09 04:16:40.041521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.534  qpair failed and we were unable to recover it.
00:26:11.534  [2024-12-09 04:16:40.051352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.534  [2024-12-09 04:16:40.051445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.534  [2024-12-09 04:16:40.051469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.534  [2024-12-09 04:16:40.051484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.534  [2024-12-09 04:16:40.051497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.534  [2024-12-09 04:16:40.051528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.534  qpair failed and we were unable to recover it.
00:26:11.534  [2024-12-09 04:16:40.061452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.534  [2024-12-09 04:16:40.061594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.534  [2024-12-09 04:16:40.061622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.534  [2024-12-09 04:16:40.061638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.534  [2024-12-09 04:16:40.061651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.534  [2024-12-09 04:16:40.061691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.534  qpair failed and we were unable to recover it.
00:26:11.534  [2024-12-09 04:16:40.071385] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.534  [2024-12-09 04:16:40.071485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.534  [2024-12-09 04:16:40.071522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.534  [2024-12-09 04:16:40.071548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.534  [2024-12-09 04:16:40.071571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.534  [2024-12-09 04:16:40.071618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.534  qpair failed and we were unable to recover it.
00:26:11.534  [2024-12-09 04:16:40.081423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.534  [2024-12-09 04:16:40.081515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.534  [2024-12-09 04:16:40.081544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.534  [2024-12-09 04:16:40.081560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.534  [2024-12-09 04:16:40.081574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.534  [2024-12-09 04:16:40.081605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.534  qpair failed and we were unable to recover it.
00:26:11.534  [2024-12-09 04:16:40.091426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.534  [2024-12-09 04:16:40.091553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.534  [2024-12-09 04:16:40.091581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.534  [2024-12-09 04:16:40.091598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.534  [2024-12-09 04:16:40.091611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.534  [2024-12-09 04:16:40.091642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.534  qpair failed and we were unable to recover it.
00:26:11.534  [2024-12-09 04:16:40.101441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.534  [2024-12-09 04:16:40.101565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.534  [2024-12-09 04:16:40.101596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.534  [2024-12-09 04:16:40.101621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.534  [2024-12-09 04:16:40.101638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.534  [2024-12-09 04:16:40.101671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.534  qpair failed and we were unable to recover it.
00:26:11.793  [2024-12-09 04:16:40.111483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.793  [2024-12-09 04:16:40.111597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.793  [2024-12-09 04:16:40.111631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.793  [2024-12-09 04:16:40.111652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.793  [2024-12-09 04:16:40.111666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.793  [2024-12-09 04:16:40.111698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.793  qpair failed and we were unable to recover it.
00:26:11.793  [2024-12-09 04:16:40.121538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.793  [2024-12-09 04:16:40.121661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.794  [2024-12-09 04:16:40.121688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.794  [2024-12-09 04:16:40.121703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.794  [2024-12-09 04:16:40.121717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.794  [2024-12-09 04:16:40.121749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.794  qpair failed and we were unable to recover it.
00:26:11.794  [2024-12-09 04:16:40.131565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.794  [2024-12-09 04:16:40.131649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.794  [2024-12-09 04:16:40.131676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.794  [2024-12-09 04:16:40.131690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.794  [2024-12-09 04:16:40.131704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.794  [2024-12-09 04:16:40.131735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.794  qpair failed and we were unable to recover it.
00:26:11.794  [2024-12-09 04:16:40.141619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.794  [2024-12-09 04:16:40.141721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.794  [2024-12-09 04:16:40.141748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.794  [2024-12-09 04:16:40.141764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.794  [2024-12-09 04:16:40.141776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.794  [2024-12-09 04:16:40.141807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.794  qpair failed and we were unable to recover it.
00:26:11.794  [2024-12-09 04:16:40.151644] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.794  [2024-12-09 04:16:40.151736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.794  [2024-12-09 04:16:40.151773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.794  [2024-12-09 04:16:40.151790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.794  [2024-12-09 04:16:40.151803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.794  [2024-12-09 04:16:40.151835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.794  qpair failed and we were unable to recover it.
00:26:11.794  [2024-12-09 04:16:40.161643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.794  [2024-12-09 04:16:40.161737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.794  [2024-12-09 04:16:40.161763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.794  [2024-12-09 04:16:40.161778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.794  [2024-12-09 04:16:40.161791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.794  [2024-12-09 04:16:40.161823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.794  qpair failed and we were unable to recover it.
00:26:11.794  [2024-12-09 04:16:40.171650] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.794  [2024-12-09 04:16:40.171733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.794  [2024-12-09 04:16:40.171759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.794  [2024-12-09 04:16:40.171774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.794  [2024-12-09 04:16:40.171787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.794  [2024-12-09 04:16:40.171818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.794  qpair failed and we were unable to recover it.
00:26:11.794  [2024-12-09 04:16:40.181712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.794  [2024-12-09 04:16:40.181805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.794  [2024-12-09 04:16:40.181831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.794  [2024-12-09 04:16:40.181845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.794  [2024-12-09 04:16:40.181858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.794  [2024-12-09 04:16:40.181890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.794  qpair failed and we were unable to recover it.
00:26:11.794  [2024-12-09 04:16:40.191745] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.794  [2024-12-09 04:16:40.191846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.794  [2024-12-09 04:16:40.191874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.794  [2024-12-09 04:16:40.191890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.794  [2024-12-09 04:16:40.191908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.794  [2024-12-09 04:16:40.191940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.794  qpair failed and we were unable to recover it.
00:26:11.794  [2024-12-09 04:16:40.201762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.794  [2024-12-09 04:16:40.201845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.794  [2024-12-09 04:16:40.201872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.794  [2024-12-09 04:16:40.201887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.794  [2024-12-09 04:16:40.201899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.794  [2024-12-09 04:16:40.201930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.794  qpair failed and we were unable to recover it.
00:26:11.794  [2024-12-09 04:16:40.211789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.794  [2024-12-09 04:16:40.211877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.794  [2024-12-09 04:16:40.211903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.794  [2024-12-09 04:16:40.211918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.794  [2024-12-09 04:16:40.211931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.794  [2024-12-09 04:16:40.211963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.794  qpair failed and we were unable to recover it.
00:26:11.794  [2024-12-09 04:16:40.221821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.794  [2024-12-09 04:16:40.221919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.794  [2024-12-09 04:16:40.221944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.794  [2024-12-09 04:16:40.221960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.794  [2024-12-09 04:16:40.221973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.794  [2024-12-09 04:16:40.222004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.794  qpair failed and we were unable to recover it.
00:26:11.794  [2024-12-09 04:16:40.231878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.794  [2024-12-09 04:16:40.231962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.794  [2024-12-09 04:16:40.231989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.794  [2024-12-09 04:16:40.232003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.794  [2024-12-09 04:16:40.232016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.794  [2024-12-09 04:16:40.232046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.794  qpair failed and we were unable to recover it.
00:26:11.794  [2024-12-09 04:16:40.241875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.794  [2024-12-09 04:16:40.241960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.794  [2024-12-09 04:16:40.241986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.794  [2024-12-09 04:16:40.242001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.794  [2024-12-09 04:16:40.242014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.794  [2024-12-09 04:16:40.242045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.794  qpair failed and we were unable to recover it.
00:26:11.794  [2024-12-09 04:16:40.251894] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.795  [2024-12-09 04:16:40.252008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.795  [2024-12-09 04:16:40.252035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.795  [2024-12-09 04:16:40.252050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.795  [2024-12-09 04:16:40.252063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.795  [2024-12-09 04:16:40.252106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.795  qpair failed and we were unable to recover it.
00:26:11.795  [2024-12-09 04:16:40.261957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.795  [2024-12-09 04:16:40.262061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.795  [2024-12-09 04:16:40.262086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.795  [2024-12-09 04:16:40.262101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.795  [2024-12-09 04:16:40.262114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:11.795  [2024-12-09 04:16:40.262145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:11.795  qpair failed and we were unable to recover it.
00:26:12.056  [2024-12-09 04:16:40.602892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.056  [2024-12-09 04:16:40.603027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.056  [2024-12-09 04:16:40.603053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.056  [2024-12-09 04:16:40.603067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.056  [2024-12-09 04:16:40.603080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.056  [2024-12-09 04:16:40.603110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.056  qpair failed and we were unable to recover it.
00:26:12.056  [2024-12-09 04:16:40.612955] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.056  [2024-12-09 04:16:40.613045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.056  [2024-12-09 04:16:40.613071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.056  [2024-12-09 04:16:40.613086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.056  [2024-12-09 04:16:40.613098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.056  [2024-12-09 04:16:40.613128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.056  qpair failed and we were unable to recover it.
00:26:12.056  [2024-12-09 04:16:40.622941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.056  [2024-12-09 04:16:40.623030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.056  [2024-12-09 04:16:40.623054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.056  [2024-12-09 04:16:40.623069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.056  [2024-12-09 04:16:40.623082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.056  [2024-12-09 04:16:40.623119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.056  qpair failed and we were unable to recover it.
00:26:12.315  [2024-12-09 04:16:40.632969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.315  [2024-12-09 04:16:40.633057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.315  [2024-12-09 04:16:40.633081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.315  [2024-12-09 04:16:40.633096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.315  [2024-12-09 04:16:40.633109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.315  [2024-12-09 04:16:40.633139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.315  qpair failed and we were unable to recover it.
00:26:12.315  [2024-12-09 04:16:40.643008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.315  [2024-12-09 04:16:40.643089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.315  [2024-12-09 04:16:40.643113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.315  [2024-12-09 04:16:40.643127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.315  [2024-12-09 04:16:40.643140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.315  [2024-12-09 04:16:40.643170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.315  qpair failed and we were unable to recover it.
00:26:12.315  [2024-12-09 04:16:40.653043] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.315  [2024-12-09 04:16:40.653151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.315  [2024-12-09 04:16:40.653175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.315  [2024-12-09 04:16:40.653190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.315  [2024-12-09 04:16:40.653203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.315  [2024-12-09 04:16:40.653234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.315  qpair failed and we were unable to recover it.
00:26:12.315  [2024-12-09 04:16:40.663187] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.315  [2024-12-09 04:16:40.663298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.315  [2024-12-09 04:16:40.663324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.315  [2024-12-09 04:16:40.663339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.315  [2024-12-09 04:16:40.663353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.315  [2024-12-09 04:16:40.663383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.315  qpair failed and we were unable to recover it.
00:26:12.315  [2024-12-09 04:16:40.673175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.315  [2024-12-09 04:16:40.673268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.315  [2024-12-09 04:16:40.673303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.315  [2024-12-09 04:16:40.673318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.315  [2024-12-09 04:16:40.673332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.315  [2024-12-09 04:16:40.673362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.315  qpair failed and we were unable to recover it.
00:26:12.315  [2024-12-09 04:16:40.683159] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.315  [2024-12-09 04:16:40.683245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.315  [2024-12-09 04:16:40.683269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.315  [2024-12-09 04:16:40.683293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.315  [2024-12-09 04:16:40.683307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.315  [2024-12-09 04:16:40.683338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.315  qpair failed and we were unable to recover it.
00:26:12.315  [2024-12-09 04:16:40.693188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.315  [2024-12-09 04:16:40.693279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.315  [2024-12-09 04:16:40.693312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.315  [2024-12-09 04:16:40.693327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.315  [2024-12-09 04:16:40.693341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.315  [2024-12-09 04:16:40.693372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.315  qpair failed and we were unable to recover it.
00:26:12.315  [2024-12-09 04:16:40.703174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.315  [2024-12-09 04:16:40.703264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.315  [2024-12-09 04:16:40.703299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.315  [2024-12-09 04:16:40.703316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.315  [2024-12-09 04:16:40.703330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.315  [2024-12-09 04:16:40.703373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.315  qpair failed and we were unable to recover it.
00:26:12.315  [2024-12-09 04:16:40.713193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.315  [2024-12-09 04:16:40.713291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.315  [2024-12-09 04:16:40.713322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.315  [2024-12-09 04:16:40.713338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.315  [2024-12-09 04:16:40.713351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.315  [2024-12-09 04:16:40.713384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.315  qpair failed and we were unable to recover it.
00:26:12.315  [2024-12-09 04:16:40.723225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.315  [2024-12-09 04:16:40.723358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.315  [2024-12-09 04:16:40.723386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.315  [2024-12-09 04:16:40.723402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.315  [2024-12-09 04:16:40.723415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.315  [2024-12-09 04:16:40.723447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.315  qpair failed and we were unable to recover it.
00:26:12.315  [2024-12-09 04:16:40.733286] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.315  [2024-12-09 04:16:40.733372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.315  [2024-12-09 04:16:40.733396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.315  [2024-12-09 04:16:40.733410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.315  [2024-12-09 04:16:40.733423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.315  [2024-12-09 04:16:40.733455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.315  qpair failed and we were unable to recover it.
00:26:12.315  [2024-12-09 04:16:40.743310] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.315  [2024-12-09 04:16:40.743432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.315  [2024-12-09 04:16:40.743459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.316  [2024-12-09 04:16:40.743474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.316  [2024-12-09 04:16:40.743487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.316  [2024-12-09 04:16:40.743518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.316  qpair failed and we were unable to recover it.
00:26:12.316  [2024-12-09 04:16:40.753382] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.316  [2024-12-09 04:16:40.753469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.316  [2024-12-09 04:16:40.753495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.316  [2024-12-09 04:16:40.753509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.316  [2024-12-09 04:16:40.753527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.316  [2024-12-09 04:16:40.753559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.316  qpair failed and we were unable to recover it.
00:26:12.316  [2024-12-09 04:16:40.763330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.316  [2024-12-09 04:16:40.763451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.316  [2024-12-09 04:16:40.763479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.316  [2024-12-09 04:16:40.763494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.316  [2024-12-09 04:16:40.763507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.316  [2024-12-09 04:16:40.763537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.316  qpair failed and we were unable to recover it.
00:26:12.316  [2024-12-09 04:16:40.773416] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.316  [2024-12-09 04:16:40.773520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.316  [2024-12-09 04:16:40.773547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.316  [2024-12-09 04:16:40.773563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.316  [2024-12-09 04:16:40.773575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.316  [2024-12-09 04:16:40.773606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.316  qpair failed and we were unable to recover it.
00:26:12.316  [2024-12-09 04:16:40.783406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.316  [2024-12-09 04:16:40.783510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.316  [2024-12-09 04:16:40.783540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.316  [2024-12-09 04:16:40.783555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.316  [2024-12-09 04:16:40.783568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.316  [2024-12-09 04:16:40.783598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.316  qpair failed and we were unable to recover it.
00:26:12.316  [2024-12-09 04:16:40.793471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.316  [2024-12-09 04:16:40.793560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.316  [2024-12-09 04:16:40.793586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.316  [2024-12-09 04:16:40.793601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.316  [2024-12-09 04:16:40.793613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.316  [2024-12-09 04:16:40.793643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.316  qpair failed and we were unable to recover it.
00:26:12.316  [2024-12-09 04:16:40.803482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.316  [2024-12-09 04:16:40.803567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.316  [2024-12-09 04:16:40.803592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.316  [2024-12-09 04:16:40.803606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.316  [2024-12-09 04:16:40.803618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.316  [2024-12-09 04:16:40.803648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.316  qpair failed and we were unable to recover it.
00:26:12.316  [2024-12-09 04:16:40.813464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.316  [2024-12-09 04:16:40.813563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.316  [2024-12-09 04:16:40.813588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.316  [2024-12-09 04:16:40.813604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.316  [2024-12-09 04:16:40.813616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.316  [2024-12-09 04:16:40.813646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.316  qpair failed and we were unable to recover it.
00:26:12.316  [2024-12-09 04:16:40.823621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.316  [2024-12-09 04:16:40.823710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.316  [2024-12-09 04:16:40.823735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.316  [2024-12-09 04:16:40.823749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.316  [2024-12-09 04:16:40.823773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.316  [2024-12-09 04:16:40.823803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.316  qpair failed and we were unable to recover it.
00:26:12.316  [2024-12-09 04:16:40.833530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.316  [2024-12-09 04:16:40.833608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.316  [2024-12-09 04:16:40.833633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.316  [2024-12-09 04:16:40.833648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.316  [2024-12-09 04:16:40.833660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.316  [2024-12-09 04:16:40.833691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.316  qpair failed and we were unable to recover it.
00:26:12.316  [2024-12-09 04:16:40.843588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.316  [2024-12-09 04:16:40.843677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.316  [2024-12-09 04:16:40.843710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.316  [2024-12-09 04:16:40.843726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.316  [2024-12-09 04:16:40.843739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.316  [2024-12-09 04:16:40.843769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.316  qpair failed and we were unable to recover it.
00:26:12.316  [2024-12-09 04:16:40.853698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.316  [2024-12-09 04:16:40.853786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.316  [2024-12-09 04:16:40.853811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.316  [2024-12-09 04:16:40.853825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.316  [2024-12-09 04:16:40.853838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.316  [2024-12-09 04:16:40.853868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.316  qpair failed and we were unable to recover it.
00:26:12.316  [2024-12-09 04:16:40.863667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.316  [2024-12-09 04:16:40.863761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.316  [2024-12-09 04:16:40.863786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.316  [2024-12-09 04:16:40.863801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.316  [2024-12-09 04:16:40.863814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.316  [2024-12-09 04:16:40.863845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.316  qpair failed and we were unable to recover it.
00:26:12.316  [2024-12-09 04:16:40.873659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.316  [2024-12-09 04:16:40.873745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.316  [2024-12-09 04:16:40.873773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.316  [2024-12-09 04:16:40.873789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.316  [2024-12-09 04:16:40.873801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.316  [2024-12-09 04:16:40.873832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.316  qpair failed and we were unable to recover it.
00:26:12.316  [2024-12-09 04:16:40.883676] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.316  [2024-12-09 04:16:40.883785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.316  [2024-12-09 04:16:40.883810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.316  [2024-12-09 04:16:40.883824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.316  [2024-12-09 04:16:40.883854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.316  [2024-12-09 04:16:40.883886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.316  qpair failed and we were unable to recover it.
00:26:12.574  [2024-12-09 04:16:40.893785] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.574  [2024-12-09 04:16:40.893869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.574  [2024-12-09 04:16:40.893894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.574  [2024-12-09 04:16:40.893910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.574  [2024-12-09 04:16:40.893923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.574  [2024-12-09 04:16:40.893953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.574  qpair failed and we were unable to recover it.
00:26:12.574  [2024-12-09 04:16:40.903810] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.574  [2024-12-09 04:16:40.903948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.574  [2024-12-09 04:16:40.903973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.574  [2024-12-09 04:16:40.903988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.574  [2024-12-09 04:16:40.904001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.574  [2024-12-09 04:16:40.904031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.574  qpair failed and we were unable to recover it.
00:26:12.574  [2024-12-09 04:16:40.913800] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.574  [2024-12-09 04:16:40.913885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.574  [2024-12-09 04:16:40.913911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.574  [2024-12-09 04:16:40.913926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.574  [2024-12-09 04:16:40.913939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.574  [2024-12-09 04:16:40.913982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.574  qpair failed and we were unable to recover it.
00:26:12.574  [2024-12-09 04:16:40.923883] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.574  [2024-12-09 04:16:40.923999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.574  [2024-12-09 04:16:40.924025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.574  [2024-12-09 04:16:40.924040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.574  [2024-12-09 04:16:40.924053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.574  [2024-12-09 04:16:40.924086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.574  qpair failed and we were unable to recover it.
00:26:12.574  [2024-12-09 04:16:40.933816] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.574  [2024-12-09 04:16:40.933901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.574  [2024-12-09 04:16:40.933926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.574  [2024-12-09 04:16:40.933941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.574  [2024-12-09 04:16:40.933954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.574  [2024-12-09 04:16:40.933985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.574  qpair failed and we were unable to recover it.
00:26:12.574  [2024-12-09 04:16:40.943894] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.574  [2024-12-09 04:16:40.944002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.574  [2024-12-09 04:16:40.944027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.574  [2024-12-09 04:16:40.944041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.574  [2024-12-09 04:16:40.944054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.574  [2024-12-09 04:16:40.944085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.574  qpair failed and we were unable to recover it.
00:26:12.574  [2024-12-09 04:16:40.953890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.574  [2024-12-09 04:16:40.953994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.574  [2024-12-09 04:16:40.954019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.574  [2024-12-09 04:16:40.954033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.574  [2024-12-09 04:16:40.954046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.574  [2024-12-09 04:16:40.954076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.574  qpair failed and we were unable to recover it.
00:26:12.574  [2024-12-09 04:16:40.963887] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.574  [2024-12-09 04:16:40.963973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.574  [2024-12-09 04:16:40.963999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.574  [2024-12-09 04:16:40.964015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.574  [2024-12-09 04:16:40.964028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.575  [2024-12-09 04:16:40.964058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.575  qpair failed and we were unable to recover it.
00:26:12.575  [2024-12-09 04:16:40.973942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.575  [2024-12-09 04:16:40.974024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.575  [2024-12-09 04:16:40.974055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.575  [2024-12-09 04:16:40.974071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.575  [2024-12-09 04:16:40.974084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.575  [2024-12-09 04:16:40.974127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.575  qpair failed and we were unable to recover it.
00:26:12.575  [2024-12-09 04:16:40.983966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.575  [2024-12-09 04:16:40.984058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.575  [2024-12-09 04:16:40.984084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.575  [2024-12-09 04:16:40.984099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.575  [2024-12-09 04:16:40.984112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.575  [2024-12-09 04:16:40.984142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.575  qpair failed and we were unable to recover it.
00:26:12.575  [2024-12-09 04:16:40.994033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.575  [2024-12-09 04:16:40.994121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.575  [2024-12-09 04:16:40.994147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.575  [2024-12-09 04:16:40.994162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.575  [2024-12-09 04:16:40.994175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.575  [2024-12-09 04:16:40.994205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.575  qpair failed and we were unable to recover it.
00:26:12.575  [2024-12-09 04:16:41.004021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.575  [2024-12-09 04:16:41.004103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.575  [2024-12-09 04:16:41.004128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.575  [2024-12-09 04:16:41.004143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.575  [2024-12-09 04:16:41.004156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.575  [2024-12-09 04:16:41.004187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.575  qpair failed and we were unable to recover it.
00:26:12.575  [2024-12-09 04:16:41.014051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.575  [2024-12-09 04:16:41.014166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.575  [2024-12-09 04:16:41.014193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.575  [2024-12-09 04:16:41.014214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.575  [2024-12-09 04:16:41.014228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.575  [2024-12-09 04:16:41.014281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.575  qpair failed and we were unable to recover it.
00:26:12.575  [2024-12-09 04:16:41.024073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.575  [2024-12-09 04:16:41.024160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.575  [2024-12-09 04:16:41.024186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.575  [2024-12-09 04:16:41.024201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.575  [2024-12-09 04:16:41.024214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.575  [2024-12-09 04:16:41.024245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.575  qpair failed and we were unable to recover it.
00:26:12.575  [2024-12-09 04:16:41.034099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.575  [2024-12-09 04:16:41.034186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.575  [2024-12-09 04:16:41.034211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.575  [2024-12-09 04:16:41.034226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.575  [2024-12-09 04:16:41.034239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.575  [2024-12-09 04:16:41.034270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.575  qpair failed and we were unable to recover it.
00:26:12.575  [2024-12-09 04:16:41.044151] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.575  [2024-12-09 04:16:41.044264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.575  [2024-12-09 04:16:41.044296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.575  [2024-12-09 04:16:41.044312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.575  [2024-12-09 04:16:41.044324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.575  [2024-12-09 04:16:41.044355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.575  qpair failed and we were unable to recover it.
00:26:12.575  [2024-12-09 04:16:41.054252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.575  [2024-12-09 04:16:41.054344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.575  [2024-12-09 04:16:41.054370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.575  [2024-12-09 04:16:41.054385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.575  [2024-12-09 04:16:41.054398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.575  [2024-12-09 04:16:41.054429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.575  qpair failed and we were unable to recover it.
00:26:12.575  [2024-12-09 04:16:41.064171] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.575  [2024-12-09 04:16:41.064310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.575  [2024-12-09 04:16:41.064336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.575  [2024-12-09 04:16:41.064351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.575  [2024-12-09 04:16:41.064364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.575  [2024-12-09 04:16:41.064395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.575  qpair failed and we were unable to recover it.
00:26:12.575  [2024-12-09 04:16:41.074199] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.575  [2024-12-09 04:16:41.074336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.575  [2024-12-09 04:16:41.074362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.575  [2024-12-09 04:16:41.074377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.575  [2024-12-09 04:16:41.074391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.575  [2024-12-09 04:16:41.074422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.575  qpair failed and we were unable to recover it.
00:26:12.575  [2024-12-09 04:16:41.084226] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.575  [2024-12-09 04:16:41.084320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.575  [2024-12-09 04:16:41.084345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.575  [2024-12-09 04:16:41.084360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.575  [2024-12-09 04:16:41.084373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.575  [2024-12-09 04:16:41.084403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.575  qpair failed and we were unable to recover it.
00:26:12.575  [2024-12-09 04:16:41.094250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.575  [2024-12-09 04:16:41.094339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.575  [2024-12-09 04:16:41.094364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.575  [2024-12-09 04:16:41.094379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.575  [2024-12-09 04:16:41.094391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.575  [2024-12-09 04:16:41.094424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.576  qpair failed and we were unable to recover it.
00:26:12.576  [2024-12-09 04:16:41.104315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.576  [2024-12-09 04:16:41.104412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.576  [2024-12-09 04:16:41.104437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.576  [2024-12-09 04:16:41.104453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.576  [2024-12-09 04:16:41.104466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.576  [2024-12-09 04:16:41.104497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.576  qpair failed and we were unable to recover it.
00:26:12.576  [2024-12-09 04:16:41.114418] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.576  [2024-12-09 04:16:41.114507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.576  [2024-12-09 04:16:41.114532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.576  [2024-12-09 04:16:41.114547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.576  [2024-12-09 04:16:41.114561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.576  [2024-12-09 04:16:41.114591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.576  qpair failed and we were unable to recover it.
00:26:12.576  [2024-12-09 04:16:41.124372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.576  [2024-12-09 04:16:41.124491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.576  [2024-12-09 04:16:41.124516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.576  [2024-12-09 04:16:41.124531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.576  [2024-12-09 04:16:41.124545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.576  [2024-12-09 04:16:41.124576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.576  qpair failed and we were unable to recover it.
00:26:12.576  [2024-12-09 04:16:41.134396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.576  [2024-12-09 04:16:41.134483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.576  [2024-12-09 04:16:41.134508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.576  [2024-12-09 04:16:41.134523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.576  [2024-12-09 04:16:41.134536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.576  [2024-12-09 04:16:41.134567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.576  qpair failed and we were unable to recover it.
00:26:12.576  [2024-12-09 04:16:41.144421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.576  [2024-12-09 04:16:41.144508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.576  [2024-12-09 04:16:41.144532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.576  [2024-12-09 04:16:41.144553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.576  [2024-12-09 04:16:41.144567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.576  [2024-12-09 04:16:41.144598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.576  qpair failed and we were unable to recover it.
00:26:12.834  [2024-12-09 04:16:41.154446] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.834  [2024-12-09 04:16:41.154531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.834  [2024-12-09 04:16:41.154556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.834  [2024-12-09 04:16:41.154571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.834  [2024-12-09 04:16:41.154583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.834  [2024-12-09 04:16:41.154613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.834  qpair failed and we were unable to recover it.
00:26:12.834  [2024-12-09 04:16:41.164460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.834  [2024-12-09 04:16:41.164540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.834  [2024-12-09 04:16:41.164565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.834  [2024-12-09 04:16:41.164580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.834  [2024-12-09 04:16:41.164593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.834  [2024-12-09 04:16:41.164625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.834  qpair failed and we were unable to recover it.
00:26:12.834  [2024-12-09 04:16:41.174498] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.834  [2024-12-09 04:16:41.174590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.834  [2024-12-09 04:16:41.174614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.834  [2024-12-09 04:16:41.174629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.834  [2024-12-09 04:16:41.174642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.834  [2024-12-09 04:16:41.174672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.834  qpair failed and we were unable to recover it.
00:26:12.834  [2024-12-09 04:16:41.184588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.834  [2024-12-09 04:16:41.184681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.834  [2024-12-09 04:16:41.184706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.834  [2024-12-09 04:16:41.184721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.834  [2024-12-09 04:16:41.184734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.834  [2024-12-09 04:16:41.184770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.834  qpair failed and we were unable to recover it.
00:26:12.834  [2024-12-09 04:16:41.194566] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.834  [2024-12-09 04:16:41.194701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.834  [2024-12-09 04:16:41.194726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.834  [2024-12-09 04:16:41.194741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.834  [2024-12-09 04:16:41.194754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.834  [2024-12-09 04:16:41.194784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.834  qpair failed and we were unable to recover it.
00:26:12.834  [2024-12-09 04:16:41.204632] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.834  [2024-12-09 04:16:41.204723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.834  [2024-12-09 04:16:41.204748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.834  [2024-12-09 04:16:41.204764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.834  [2024-12-09 04:16:41.204777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.834  [2024-12-09 04:16:41.204807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.834  qpair failed and we were unable to recover it.
00:26:12.834  [2024-12-09 04:16:41.214649] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.834  [2024-12-09 04:16:41.214750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.834  [2024-12-09 04:16:41.214776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.834  [2024-12-09 04:16:41.214791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.834  [2024-12-09 04:16:41.214805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.834  [2024-12-09 04:16:41.214836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.834  qpair failed and we were unable to recover it.
00:26:12.834  [2024-12-09 04:16:41.224642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.834  [2024-12-09 04:16:41.224736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.834  [2024-12-09 04:16:41.224764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.834  [2024-12-09 04:16:41.224780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.834  [2024-12-09 04:16:41.224793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.834  [2024-12-09 04:16:41.224824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.834  qpair failed and we were unable to recover it.
00:26:12.834  [2024-12-09 04:16:41.234728] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.834  [2024-12-09 04:16:41.234812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.834  [2024-12-09 04:16:41.234837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.834  [2024-12-09 04:16:41.234852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.834  [2024-12-09 04:16:41.234865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.834  [2024-12-09 04:16:41.234896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.834  qpair failed and we were unable to recover it.
00:26:12.834  [2024-12-09 04:16:41.244694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.834  [2024-12-09 04:16:41.244802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.834  [2024-12-09 04:16:41.244827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.834  [2024-12-09 04:16:41.244842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.834  [2024-12-09 04:16:41.244856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.834  [2024-12-09 04:16:41.244887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.834  qpair failed and we were unable to recover it.
00:26:12.834  [2024-12-09 04:16:41.254781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.834  [2024-12-09 04:16:41.254896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.834  [2024-12-09 04:16:41.254921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.834  [2024-12-09 04:16:41.254936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.834  [2024-12-09 04:16:41.254949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.834  [2024-12-09 04:16:41.254980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.834  qpair failed and we were unable to recover it.
00:26:12.834  [2024-12-09 04:16:41.264779] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.834  [2024-12-09 04:16:41.264874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.834  [2024-12-09 04:16:41.264899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.834  [2024-12-09 04:16:41.264914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.834  [2024-12-09 04:16:41.264927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.834  [2024-12-09 04:16:41.264957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.834  qpair failed and we were unable to recover it.
00:26:12.834  [2024-12-09 04:16:41.274762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.834  [2024-12-09 04:16:41.274845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.834  [2024-12-09 04:16:41.274876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.834  [2024-12-09 04:16:41.274892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.834  [2024-12-09 04:16:41.274905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.834  [2024-12-09 04:16:41.274935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.834  qpair failed and we were unable to recover it.
00:26:12.834  [2024-12-09 04:16:41.284810] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.834  [2024-12-09 04:16:41.284924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.835  [2024-12-09 04:16:41.284948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.835  [2024-12-09 04:16:41.284963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.835  [2024-12-09 04:16:41.284976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.835  [2024-12-09 04:16:41.285007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.835  qpair failed and we were unable to recover it.
00:26:12.835  [2024-12-09 04:16:41.294835] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.835  [2024-12-09 04:16:41.294916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.835  [2024-12-09 04:16:41.294941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.835  [2024-12-09 04:16:41.294957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.835  [2024-12-09 04:16:41.294970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.835  [2024-12-09 04:16:41.295000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.835  qpair failed and we were unable to recover it.
00:26:12.835  [2024-12-09 04:16:41.304883] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.835  [2024-12-09 04:16:41.304970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.835  [2024-12-09 04:16:41.304995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.835  [2024-12-09 04:16:41.305009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.835  [2024-12-09 04:16:41.305023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.835  [2024-12-09 04:16:41.305053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.835  qpair failed and we were unable to recover it.
00:26:12.835  [2024-12-09 04:16:41.314908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.835  [2024-12-09 04:16:41.314997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.835  [2024-12-09 04:16:41.315022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.835  [2024-12-09 04:16:41.315037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.835  [2024-12-09 04:16:41.315055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.835  [2024-12-09 04:16:41.315087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.835  qpair failed and we were unable to recover it.
00:26:12.835  [2024-12-09 04:16:41.324936] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.835  [2024-12-09 04:16:41.325031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.835  [2024-12-09 04:16:41.325057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.835  [2024-12-09 04:16:41.325072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.835  [2024-12-09 04:16:41.325085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.835  [2024-12-09 04:16:41.325116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.835  qpair failed and we were unable to recover it.
00:26:12.835  [2024-12-09 04:16:41.334960] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.835  [2024-12-09 04:16:41.335059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.835  [2024-12-09 04:16:41.335084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.835  [2024-12-09 04:16:41.335099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.835  [2024-12-09 04:16:41.335112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.835  [2024-12-09 04:16:41.335142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.835  qpair failed and we were unable to recover it.
00:26:12.835  [2024-12-09 04:16:41.344976] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.835  [2024-12-09 04:16:41.345065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.835  [2024-12-09 04:16:41.345090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.835  [2024-12-09 04:16:41.345105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.835  [2024-12-09 04:16:41.345118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.835  [2024-12-09 04:16:41.345148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.835  qpair failed and we were unable to recover it.
00:26:12.835  [2024-12-09 04:16:41.355037] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.835  [2024-12-09 04:16:41.355126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.835  [2024-12-09 04:16:41.355151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.835  [2024-12-09 04:16:41.355165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.835  [2024-12-09 04:16:41.355178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.835  [2024-12-09 04:16:41.355208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.835  qpair failed and we were unable to recover it.
00:26:12.835  [2024-12-09 04:16:41.365026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.835  [2024-12-09 04:16:41.365118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.835  [2024-12-09 04:16:41.365143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.835  [2024-12-09 04:16:41.365158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.835  [2024-12-09 04:16:41.365171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.835  [2024-12-09 04:16:41.365202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.835  qpair failed and we were unable to recover it.
00:26:12.835  [2024-12-09 04:16:41.375064] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.835  [2024-12-09 04:16:41.375165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.835  [2024-12-09 04:16:41.375194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.835  [2024-12-09 04:16:41.375210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.835  [2024-12-09 04:16:41.375223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.835  [2024-12-09 04:16:41.375254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.835  qpair failed and we were unable to recover it.
00:26:12.835  [2024-12-09 04:16:41.385192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.835  [2024-12-09 04:16:41.385311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.835  [2024-12-09 04:16:41.385338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.835  [2024-12-09 04:16:41.385353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.835  [2024-12-09 04:16:41.385366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.835  [2024-12-09 04:16:41.385400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.835  qpair failed and we were unable to recover it.
00:26:12.835  [2024-12-09 04:16:41.395147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.835  [2024-12-09 04:16:41.395294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.835  [2024-12-09 04:16:41.395319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.835  [2024-12-09 04:16:41.395334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.835  [2024-12-09 04:16:41.395348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.835  [2024-12-09 04:16:41.395379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.835  qpair failed and we were unable to recover it.
00:26:12.835  [2024-12-09 04:16:41.405148] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.835  [2024-12-09 04:16:41.405233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.835  [2024-12-09 04:16:41.405264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.835  [2024-12-09 04:16:41.405292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.835  [2024-12-09 04:16:41.405306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:12.835  [2024-12-09 04:16:41.405336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.835  qpair failed and we were unable to recover it.
00:26:13.094  [2024-12-09 04:16:41.415169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.094  [2024-12-09 04:16:41.415258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.094  [2024-12-09 04:16:41.415293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.094  [2024-12-09 04:16:41.415315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.094  [2024-12-09 04:16:41.415328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.094  [2024-12-09 04:16:41.415359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.094  qpair failed and we were unable to recover it.
00:26:13.094  [2024-12-09 04:16:41.425212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.094  [2024-12-09 04:16:41.425313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.094  [2024-12-09 04:16:41.425338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.094  [2024-12-09 04:16:41.425353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.094  [2024-12-09 04:16:41.425366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.094  [2024-12-09 04:16:41.425397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.094  qpair failed and we were unable to recover it.
00:26:13.094  [2024-12-09 04:16:41.435242] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.094  [2024-12-09 04:16:41.435344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.094  [2024-12-09 04:16:41.435375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.094  [2024-12-09 04:16:41.435393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.094  [2024-12-09 04:16:41.435406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.094  [2024-12-09 04:16:41.435438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.094  qpair failed and we were unable to recover it.
00:26:13.094  [2024-12-09 04:16:41.445266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.094  [2024-12-09 04:16:41.445362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.094  [2024-12-09 04:16:41.445388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.094  [2024-12-09 04:16:41.445403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.094  [2024-12-09 04:16:41.445421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.094  [2024-12-09 04:16:41.445453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.094  qpair failed and we were unable to recover it.
00:26:13.094  [2024-12-09 04:16:41.455371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.094  [2024-12-09 04:16:41.455455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.094  [2024-12-09 04:16:41.455481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.094  [2024-12-09 04:16:41.455495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.094  [2024-12-09 04:16:41.455509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.094  [2024-12-09 04:16:41.455540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.094  qpair failed and we were unable to recover it.
00:26:13.094  [2024-12-09 04:16:41.465377] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.094  [2024-12-09 04:16:41.465486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.094  [2024-12-09 04:16:41.465511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.094  [2024-12-09 04:16:41.465526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.094  [2024-12-09 04:16:41.465539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.094  [2024-12-09 04:16:41.465570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.094  qpair failed and we were unable to recover it.
00:26:13.094  [2024-12-09 04:16:41.475344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.094  [2024-12-09 04:16:41.475429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.094  [2024-12-09 04:16:41.475455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.094  [2024-12-09 04:16:41.475470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.094  [2024-12-09 04:16:41.475483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.094  [2024-12-09 04:16:41.475513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.094  qpair failed and we were unable to recover it.
00:26:13.094  [2024-12-09 04:16:41.485396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.094  [2024-12-09 04:16:41.485484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.094  [2024-12-09 04:16:41.485509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.094  [2024-12-09 04:16:41.485535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.094  [2024-12-09 04:16:41.485548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.094  [2024-12-09 04:16:41.485578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.094  qpair failed and we were unable to recover it.
00:26:13.094  [2024-12-09 04:16:41.495404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.094  [2024-12-09 04:16:41.495485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.094  [2024-12-09 04:16:41.495510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.094  [2024-12-09 04:16:41.495525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.094  [2024-12-09 04:16:41.495538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.094  [2024-12-09 04:16:41.495568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.094  qpair failed and we were unable to recover it.
00:26:13.094  [2024-12-09 04:16:41.505449] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.094  [2024-12-09 04:16:41.505549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.094  [2024-12-09 04:16:41.505573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.094  [2024-12-09 04:16:41.505588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.094  [2024-12-09 04:16:41.505601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.094  [2024-12-09 04:16:41.505632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.094  qpair failed and we were unable to recover it.
00:26:13.094  [2024-12-09 04:16:41.515472] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.094  [2024-12-09 04:16:41.515561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.094  [2024-12-09 04:16:41.515586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.094  [2024-12-09 04:16:41.515601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.094  [2024-12-09 04:16:41.515614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.094  [2024-12-09 04:16:41.515644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.094  qpair failed and we were unable to recover it.
00:26:13.094  [2024-12-09 04:16:41.525622] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.094  [2024-12-09 04:16:41.525757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.094  [2024-12-09 04:16:41.525786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.094  [2024-12-09 04:16:41.525803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.094  [2024-12-09 04:16:41.525816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.094  [2024-12-09 04:16:41.525847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.094  qpair failed and we were unable to recover it.
00:26:13.094  [2024-12-09 04:16:41.535549] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.094  [2024-12-09 04:16:41.535674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.094  [2024-12-09 04:16:41.535707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.094  [2024-12-09 04:16:41.535723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.094  [2024-12-09 04:16:41.535737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.094  [2024-12-09 04:16:41.535768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.094  qpair failed and we were unable to recover it.
00:26:13.094  [2024-12-09 04:16:41.545595] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.094  [2024-12-09 04:16:41.545690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.094  [2024-12-09 04:16:41.545715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.094  [2024-12-09 04:16:41.545731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.094  [2024-12-09 04:16:41.545744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.094  [2024-12-09 04:16:41.545774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.094  qpair failed and we were unable to recover it.
00:26:13.094  [2024-12-09 04:16:41.555590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.094  [2024-12-09 04:16:41.555670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.094  [2024-12-09 04:16:41.555696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.094  [2024-12-09 04:16:41.555711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.094  [2024-12-09 04:16:41.555724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.094  [2024-12-09 04:16:41.555754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.094  qpair failed and we were unable to recover it.
00:26:13.094  [2024-12-09 04:16:41.565640] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.094  [2024-12-09 04:16:41.565730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.094  [2024-12-09 04:16:41.565760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.094  [2024-12-09 04:16:41.565776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.094  [2024-12-09 04:16:41.565790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.094  [2024-12-09 04:16:41.565821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.094  qpair failed and we were unable to recover it.
00:26:13.094  [2024-12-09 04:16:41.575685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.094  [2024-12-09 04:16:41.575775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.094  [2024-12-09 04:16:41.575805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.094  [2024-12-09 04:16:41.575827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.094  [2024-12-09 04:16:41.575841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.094  [2024-12-09 04:16:41.575872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.094  qpair failed and we were unable to recover it.
00:26:13.094  [2024-12-09 04:16:41.585685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.094  [2024-12-09 04:16:41.585773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.094  [2024-12-09 04:16:41.585799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.094  [2024-12-09 04:16:41.585813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.094  [2024-12-09 04:16:41.585827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.094  [2024-12-09 04:16:41.585857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.094  qpair failed and we were unable to recover it.
00:26:13.094  [2024-12-09 04:16:41.595734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.094  [2024-12-09 04:16:41.595818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.094  [2024-12-09 04:16:41.595843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.094  [2024-12-09 04:16:41.595857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.094  [2024-12-09 04:16:41.595870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.094  [2024-12-09 04:16:41.595900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.094  qpair failed and we were unable to recover it.
00:26:13.094  [2024-12-09 04:16:41.605765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.094  [2024-12-09 04:16:41.605851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.094  [2024-12-09 04:16:41.605876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.094  [2024-12-09 04:16:41.605891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.094  [2024-12-09 04:16:41.605904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.094  [2024-12-09 04:16:41.605933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.094  qpair failed and we were unable to recover it.
00:26:13.094  [2024-12-09 04:16:41.615862] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.094  [2024-12-09 04:16:41.615946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.094  [2024-12-09 04:16:41.615972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.094  [2024-12-09 04:16:41.615986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.094  [2024-12-09 04:16:41.615999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.094  [2024-12-09 04:16:41.616034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.094  qpair failed and we were unable to recover it.
00:26:13.094  [2024-12-09 04:16:41.625852] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.094  [2024-12-09 04:16:41.625944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.094  [2024-12-09 04:16:41.625968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.094  [2024-12-09 04:16:41.625983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.094  [2024-12-09 04:16:41.625996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.094  [2024-12-09 04:16:41.626026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.094  qpair failed and we were unable to recover it.
00:26:13.094  [2024-12-09 04:16:41.635864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.094  [2024-12-09 04:16:41.635995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.094  [2024-12-09 04:16:41.636025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.094  [2024-12-09 04:16:41.636041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.094  [2024-12-09 04:16:41.636054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.094  [2024-12-09 04:16:41.636085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.094  qpair failed and we were unable to recover it.
00:26:13.094  [2024-12-09 04:16:41.645967] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.095  [2024-12-09 04:16:41.646045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.095  [2024-12-09 04:16:41.646070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.095  [2024-12-09 04:16:41.646085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.095  [2024-12-09 04:16:41.646097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.095  [2024-12-09 04:16:41.646127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.095  qpair failed and we were unable to recover it.
00:26:13.095  [2024-12-09 04:16:41.655997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.095  [2024-12-09 04:16:41.656075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.095  [2024-12-09 04:16:41.656100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.095  [2024-12-09 04:16:41.656114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.095  [2024-12-09 04:16:41.656128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.095  [2024-12-09 04:16:41.656160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.095  qpair failed and we were unable to recover it.
00:26:13.095  [2024-12-09 04:16:41.665931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.095  [2024-12-09 04:16:41.666056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.095  [2024-12-09 04:16:41.666083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.095  [2024-12-09 04:16:41.666098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.095  [2024-12-09 04:16:41.666111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.095  [2024-12-09 04:16:41.666142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.095  qpair failed and we were unable to recover it.
00:26:13.353  [2024-12-09 04:16:41.675965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.353  [2024-12-09 04:16:41.676056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.353  [2024-12-09 04:16:41.676080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.353  [2024-12-09 04:16:41.676095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.353  [2024-12-09 04:16:41.676108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.353  [2024-12-09 04:16:41.676150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.353  qpair failed and we were unable to recover it.
00:26:13.353  [2024-12-09 04:16:41.686007] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.353  [2024-12-09 04:16:41.686137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.353  [2024-12-09 04:16:41.686164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.353  [2024-12-09 04:16:41.686180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.353  [2024-12-09 04:16:41.686193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.353  [2024-12-09 04:16:41.686223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.353  qpair failed and we were unable to recover it.
00:26:13.353  [2024-12-09 04:16:41.696031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.353  [2024-12-09 04:16:41.696115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.353  [2024-12-09 04:16:41.696140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.353  [2024-12-09 04:16:41.696154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.353  [2024-12-09 04:16:41.696167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.353  [2024-12-09 04:16:41.696196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.353  qpair failed and we were unable to recover it.
00:26:13.353  [2024-12-09 04:16:41.706048] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.353  [2024-12-09 04:16:41.706137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.353  [2024-12-09 04:16:41.706162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.353  [2024-12-09 04:16:41.706183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.353  [2024-12-09 04:16:41.706196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.353  [2024-12-09 04:16:41.706226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.353  qpair failed and we were unable to recover it.
00:26:13.353  [2024-12-09 04:16:41.716065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.353  [2024-12-09 04:16:41.716196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.353  [2024-12-09 04:16:41.716222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.353  [2024-12-09 04:16:41.716236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.353  [2024-12-09 04:16:41.716250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.353  [2024-12-09 04:16:41.716288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.353  qpair failed and we were unable to recover it.
00:26:13.353  [2024-12-09 04:16:41.726074] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.353  [2024-12-09 04:16:41.726159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.353  [2024-12-09 04:16:41.726185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.353  [2024-12-09 04:16:41.726199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.353  [2024-12-09 04:16:41.726212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.353  [2024-12-09 04:16:41.726242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.353  qpair failed and we were unable to recover it.
00:26:13.353  [2024-12-09 04:16:41.736135] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.353  [2024-12-09 04:16:41.736254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.353  [2024-12-09 04:16:41.736288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.353  [2024-12-09 04:16:41.736305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.353  [2024-12-09 04:16:41.736319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.353  [2024-12-09 04:16:41.736362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.353  qpair failed and we were unable to recover it.
00:26:13.353  [2024-12-09 04:16:41.746154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.353  [2024-12-09 04:16:41.746242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.353  [2024-12-09 04:16:41.746267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.353  [2024-12-09 04:16:41.746291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.353  [2024-12-09 04:16:41.746305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.353  [2024-12-09 04:16:41.746341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.353  qpair failed and we were unable to recover it.
00:26:13.353  [2024-12-09 04:16:41.756160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.353  [2024-12-09 04:16:41.756248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.353  [2024-12-09 04:16:41.756281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.353  [2024-12-09 04:16:41.756298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.353  [2024-12-09 04:16:41.756312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.353  [2024-12-09 04:16:41.756342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.354  qpair failed and we were unable to recover it.
00:26:13.354  [2024-12-09 04:16:41.766303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.354  [2024-12-09 04:16:41.766393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.354  [2024-12-09 04:16:41.766420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.354  [2024-12-09 04:16:41.766435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.354  [2024-12-09 04:16:41.766448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.354  [2024-12-09 04:16:41.766479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.354  qpair failed and we were unable to recover it.
00:26:13.354  [2024-12-09 04:16:41.776237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.354  [2024-12-09 04:16:41.776330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.354  [2024-12-09 04:16:41.776355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.354  [2024-12-09 04:16:41.776371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.354  [2024-12-09 04:16:41.776384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.354  [2024-12-09 04:16:41.776414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.354  qpair failed and we were unable to recover it.
00:26:13.354  [2024-12-09 04:16:41.786342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.354  [2024-12-09 04:16:41.786432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.354  [2024-12-09 04:16:41.786462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.354  [2024-12-09 04:16:41.786478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.354  [2024-12-09 04:16:41.786491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.354  [2024-12-09 04:16:41.786522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.354  qpair failed and we were unable to recover it.
00:26:13.354  [2024-12-09 04:16:41.796357] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.354  [2024-12-09 04:16:41.796440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.354  [2024-12-09 04:16:41.796466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.354  [2024-12-09 04:16:41.796481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.354  [2024-12-09 04:16:41.796494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.354  [2024-12-09 04:16:41.796525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.354  qpair failed and we were unable to recover it.
00:26:13.354  [2024-12-09 04:16:41.806317] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.354  [2024-12-09 04:16:41.806404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.354  [2024-12-09 04:16:41.806430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.354  [2024-12-09 04:16:41.806445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.354  [2024-12-09 04:16:41.806458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.354  [2024-12-09 04:16:41.806489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.354  qpair failed and we were unable to recover it.
00:26:13.354  [2024-12-09 04:16:41.816440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.354  [2024-12-09 04:16:41.816524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.354  [2024-12-09 04:16:41.816552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.354  [2024-12-09 04:16:41.816568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.354  [2024-12-09 04:16:41.816581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.354  [2024-12-09 04:16:41.816611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.354  qpair failed and we were unable to recover it.
00:26:13.354  [2024-12-09 04:16:41.826398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.354  [2024-12-09 04:16:41.826491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.354  [2024-12-09 04:16:41.826517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.354  [2024-12-09 04:16:41.826531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.354  [2024-12-09 04:16:41.826544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.354  [2024-12-09 04:16:41.826575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.354  qpair failed and we were unable to recover it.
00:26:13.354  [2024-12-09 04:16:41.836443] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.354  [2024-12-09 04:16:41.836556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.354  [2024-12-09 04:16:41.836588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.354  [2024-12-09 04:16:41.836605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.354  [2024-12-09 04:16:41.836618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.354  [2024-12-09 04:16:41.836662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.354  qpair failed and we were unable to recover it.
00:26:13.354  [2024-12-09 04:16:41.846520] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.354  [2024-12-09 04:16:41.846607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.354  [2024-12-09 04:16:41.846632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.354  [2024-12-09 04:16:41.846647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.354  [2024-12-09 04:16:41.846660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.354  [2024-12-09 04:16:41.846690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.354  qpair failed and we were unable to recover it.
00:26:13.354  [2024-12-09 04:16:41.856502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.354  [2024-12-09 04:16:41.856592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.354  [2024-12-09 04:16:41.856616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.354  [2024-12-09 04:16:41.856631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.354  [2024-12-09 04:16:41.856644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.354  [2024-12-09 04:16:41.856674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.354  qpair failed and we were unable to recover it.
00:26:13.354  [2024-12-09 04:16:41.866553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.354  [2024-12-09 04:16:41.866664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.354  [2024-12-09 04:16:41.866691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.354  [2024-12-09 04:16:41.866706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.354  [2024-12-09 04:16:41.866719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.354  [2024-12-09 04:16:41.866763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.354  qpair failed and we were unable to recover it.
00:26:13.354  [2024-12-09 04:16:41.876524] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.354  [2024-12-09 04:16:41.876610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.354  [2024-12-09 04:16:41.876635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.354  [2024-12-09 04:16:41.876650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.354  [2024-12-09 04:16:41.876668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.354  [2024-12-09 04:16:41.876701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.354  qpair failed and we were unable to recover it.
00:26:13.354  [2024-12-09 04:16:41.886576] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.354  [2024-12-09 04:16:41.886661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.354  [2024-12-09 04:16:41.886686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.354  [2024-12-09 04:16:41.886701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.354  [2024-12-09 04:16:41.886714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.354  [2024-12-09 04:16:41.886745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.354  qpair failed and we were unable to recover it.
00:26:13.355  [2024-12-09 04:16:41.896567] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.355  [2024-12-09 04:16:41.896649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.355  [2024-12-09 04:16:41.896673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.355  [2024-12-09 04:16:41.896688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.355  [2024-12-09 04:16:41.896700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.355  [2024-12-09 04:16:41.896730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.355  qpair failed and we were unable to recover it.
00:26:13.355  [2024-12-09 04:16:41.906644] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.355  [2024-12-09 04:16:41.906752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.355  [2024-12-09 04:16:41.906779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.355  [2024-12-09 04:16:41.906795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.355  [2024-12-09 04:16:41.906807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.355  [2024-12-09 04:16:41.906838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.355  qpair failed and we were unable to recover it.
00:26:13.355  [2024-12-09 04:16:41.916725] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.355  [2024-12-09 04:16:41.916814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.355  [2024-12-09 04:16:41.916840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.355  [2024-12-09 04:16:41.916856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.355  [2024-12-09 04:16:41.916869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.355  [2024-12-09 04:16:41.916899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.355  qpair failed and we were unable to recover it.
00:26:13.355  [2024-12-09 04:16:41.926665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.355  [2024-12-09 04:16:41.926749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.355  [2024-12-09 04:16:41.926776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.355  [2024-12-09 04:16:41.926792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.355  [2024-12-09 04:16:41.926805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.355  [2024-12-09 04:16:41.926835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.355  qpair failed and we were unable to recover it.
00:26:13.613  [2024-12-09 04:16:41.936707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.613  [2024-12-09 04:16:41.936794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.613  [2024-12-09 04:16:41.936823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.613  [2024-12-09 04:16:41.936838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.613  [2024-12-09 04:16:41.936851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.613  [2024-12-09 04:16:41.936882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.613  qpair failed and we were unable to recover it.
00:26:13.613  [2024-12-09 04:16:41.946757] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.614  [2024-12-09 04:16:41.946846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.614  [2024-12-09 04:16:41.946873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.614  [2024-12-09 04:16:41.946888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.614  [2024-12-09 04:16:41.946902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.614  [2024-12-09 04:16:41.946932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.614  qpair failed and we were unable to recover it.
00:26:13.614  [2024-12-09 04:16:41.956754] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.614  [2024-12-09 04:16:41.956843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.614  [2024-12-09 04:16:41.956870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.614  [2024-12-09 04:16:41.956886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.614  [2024-12-09 04:16:41.956899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.614  [2024-12-09 04:16:41.956928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.614  qpair failed and we were unable to recover it.
00:26:13.614  [2024-12-09 04:16:41.966768] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.614  [2024-12-09 04:16:41.966853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.614  [2024-12-09 04:16:41.966883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.614  [2024-12-09 04:16:41.966899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.614  [2024-12-09 04:16:41.966912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.614  [2024-12-09 04:16:41.966942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.614  qpair failed and we were unable to recover it.
00:26:13.614  [2024-12-09 04:16:41.976783] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.614  [2024-12-09 04:16:41.976912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.614  [2024-12-09 04:16:41.976939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.614  [2024-12-09 04:16:41.976955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.614  [2024-12-09 04:16:41.976968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.614  [2024-12-09 04:16:41.976998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.614  qpair failed and we were unable to recover it.
00:26:13.614  [2024-12-09 04:16:41.986847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.614  [2024-12-09 04:16:41.986963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.614  [2024-12-09 04:16:41.986990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.614  [2024-12-09 04:16:41.987005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.614  [2024-12-09 04:16:41.987018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.614  [2024-12-09 04:16:41.987060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.614  qpair failed and we were unable to recover it.
00:26:13.614  [2024-12-09 04:16:41.996851] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.614  [2024-12-09 04:16:41.996939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.614  [2024-12-09 04:16:41.996964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.614  [2024-12-09 04:16:41.996978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.614  [2024-12-09 04:16:41.996992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.614  [2024-12-09 04:16:41.997022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.614  qpair failed and we were unable to recover it.
00:26:13.614  [2024-12-09 04:16:42.006945] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.614  [2024-12-09 04:16:42.007042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.614  [2024-12-09 04:16:42.007066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.614  [2024-12-09 04:16:42.007081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.614  [2024-12-09 04:16:42.007099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.614  [2024-12-09 04:16:42.007130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.614  qpair failed and we were unable to recover it.
00:26:13.614  [2024-12-09 04:16:42.016990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.614  [2024-12-09 04:16:42.017121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.614  [2024-12-09 04:16:42.017148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.614  [2024-12-09 04:16:42.017163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.614  [2024-12-09 04:16:42.017176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.614  [2024-12-09 04:16:42.017206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.614  qpair failed and we were unable to recover it.
00:26:13.614  [2024-12-09 04:16:42.026945] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.614  [2024-12-09 04:16:42.027037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.614  [2024-12-09 04:16:42.027062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.614  [2024-12-09 04:16:42.027076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.614  [2024-12-09 04:16:42.027089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.614  [2024-12-09 04:16:42.027119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.614  qpair failed and we were unable to recover it.
00:26:13.614  [2024-12-09 04:16:42.037061] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.614  [2024-12-09 04:16:42.037151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.614  [2024-12-09 04:16:42.037175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.614  [2024-12-09 04:16:42.037190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.614  [2024-12-09 04:16:42.037203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.614  [2024-12-09 04:16:42.037233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.614  qpair failed and we were unable to recover it.
00:26:13.614  [2024-12-09 04:16:42.047047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.614  [2024-12-09 04:16:42.047141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.614  [2024-12-09 04:16:42.047165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.614  [2024-12-09 04:16:42.047180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.614  [2024-12-09 04:16:42.047193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.614  [2024-12-09 04:16:42.047222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.614  qpair failed and we were unable to recover it.
00:26:13.614  [2024-12-09 04:16:42.057033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.614  [2024-12-09 04:16:42.057113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.614  [2024-12-09 04:16:42.057137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.614  [2024-12-09 04:16:42.057152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.614  [2024-12-09 04:16:42.057165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.614  [2024-12-09 04:16:42.057195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.614  qpair failed and we were unable to recover it.
00:26:13.614  [2024-12-09 04:16:42.067061] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.614  [2024-12-09 04:16:42.067150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.614  [2024-12-09 04:16:42.067175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.614  [2024-12-09 04:16:42.067190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.614  [2024-12-09 04:16:42.067203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.614  [2024-12-09 04:16:42.067234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.614  qpair failed and we were unable to recover it.
00:26:13.614  [2024-12-09 04:16:42.077184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.615  [2024-12-09 04:16:42.077298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.615  [2024-12-09 04:16:42.077326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.615  [2024-12-09 04:16:42.077341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.615  [2024-12-09 04:16:42.077354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.615  [2024-12-09 04:16:42.077386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.615  qpair failed and we were unable to recover it.
00:26:13.615  [2024-12-09 04:16:42.087110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.615  [2024-12-09 04:16:42.087198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.615  [2024-12-09 04:16:42.087225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.615  [2024-12-09 04:16:42.087241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.615  [2024-12-09 04:16:42.087254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.615  [2024-12-09 04:16:42.087294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.615  qpair failed and we were unable to recover it.
00:26:13.615  [2024-12-09 04:16:42.097150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.615  [2024-12-09 04:16:42.097232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.615  [2024-12-09 04:16:42.097262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.615  [2024-12-09 04:16:42.097290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.615  [2024-12-09 04:16:42.097305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.615  [2024-12-09 04:16:42.097349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.615  qpair failed and we were unable to recover it.
00:26:13.615  [2024-12-09 04:16:42.107172] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.615  [2024-12-09 04:16:42.107302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.615  [2024-12-09 04:16:42.107329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.615  [2024-12-09 04:16:42.107344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.615  [2024-12-09 04:16:42.107357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.615  [2024-12-09 04:16:42.107388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.615  qpair failed and we were unable to recover it.
00:26:13.876  [... the 7-line CONNECT failure cycle above (ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair "Unknown controller ID 0x1" -> nvme_fabric.c:599/610 connect failed rc -5, sct 1, sc 130 -> nvme_tcp.c:2348/2125 poll/connect failure on tqpair=0x7fe5b4000b90 -> nvme_qpair.c:812 CQ transport error -6 (No such device or address) on qpair id 4 -> "qpair failed and we were unable to recover it") repeats 33 more times at ~10 ms intervals, target timestamps 04:16:42.117 through 04:16:42.438, console timestamps 00:26:13.615 through 00:26:13.876; the final iteration is cut off at the end of this excerpt ...]
00:26:13.876  qpair failed and we were unable to recover it.
00:26:13.876  [2024-12-09 04:16:42.448171] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:13.877  [2024-12-09 04:16:42.448294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:13.877  [2024-12-09 04:16:42.448325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:13.877  [2024-12-09 04:16:42.448342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:13.877  [2024-12-09 04:16:42.448356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:13.877  [2024-12-09 04:16:42.448387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:13.877  qpair failed and we were unable to recover it.
00:26:14.137  [2024-12-09 04:16:42.458210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.137  [2024-12-09 04:16:42.458308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.137  [2024-12-09 04:16:42.458334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.137  [2024-12-09 04:16:42.458348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.137  [2024-12-09 04:16:42.458362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.137  [2024-12-09 04:16:42.458393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.137  qpair failed and we were unable to recover it.
00:26:14.137  [2024-12-09 04:16:42.468310] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.137  [2024-12-09 04:16:42.468415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.137  [2024-12-09 04:16:42.468443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.137  [2024-12-09 04:16:42.468460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.137  [2024-12-09 04:16:42.468474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.137  [2024-12-09 04:16:42.468507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.137  qpair failed and we were unable to recover it.
00:26:14.137  [2024-12-09 04:16:42.478215] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.137  [2024-12-09 04:16:42.478308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.137  [2024-12-09 04:16:42.478334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.137  [2024-12-09 04:16:42.478349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.137  [2024-12-09 04:16:42.478362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.137  [2024-12-09 04:16:42.478394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.137  qpair failed and we were unable to recover it.
00:26:14.137  [2024-12-09 04:16:42.488257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.137  [2024-12-09 04:16:42.488352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.137  [2024-12-09 04:16:42.488377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.137  [2024-12-09 04:16:42.488392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.137  [2024-12-09 04:16:42.488405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.137  [2024-12-09 04:16:42.488436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.137  qpair failed and we were unable to recover it.
00:26:14.137  [2024-12-09 04:16:42.498263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.137  [2024-12-09 04:16:42.498359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.137  [2024-12-09 04:16:42.498385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.137  [2024-12-09 04:16:42.498399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.137  [2024-12-09 04:16:42.498413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.137  [2024-12-09 04:16:42.498443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.137  qpair failed and we were unable to recover it.
00:26:14.137  [2024-12-09 04:16:42.508372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.137  [2024-12-09 04:16:42.508464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.137  [2024-12-09 04:16:42.508489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.137  [2024-12-09 04:16:42.508504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.137  [2024-12-09 04:16:42.508517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.137  [2024-12-09 04:16:42.508547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.137  qpair failed and we were unable to recover it.
00:26:14.137  [2024-12-09 04:16:42.518428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.137  [2024-12-09 04:16:42.518516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.137  [2024-12-09 04:16:42.518544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.137  [2024-12-09 04:16:42.518560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.137  [2024-12-09 04:16:42.518573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.137  [2024-12-09 04:16:42.518606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.137  qpair failed and we were unable to recover it.
00:26:14.137  [2024-12-09 04:16:42.528363] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.137  [2024-12-09 04:16:42.528448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.137  [2024-12-09 04:16:42.528480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.137  [2024-12-09 04:16:42.528497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.137  [2024-12-09 04:16:42.528510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.137  [2024-12-09 04:16:42.528541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.137  qpair failed and we were unable to recover it.
00:26:14.137  [2024-12-09 04:16:42.538396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.137  [2024-12-09 04:16:42.538483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.137  [2024-12-09 04:16:42.538508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.137  [2024-12-09 04:16:42.538523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.137  [2024-12-09 04:16:42.538537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.137  [2024-12-09 04:16:42.538567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.137  qpair failed and we were unable to recover it.
00:26:14.137  [2024-12-09 04:16:42.548462] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.137  [2024-12-09 04:16:42.548551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.137  [2024-12-09 04:16:42.548575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.138  [2024-12-09 04:16:42.548590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.138  [2024-12-09 04:16:42.548603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.138  [2024-12-09 04:16:42.548634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.138  qpair failed and we were unable to recover it.
00:26:14.138  [2024-12-09 04:16:42.558479] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.138  [2024-12-09 04:16:42.558568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.138  [2024-12-09 04:16:42.558593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.138  [2024-12-09 04:16:42.558607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.138  [2024-12-09 04:16:42.558620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.138  [2024-12-09 04:16:42.558651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.138  qpair failed and we were unable to recover it.
00:26:14.138  [2024-12-09 04:16:42.568463] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.138  [2024-12-09 04:16:42.568544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.138  [2024-12-09 04:16:42.568569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.138  [2024-12-09 04:16:42.568583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.138  [2024-12-09 04:16:42.568602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.138  [2024-12-09 04:16:42.568634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.138  qpair failed and we were unable to recover it.
00:26:14.138  [2024-12-09 04:16:42.578518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.138  [2024-12-09 04:16:42.578610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.138  [2024-12-09 04:16:42.578635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.138  [2024-12-09 04:16:42.578650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.138  [2024-12-09 04:16:42.578663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.138  [2024-12-09 04:16:42.578693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.138  qpair failed and we were unable to recover it.
00:26:14.138  [2024-12-09 04:16:42.588580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.138  [2024-12-09 04:16:42.588675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.138  [2024-12-09 04:16:42.588699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.138  [2024-12-09 04:16:42.588714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.138  [2024-12-09 04:16:42.588727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.138  [2024-12-09 04:16:42.588757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.138  qpair failed and we were unable to recover it.
00:26:14.138  [2024-12-09 04:16:42.598670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.138  [2024-12-09 04:16:42.598762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.138  [2024-12-09 04:16:42.598787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.138  [2024-12-09 04:16:42.598801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.138  [2024-12-09 04:16:42.598814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.138  [2024-12-09 04:16:42.598843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.138  qpair failed and we were unable to recover it.
00:26:14.138  [2024-12-09 04:16:42.608721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.138  [2024-12-09 04:16:42.608806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.138  [2024-12-09 04:16:42.608835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.138  [2024-12-09 04:16:42.608852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.138  [2024-12-09 04:16:42.608865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.138  [2024-12-09 04:16:42.608896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.138  qpair failed and we were unable to recover it.
00:26:14.138  [2024-12-09 04:16:42.618603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.138  [2024-12-09 04:16:42.618688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.138  [2024-12-09 04:16:42.618714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.138  [2024-12-09 04:16:42.618729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.138  [2024-12-09 04:16:42.618741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.138  [2024-12-09 04:16:42.618771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.138  qpair failed and we were unable to recover it.
00:26:14.138  [2024-12-09 04:16:42.628680] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.138  [2024-12-09 04:16:42.628775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.138  [2024-12-09 04:16:42.628800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.138  [2024-12-09 04:16:42.628815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.138  [2024-12-09 04:16:42.628828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.138  [2024-12-09 04:16:42.628859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.138  qpair failed and we were unable to recover it.
00:26:14.138  [2024-12-09 04:16:42.638691] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.138  [2024-12-09 04:16:42.638791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.138  [2024-12-09 04:16:42.638818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.138  [2024-12-09 04:16:42.638833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.138  [2024-12-09 04:16:42.638846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.138  [2024-12-09 04:16:42.638877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.138  qpair failed and we were unable to recover it.
00:26:14.138  [2024-12-09 04:16:42.648727] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.138  [2024-12-09 04:16:42.648808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.138  [2024-12-09 04:16:42.648837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.138  [2024-12-09 04:16:42.648853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.138  [2024-12-09 04:16:42.648866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.138  [2024-12-09 04:16:42.648896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.138  qpair failed and we were unable to recover it.
00:26:14.138  [2024-12-09 04:16:42.658771] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.138  [2024-12-09 04:16:42.658856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.138  [2024-12-09 04:16:42.658890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.138  [2024-12-09 04:16:42.658907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.138  [2024-12-09 04:16:42.658920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.138  [2024-12-09 04:16:42.658950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.138  qpair failed and we were unable to recover it.
00:26:14.138  [2024-12-09 04:16:42.668781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.138  [2024-12-09 04:16:42.668872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.138  [2024-12-09 04:16:42.668900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.138  [2024-12-09 04:16:42.668915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.138  [2024-12-09 04:16:42.668928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.138  [2024-12-09 04:16:42.668959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.138  qpair failed and we were unable to recover it.
00:26:14.138  [2024-12-09 04:16:42.678799] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.138  [2024-12-09 04:16:42.678927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.138  [2024-12-09 04:16:42.678952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.138  [2024-12-09 04:16:42.678967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.139  [2024-12-09 04:16:42.678980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.139  [2024-12-09 04:16:42.679010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.139  qpair failed and we were unable to recover it.
00:26:14.139  [2024-12-09 04:16:42.688930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.139  [2024-12-09 04:16:42.689046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.139  [2024-12-09 04:16:42.689072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.139  [2024-12-09 04:16:42.689087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.139  [2024-12-09 04:16:42.689101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.139  [2024-12-09 04:16:42.689134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.139  qpair failed and we were unable to recover it.
00:26:14.139  [2024-12-09 04:16:42.698905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.139  [2024-12-09 04:16:42.699004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.139  [2024-12-09 04:16:42.699029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.139  [2024-12-09 04:16:42.699050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.139  [2024-12-09 04:16:42.699063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.139  [2024-12-09 04:16:42.699095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.139  qpair failed and we were unable to recover it.
00:26:14.139  [2024-12-09 04:16:42.708970] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.139  [2024-12-09 04:16:42.709108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.139  [2024-12-09 04:16:42.709133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.139  [2024-12-09 04:16:42.709149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.139  [2024-12-09 04:16:42.709161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.139  [2024-12-09 04:16:42.709191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.139  qpair failed and we were unable to recover it.
00:26:14.397  [2024-12-09 04:16:42.718967] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.397  [2024-12-09 04:16:42.719086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.397  [2024-12-09 04:16:42.719112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.397  [2024-12-09 04:16:42.719128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.397  [2024-12-09 04:16:42.719141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.397  [2024-12-09 04:16:42.719172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.397  qpair failed and we were unable to recover it.
00:26:14.397  [2024-12-09 04:16:42.729031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.397  [2024-12-09 04:16:42.729118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.397  [2024-12-09 04:16:42.729144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.397  [2024-12-09 04:16:42.729159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.397  [2024-12-09 04:16:42.729173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.397  [2024-12-09 04:16:42.729204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.397  qpair failed and we were unable to recover it.
00:26:14.398  [2024-12-09 04:16:42.738975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.398  [2024-12-09 04:16:42.739068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.398  [2024-12-09 04:16:42.739096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.398  [2024-12-09 04:16:42.739113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.398  [2024-12-09 04:16:42.739127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.398  [2024-12-09 04:16:42.739165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.398  qpair failed and we were unable to recover it.
00:26:14.398  [2024-12-09 04:16:42.748994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.398  [2024-12-09 04:16:42.749089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.398  [2024-12-09 04:16:42.749114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.398  [2024-12-09 04:16:42.749129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.398  [2024-12-09 04:16:42.749142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.398  [2024-12-09 04:16:42.749173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.398  qpair failed and we were unable to recover it.
00:26:14.398  [2024-12-09 04:16:42.759039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.398  [2024-12-09 04:16:42.759132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.398  [2024-12-09 04:16:42.759157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.398  [2024-12-09 04:16:42.759173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.398  [2024-12-09 04:16:42.759185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.398  [2024-12-09 04:16:42.759216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.398  qpair failed and we were unable to recover it.
00:26:14.398  [2024-12-09 04:16:42.769113] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.398  [2024-12-09 04:16:42.769209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.398  [2024-12-09 04:16:42.769235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.398  [2024-12-09 04:16:42.769250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.398  [2024-12-09 04:16:42.769263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.398  [2024-12-09 04:16:42.769303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.398  qpair failed and we were unable to recover it.
00:26:14.398  [2024-12-09 04:16:42.779057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.398  [2024-12-09 04:16:42.779144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.398  [2024-12-09 04:16:42.779169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.398  [2024-12-09 04:16:42.779184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.398  [2024-12-09 04:16:42.779197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.398  [2024-12-09 04:16:42.779227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.398  qpair failed and we were unable to recover it.
00:26:14.398  [2024-12-09 04:16:42.789115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.398  [2024-12-09 04:16:42.789221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.398  [2024-12-09 04:16:42.789246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.398  [2024-12-09 04:16:42.789261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.398  [2024-12-09 04:16:42.789282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.398  [2024-12-09 04:16:42.789315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.398  qpair failed and we were unable to recover it.
00:26:14.398  [2024-12-09 04:16:42.799121] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.398  [2024-12-09 04:16:42.799216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.398  [2024-12-09 04:16:42.799241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.398  [2024-12-09 04:16:42.799256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.398  [2024-12-09 04:16:42.799269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.398  [2024-12-09 04:16:42.799308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.398  qpair failed and we were unable to recover it.
00:26:14.398  [2024-12-09 04:16:42.809150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.398  [2024-12-09 04:16:42.809245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.398  [2024-12-09 04:16:42.809269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.398  [2024-12-09 04:16:42.809297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.398  [2024-12-09 04:16:42.809311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.398  [2024-12-09 04:16:42.809341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.398  qpair failed and we were unable to recover it.
00:26:14.398  [2024-12-09 04:16:42.819185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.398  [2024-12-09 04:16:42.819278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.398  [2024-12-09 04:16:42.819320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.398  [2024-12-09 04:16:42.819336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.398  [2024-12-09 04:16:42.819348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.398  [2024-12-09 04:16:42.819379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.398  qpair failed and we were unable to recover it.
00:26:14.398  [2024-12-09 04:16:42.829230] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.398  [2024-12-09 04:16:42.829330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.398  [2024-12-09 04:16:42.829358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.398  [2024-12-09 04:16:42.829379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.398  [2024-12-09 04:16:42.829394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.398  [2024-12-09 04:16:42.829425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.398  qpair failed and we were unable to recover it.
00:26:14.398  [2024-12-09 04:16:42.839228] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.398  [2024-12-09 04:16:42.839319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.398  [2024-12-09 04:16:42.839345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.398  [2024-12-09 04:16:42.839360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.398  [2024-12-09 04:16:42.839373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.398  [2024-12-09 04:16:42.839403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.398  qpair failed and we were unable to recover it.
00:26:14.398  [2024-12-09 04:16:42.849301] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.398  [2024-12-09 04:16:42.849392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.398  [2024-12-09 04:16:42.849418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.398  [2024-12-09 04:16:42.849433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.398  [2024-12-09 04:16:42.849446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.398  [2024-12-09 04:16:42.849477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.398  qpair failed and we were unable to recover it.
00:26:14.398  [2024-12-09 04:16:42.859309] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.398  [2024-12-09 04:16:42.859420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.398  [2024-12-09 04:16:42.859445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.398  [2024-12-09 04:16:42.859461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.398  [2024-12-09 04:16:42.859474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.398  [2024-12-09 04:16:42.859517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.399  qpair failed and we were unable to recover it.
00:26:14.399  [2024-12-09 04:16:42.869352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.399  [2024-12-09 04:16:42.869451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.399  [2024-12-09 04:16:42.869479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.399  [2024-12-09 04:16:42.869496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.399  [2024-12-09 04:16:42.869510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.399  [2024-12-09 04:16:42.869547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.399  qpair failed and we were unable to recover it.
00:26:14.399  [2024-12-09 04:16:42.879394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.399  [2024-12-09 04:16:42.879488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.399  [2024-12-09 04:16:42.879514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.399  [2024-12-09 04:16:42.879529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.399  [2024-12-09 04:16:42.879542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.399  [2024-12-09 04:16:42.879573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.399  qpair failed and we were unable to recover it.
00:26:14.399  [2024-12-09 04:16:42.889434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.399  [2024-12-09 04:16:42.889517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.399  [2024-12-09 04:16:42.889542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.399  [2024-12-09 04:16:42.889558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.399  [2024-12-09 04:16:42.889570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.399  [2024-12-09 04:16:42.889601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.399  qpair failed and we were unable to recover it.
00:26:14.399  [2024-12-09 04:16:42.899433] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.399  [2024-12-09 04:16:42.899565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.399  [2024-12-09 04:16:42.899590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.399  [2024-12-09 04:16:42.899605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.399  [2024-12-09 04:16:42.899618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.399  [2024-12-09 04:16:42.899648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.399  qpair failed and we were unable to recover it.
00:26:14.399  [2024-12-09 04:16:42.909441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.399  [2024-12-09 04:16:42.909530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.399  [2024-12-09 04:16:42.909556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.399  [2024-12-09 04:16:42.909572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.399  [2024-12-09 04:16:42.909585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.399  [2024-12-09 04:16:42.909615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.399  qpair failed and we were unable to recover it.
00:26:14.399  [2024-12-09 04:16:42.919486] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.399  [2024-12-09 04:16:42.919603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.399  [2024-12-09 04:16:42.919628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.399  [2024-12-09 04:16:42.919644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.399  [2024-12-09 04:16:42.919658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.399  [2024-12-09 04:16:42.919688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.399  qpair failed and we were unable to recover it.
00:26:14.399  [2024-12-09 04:16:42.929507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.399  [2024-12-09 04:16:42.929642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.399  [2024-12-09 04:16:42.929667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.399  [2024-12-09 04:16:42.929682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.399  [2024-12-09 04:16:42.929695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.399  [2024-12-09 04:16:42.929726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.399  qpair failed and we were unable to recover it.
00:26:14.399  [2024-12-09 04:16:42.939520] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.399  [2024-12-09 04:16:42.939605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.399  [2024-12-09 04:16:42.939633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.399  [2024-12-09 04:16:42.939649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.399  [2024-12-09 04:16:42.939663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.399  [2024-12-09 04:16:42.939693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.399  qpair failed and we were unable to recover it.
00:26:14.399  [2024-12-09 04:16:42.949656] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.399  [2024-12-09 04:16:42.949752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.399  [2024-12-09 04:16:42.949777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.399  [2024-12-09 04:16:42.949793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.399  [2024-12-09 04:16:42.949806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.399  [2024-12-09 04:16:42.949837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.399  qpair failed and we were unable to recover it.
00:26:14.399  [2024-12-09 04:16:42.959624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.399  [2024-12-09 04:16:42.959744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.399  [2024-12-09 04:16:42.959777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.399  [2024-12-09 04:16:42.959794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.399  [2024-12-09 04:16:42.959807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.399  [2024-12-09 04:16:42.959837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.399  qpair failed and we were unable to recover it.
00:26:14.399  [2024-12-09 04:16:42.969620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.399  [2024-12-09 04:16:42.969708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.399  [2024-12-09 04:16:42.969733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.399  [2024-12-09 04:16:42.969748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.399  [2024-12-09 04:16:42.969761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.399  [2024-12-09 04:16:42.969791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.399  qpair failed and we were unable to recover it.
00:26:14.658  [2024-12-09 04:16:42.979765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.658  [2024-12-09 04:16:42.979850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.658  [2024-12-09 04:16:42.979875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.658  [2024-12-09 04:16:42.979890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.658  [2024-12-09 04:16:42.979902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.658  [2024-12-09 04:16:42.979932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.658  qpair failed and we were unable to recover it.
00:26:14.658  [2024-12-09 04:16:42.989661] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.658  [2024-12-09 04:16:42.989789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.658  [2024-12-09 04:16:42.989816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.658  [2024-12-09 04:16:42.989833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.658  [2024-12-09 04:16:42.989847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.658  [2024-12-09 04:16:42.989879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.658  qpair failed and we were unable to recover it.
00:26:14.658  [2024-12-09 04:16:42.999757] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.658  [2024-12-09 04:16:42.999866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.658  [2024-12-09 04:16:42.999891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.658  [2024-12-09 04:16:42.999906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.658  [2024-12-09 04:16:42.999925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.658  [2024-12-09 04:16:42.999956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.658  qpair failed and we were unable to recover it.
00:26:14.658  [2024-12-09 04:16:43.009724] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.658  [2024-12-09 04:16:43.009802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.658  [2024-12-09 04:16:43.009826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.658  [2024-12-09 04:16:43.009841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.658  [2024-12-09 04:16:43.009854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.658  [2024-12-09 04:16:43.009884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.658  qpair failed and we were unable to recover it.
00:26:14.658  [2024-12-09 04:16:43.019744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.658  [2024-12-09 04:16:43.019829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.658  [2024-12-09 04:16:43.019853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.658  [2024-12-09 04:16:43.019868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.658  [2024-12-09 04:16:43.019880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.658  [2024-12-09 04:16:43.019911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.658  qpair failed and we were unable to recover it.
00:26:14.658  [2024-12-09 04:16:43.029913] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.658  [2024-12-09 04:16:43.030047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.658  [2024-12-09 04:16:43.030072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.658  [2024-12-09 04:16:43.030087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.658  [2024-12-09 04:16:43.030100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.658  [2024-12-09 04:16:43.030133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.658  qpair failed and we were unable to recover it.
00:26:14.658  [... 32 further identical connect-failure cycles elided, 2024-12-09 04:16:43.039854 through 04:16:43.350938, repeating every ~10 ms: Unknown controller ID 0x1 -> Connect command failed, rc -5 (sct 1, sc 130) -> Failed to connect tqpair=0x7fe5b4000b90 -> CQ transport error -6 (No such device or address) on qpair id 4 -> qpair failed and we were unable to recover it ...]
00:26:14.919  [2024-12-09 04:16:43.360743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.919  [2024-12-09 04:16:43.360846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.919  [2024-12-09 04:16:43.360871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.919  [2024-12-09 04:16:43.360886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.919  [2024-12-09 04:16:43.360899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.919  [2024-12-09 04:16:43.360929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.919  qpair failed and we were unable to recover it.
00:26:14.919  [2024-12-09 04:16:43.370748] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.919  [2024-12-09 04:16:43.370831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.919  [2024-12-09 04:16:43.370856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.919  [2024-12-09 04:16:43.370871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.919  [2024-12-09 04:16:43.370884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.919  [2024-12-09 04:16:43.370915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.919  qpair failed and we were unable to recover it.
00:26:14.919  [2024-12-09 04:16:43.380781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.919  [2024-12-09 04:16:43.380872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.919  [2024-12-09 04:16:43.380898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.919  [2024-12-09 04:16:43.380912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.919  [2024-12-09 04:16:43.380925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.919  [2024-12-09 04:16:43.380956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.919  qpair failed and we were unable to recover it.
00:26:14.919  [2024-12-09 04:16:43.390855] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.919  [2024-12-09 04:16:43.390968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.919  [2024-12-09 04:16:43.390993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.919  [2024-12-09 04:16:43.391014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.919  [2024-12-09 04:16:43.391029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.919  [2024-12-09 04:16:43.391060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.919  qpair failed and we were unable to recover it.
00:26:14.919  [2024-12-09 04:16:43.400830] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.919  [2024-12-09 04:16:43.400925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.919  [2024-12-09 04:16:43.400950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.919  [2024-12-09 04:16:43.400965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.919  [2024-12-09 04:16:43.400977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.919  [2024-12-09 04:16:43.401007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.919  qpair failed and we were unable to recover it.
00:26:14.919  [2024-12-09 04:16:43.410856] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.919  [2024-12-09 04:16:43.410954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.919  [2024-12-09 04:16:43.410979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.919  [2024-12-09 04:16:43.410995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.919  [2024-12-09 04:16:43.411007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.919  [2024-12-09 04:16:43.411037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.919  qpair failed and we were unable to recover it.
00:26:14.919  [2024-12-09 04:16:43.420908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.919  [2024-12-09 04:16:43.420996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.919  [2024-12-09 04:16:43.421022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.919  [2024-12-09 04:16:43.421036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.919  [2024-12-09 04:16:43.421049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.919  [2024-12-09 04:16:43.421080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.919  qpair failed and we were unable to recover it.
00:26:14.919  [2024-12-09 04:16:43.430938] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.919  [2024-12-09 04:16:43.431030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.919  [2024-12-09 04:16:43.431055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.919  [2024-12-09 04:16:43.431071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.919  [2024-12-09 04:16:43.431084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.919  [2024-12-09 04:16:43.431120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.919  qpair failed and we were unable to recover it.
00:26:14.919  [2024-12-09 04:16:43.440950] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.919  [2024-12-09 04:16:43.441037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.919  [2024-12-09 04:16:43.441063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.919  [2024-12-09 04:16:43.441078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.919  [2024-12-09 04:16:43.441091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.920  [2024-12-09 04:16:43.441121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.920  qpair failed and we were unable to recover it.
00:26:14.920  [2024-12-09 04:16:43.450986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.920  [2024-12-09 04:16:43.451075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.920  [2024-12-09 04:16:43.451100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.920  [2024-12-09 04:16:43.451115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.920  [2024-12-09 04:16:43.451128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.920  [2024-12-09 04:16:43.451159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.920  qpair failed and we were unable to recover it.
00:26:14.920  [2024-12-09 04:16:43.461009] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.920  [2024-12-09 04:16:43.461099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.920  [2024-12-09 04:16:43.461124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.920  [2024-12-09 04:16:43.461139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.920  [2024-12-09 04:16:43.461151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.920  [2024-12-09 04:16:43.461182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.920  qpair failed and we were unable to recover it.
00:26:14.920  [2024-12-09 04:16:43.471027] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.920  [2024-12-09 04:16:43.471115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.920  [2024-12-09 04:16:43.471141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.920  [2024-12-09 04:16:43.471157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.920  [2024-12-09 04:16:43.471170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.920  [2024-12-09 04:16:43.471201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.920  qpair failed and we were unable to recover it.
00:26:14.920  [2024-12-09 04:16:43.481164] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.920  [2024-12-09 04:16:43.481262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.920  [2024-12-09 04:16:43.481296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.920  [2024-12-09 04:16:43.481312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.920  [2024-12-09 04:16:43.481325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.920  [2024-12-09 04:16:43.481357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.920  qpair failed and we were unable to recover it.
00:26:14.920  [2024-12-09 04:16:43.491131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.920  [2024-12-09 04:16:43.491236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.920  [2024-12-09 04:16:43.491265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.920  [2024-12-09 04:16:43.491292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.920  [2024-12-09 04:16:43.491307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:14.920  [2024-12-09 04:16:43.491339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.920  qpair failed and we were unable to recover it.
00:26:15.178  [2024-12-09 04:16:43.501212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:15.178  [2024-12-09 04:16:43.501338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:15.178  [2024-12-09 04:16:43.501364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:15.178  [2024-12-09 04:16:43.501379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:15.178  [2024-12-09 04:16:43.501393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:15.178  [2024-12-09 04:16:43.501424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:15.178  qpair failed and we were unable to recover it.
00:26:15.178  [2024-12-09 04:16:43.511177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:15.178  [2024-12-09 04:16:43.511279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:15.178  [2024-12-09 04:16:43.511304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:15.178  [2024-12-09 04:16:43.511320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:15.178  [2024-12-09 04:16:43.511333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:15.178  [2024-12-09 04:16:43.511363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:15.178  qpair failed and we were unable to recover it.
00:26:15.178  [2024-12-09 04:16:43.521210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:15.178  [2024-12-09 04:16:43.521305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:15.178  [2024-12-09 04:16:43.521336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:15.178  [2024-12-09 04:16:43.521352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:15.178  [2024-12-09 04:16:43.521365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:15.178  [2024-12-09 04:16:43.521396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:15.178  qpair failed and we were unable to recover it.
00:26:15.178  [2024-12-09 04:16:43.531240] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:15.178  [2024-12-09 04:16:43.531344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:15.178  [2024-12-09 04:16:43.531369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:15.178  [2024-12-09 04:16:43.531385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:15.178  [2024-12-09 04:16:43.531398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:15.178  [2024-12-09 04:16:43.531429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:15.178  qpair failed and we were unable to recover it.
00:26:15.178  [2024-12-09 04:16:43.541236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:15.178  [2024-12-09 04:16:43.541324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:15.178  [2024-12-09 04:16:43.541349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:15.178  [2024-12-09 04:16:43.541363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:15.178  [2024-12-09 04:16:43.541376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:15.178  [2024-12-09 04:16:43.541407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:15.178  qpair failed and we were unable to recover it.
00:26:15.178  [2024-12-09 04:16:43.551331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:15.178  [2024-12-09 04:16:43.551425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:15.178  [2024-12-09 04:16:43.551454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:15.178  [2024-12-09 04:16:43.551470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:15.178  [2024-12-09 04:16:43.551483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:15.178  [2024-12-09 04:16:43.551520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:15.178  qpair failed and we were unable to recover it.
00:26:15.178  [2024-12-09 04:16:43.561323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:15.178  [2024-12-09 04:16:43.561449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:15.178  [2024-12-09 04:16:43.561474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:15.178  [2024-12-09 04:16:43.561489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:15.178  [2024-12-09 04:16:43.561507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:15.178  [2024-12-09 04:16:43.561540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:15.178  qpair failed and we were unable to recover it.
00:26:15.178  [2024-12-09 04:16:43.571328] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:15.178  [2024-12-09 04:16:43.571416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:15.178  [2024-12-09 04:16:43.571442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:15.178  [2024-12-09 04:16:43.571457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:15.178  [2024-12-09 04:16:43.571470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:15.178  [2024-12-09 04:16:43.571500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:15.178  qpair failed and we were unable to recover it.
00:26:15.178  [2024-12-09 04:16:43.581461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:15.178  [2024-12-09 04:16:43.581549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:15.178  [2024-12-09 04:16:43.581577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:15.178  [2024-12-09 04:16:43.581593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:15.178  [2024-12-09 04:16:43.581606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:15.178  [2024-12-09 04:16:43.581636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:15.178  qpair failed and we were unable to recover it.
00:26:15.178  [2024-12-09 04:16:43.591481] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:15.179  [2024-12-09 04:16:43.591577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:15.179  [2024-12-09 04:16:43.591601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:15.179  [2024-12-09 04:16:43.591615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:15.179  [2024-12-09 04:16:43.591628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:15.179  [2024-12-09 04:16:43.591658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:15.179  qpair failed and we were unable to recover it.
00:26:15.179  [2024-12-09 04:16:43.601413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:15.179  [2024-12-09 04:16:43.601509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:15.179  [2024-12-09 04:16:43.601533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:15.179  [2024-12-09 04:16:43.601548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:15.179  [2024-12-09 04:16:43.601560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:15.179  [2024-12-09 04:16:43.601592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:15.179  qpair failed and we were unable to recover it.
00:26:15.179  [2024-12-09 04:16:43.611443] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:15.179  [2024-12-09 04:16:43.611525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:15.179  [2024-12-09 04:16:43.611549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:15.179  [2024-12-09 04:16:43.611564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:15.179  [2024-12-09 04:16:43.611577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:15.179  [2024-12-09 04:16:43.611608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:15.179  qpair failed and we were unable to recover it.
00:26:15.179  [2024-12-09 04:16:43.621474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:15.179  [2024-12-09 04:16:43.621597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:15.179  [2024-12-09 04:16:43.621624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:15.179  [2024-12-09 04:16:43.621639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:15.179  [2024-12-09 04:16:43.621651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:15.179  [2024-12-09 04:16:43.621681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:15.179  qpair failed and we were unable to recover it.
00:26:15.179  [2024-12-09 04:16:43.631503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:15.179  [2024-12-09 04:16:43.631628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:15.179  [2024-12-09 04:16:43.631655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:15.179  [2024-12-09 04:16:43.631670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:15.179  [2024-12-09 04:16:43.631683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b4000b90
00:26:15.179  [2024-12-09 04:16:43.631714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:15.179  qpair failed and we were unable to recover it.
00:26:15.179  [2024-12-09 04:16:43.641558] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:15.179  [2024-12-09 04:16:43.641647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:15.179  [2024-12-09 04:16:43.641678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:15.179  [2024-12-09 04:16:43.641694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:15.179  [2024-12-09 04:16:43.641707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b8000b90
00:26:15.179  [2024-12-09 04:16:43.641738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:15.179  qpair failed and we were unable to recover it.
00:26:15.179  [2024-12-09 04:16:43.651563] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:15.179  [2024-12-09 04:16:43.651659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:15.179  [2024-12-09 04:16:43.651693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:15.179  [2024-12-09 04:16:43.651709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:15.179  [2024-12-09 04:16:43.651722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5b8000b90
00:26:15.179  [2024-12-09 04:16:43.651752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:15.179  qpair failed and we were unable to recover it.
00:26:15.179  [2024-12-09 04:16:43.651856] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:26:15.179  A controller has encountered a failure and is being reset.
00:26:15.437  Controller properly reset.
00:26:15.437  Initializing NVMe Controllers
00:26:15.437  Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:15.437  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:15.437  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:26:15.437  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:26:15.437  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:26:15.437  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:26:15.437  Initialization complete. Launching workers.
00:26:15.437  Starting thread on core 1
00:26:15.437  Starting thread on core 2
00:26:15.437  Starting thread on core 3
00:26:15.437  Starting thread on core 0
00:26:15.437   04:16:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:26:15.437  
00:26:15.437  real	0m10.834s
00:26:15.437  user	0m19.297s
00:26:15.437  sys	0m5.305s
00:26:15.437   04:16:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:15.437   04:16:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:15.437  ************************************
00:26:15.437  END TEST nvmf_target_disconnect_tc2
00:26:15.437  ************************************
00:26:15.437   04:16:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:26:15.437   04:16:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:26:15.437   04:16:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:26:15.437   04:16:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:15.437   04:16:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:26:15.437   04:16:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:15.437   04:16:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:26:15.437   04:16:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:15.437   04:16:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:15.437  rmmod nvme_tcp
00:26:15.437  rmmod nvme_fabrics
00:26:15.437  rmmod nvme_keyring
00:26:15.437   04:16:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:15.437   04:16:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
00:26:15.437   04:16:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
00:26:15.437   04:16:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 345755 ']'
00:26:15.437   04:16:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 345755
00:26:15.437   04:16:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 345755 ']'
00:26:15.437   04:16:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 345755
00:26:15.437    04:16:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname
00:26:15.437   04:16:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:15.437    04:16:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 345755
00:26:15.437   04:16:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4
00:26:15.437   04:16:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']'
00:26:15.437   04:16:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 345755'
00:26:15.437  killing process with pid 345755
00:26:15.437   04:16:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 345755
00:26:15.437   04:16:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 345755
00:26:15.695   04:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:15.695   04:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:26:15.695   04:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:26:15.695   04:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr
00:26:15.695   04:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save
00:26:15.695   04:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:26:15.695   04:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore
00:26:15.695   04:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:15.695   04:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:26:15.695   04:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:15.695   04:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:15.695    04:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:18.234   04:16:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:18.234  
00:26:18.234  real	0m15.676s
00:26:18.234  user	0m45.756s
00:26:18.234  sys	0m7.346s
00:26:18.234   04:16:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:18.234   04:16:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:26:18.234  ************************************
00:26:18.234  END TEST nvmf_target_disconnect
00:26:18.234  ************************************
00:26:18.234   04:16:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:26:18.234  
00:26:18.234  real	5m8.902s
00:26:18.234  user	10m54.051s
00:26:18.234  sys	1m14.523s
00:26:18.234   04:16:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:18.234   04:16:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:18.234  ************************************
00:26:18.234  END TEST nvmf_host
00:26:18.234  ************************************
00:26:18.234   04:16:46 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]]
00:26:18.234   04:16:46 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]]
00:26:18.234   04:16:46 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:26:18.234   04:16:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:26:18.234   04:16:46 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:18.234   04:16:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:26:18.234  ************************************
00:26:18.234  START TEST nvmf_target_core_interrupt_mode
00:26:18.234  ************************************
00:26:18.234   04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:26:18.234  * Looking for test storage...
00:26:18.234  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:26:18.234    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:26:18.234     04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version
00:26:18.234     04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:26:18.234    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:26:18.234    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:26:18.234    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l
00:26:18.234    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l
00:26:18.234    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-:
00:26:18.234    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1
00:26:18.234    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-:
00:26:18.234    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2
00:26:18.234    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<'
00:26:18.234    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2
00:26:18.234    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1
00:26:18.234    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:26:18.234    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in
00:26:18.234    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1
00:26:18.234    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 ))
00:26:18.234    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:26:18.234     04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1
00:26:18.234     04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1
00:26:18.234     04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:26:18.234     04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1
00:26:18.234    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1
00:26:18.234     04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2
00:26:18.234     04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2
00:26:18.234     04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:26:18.235     04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:26:18.235  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:18.235  		--rc genhtml_branch_coverage=1
00:26:18.235  		--rc genhtml_function_coverage=1
00:26:18.235  		--rc genhtml_legend=1
00:26:18.235  		--rc geninfo_all_blocks=1
00:26:18.235  		--rc geninfo_unexecuted_blocks=1
00:26:18.235  		
00:26:18.235  		'
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:26:18.235  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:18.235  		--rc genhtml_branch_coverage=1
00:26:18.235  		--rc genhtml_function_coverage=1
00:26:18.235  		--rc genhtml_legend=1
00:26:18.235  		--rc geninfo_all_blocks=1
00:26:18.235  		--rc geninfo_unexecuted_blocks=1
00:26:18.235  		
00:26:18.235  		'
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:26:18.235  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:18.235  		--rc genhtml_branch_coverage=1
00:26:18.235  		--rc genhtml_function_coverage=1
00:26:18.235  		--rc genhtml_legend=1
00:26:18.235  		--rc geninfo_all_blocks=1
00:26:18.235  		--rc geninfo_unexecuted_blocks=1
00:26:18.235  		
00:26:18.235  		'
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:26:18.235  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:18.235  		--rc genhtml_branch_coverage=1
00:26:18.235  		--rc genhtml_function_coverage=1
00:26:18.235  		--rc genhtml_legend=1
00:26:18.235  		--rc geninfo_all_blocks=1
00:26:18.235  		--rc geninfo_unexecuted_blocks=1
00:26:18.235  		
00:26:18.235  		'
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s
00:26:18.235   04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']'
00:26:18.235   04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:26:18.235     04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:18.235     04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:26:18.235     04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob
00:26:18.235     04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:18.235     04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:18.235     04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:18.235      04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:18.235      04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:18.235      04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:18.235      04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH
00:26:18.235      04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0
00:26:18.235   04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:26:18.235   04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@")
00:26:18.235   04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]]
00:26:18.235   04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode
00:26:18.235   04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:26:18.235   04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:18.235   04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:26:18.235  ************************************
00:26:18.235  START TEST nvmf_abort
00:26:18.235  ************************************
00:26:18.235   04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode
00:26:18.235  * Looking for test storage...
00:26:18.235  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:26:18.235     04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version
00:26:18.235     04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-:
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-:
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<'
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1
00:26:18.235    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 ))
00:26:18.236    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:26:18.236     04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1
00:26:18.236     04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1
00:26:18.236     04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:26:18.236     04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1
00:26:18.236    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1
00:26:18.236     04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2
00:26:18.236     04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2
00:26:18.236     04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:26:18.236     04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2
00:26:18.236    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2
00:26:18.236    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:26:18.236    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:26:18.236    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0
00:26:18.236    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:18.236    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:26:18.236  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:18.236  		--rc genhtml_branch_coverage=1
00:26:18.236  		--rc genhtml_function_coverage=1
00:26:18.236  		--rc genhtml_legend=1
00:26:18.236  		--rc geninfo_all_blocks=1
00:26:18.236  		--rc geninfo_unexecuted_blocks=1
00:26:18.236  		
00:26:18.236  		'
00:26:18.236    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:26:18.236  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:18.236  		--rc genhtml_branch_coverage=1
00:26:18.236  		--rc genhtml_function_coverage=1
00:26:18.236  		--rc genhtml_legend=1
00:26:18.236  		--rc geninfo_all_blocks=1
00:26:18.236  		--rc geninfo_unexecuted_blocks=1
00:26:18.236  		
00:26:18.236  		'
00:26:18.236    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:26:18.236  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:18.236  		--rc genhtml_branch_coverage=1
00:26:18.236  		--rc genhtml_function_coverage=1
00:26:18.236  		--rc genhtml_legend=1
00:26:18.236  		--rc geninfo_all_blocks=1
00:26:18.236  		--rc geninfo_unexecuted_blocks=1
00:26:18.236  		
00:26:18.236  		'
00:26:18.236    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:26:18.236  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:18.236  		--rc genhtml_branch_coverage=1
00:26:18.236  		--rc genhtml_function_coverage=1
00:26:18.236  		--rc genhtml_legend=1
00:26:18.236  		--rc geninfo_all_blocks=1
00:26:18.236  		--rc geninfo_unexecuted_blocks=1
00:26:18.236  		
00:26:18.236  		'
00:26:18.236   04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:26:18.236     04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s
00:26:18.236    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:18.236    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:18.236    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:18.236    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:18.236    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:18.236    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:18.236    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:18.236    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:18.236    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:18.236     04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:18.236    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:26:18.236    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:26:18.236    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:18.236    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:18.236    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:26:18.236    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:18.236    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:26:18.236     04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob
00:26:18.236     04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:18.236     04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:18.236     04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:18.236      04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:18.236      04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:18.236      04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:18.236      04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH
00:26:18.236      04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:18.236    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0
00:26:18.236    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:26:18.236    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:26:18.236    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:18.236    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:18.236    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:18.236    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:26:18.236    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:26:18.236    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:26:18.236    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:26:18.236    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0
00:26:18.236   04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64
00:26:18.236   04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096
00:26:18.236   04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit
00:26:18.236   04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:26:18.236   04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:26:18.236   04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs
00:26:18.236   04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no
00:26:18.236   04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns
00:26:18.236   04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:18.236   04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:18.236    04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:18.236   04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:26:18.236   04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:26:18.236   04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable
00:26:18.236   04:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=()
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=()
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=()
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=()
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=()
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=()
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=()
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:26:20.772  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:26:20.772  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]]
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:26:20.772  Found net devices under 0000:0a:00.0: cvl_0_0
00:26:20.772   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]]
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:26:20.773  Found net devices under 0000:0a:00.1: cvl_0_1
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:26:20.773  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:20.773  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms
00:26:20.773  
00:26:20.773  --- 10.0.0.2 ping statistics ---
00:26:20.773  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:20.773  rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:20.773  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:20.773  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms
00:26:20.773  
00:26:20.773  --- 10.0.0.1 ping statistics ---
00:26:20.773  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:20.773  rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=348571
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 348571
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 348571 ']'
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:20.773  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:20.773   04:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:26:20.773  [2024-12-09 04:16:48.966010] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:26:20.773  [2024-12-09 04:16:48.967181] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:26:20.773  [2024-12-09 04:16:48.967244] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:20.773  [2024-12-09 04:16:49.044093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:26:20.773  [2024-12-09 04:16:49.103366] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:20.773  [2024-12-09 04:16:49.103432] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:20.773  [2024-12-09 04:16:49.103454] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:20.773  [2024-12-09 04:16:49.103465] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:20.773  [2024-12-09 04:16:49.103475] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:20.773  [2024-12-09 04:16:49.104948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:26:20.773  [2024-12-09 04:16:49.104979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:26:20.773  [2024-12-09 04:16:49.104984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:20.773  [2024-12-09 04:16:49.202922] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:26:20.773  [2024-12-09 04:16:49.203109] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:26:20.773  [2024-12-09 04:16:49.203141] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:26:20.773  [2024-12-09 04:16:49.203357] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:26:20.773   04:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:20.773   04:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0
00:26:20.773   04:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:26:20.773   04:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable
00:26:20.773   04:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:26:20.773   04:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:20.773   04:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256
00:26:20.773   04:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:20.773   04:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:26:20.773  [2024-12-09 04:16:49.253733] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:20.773   04:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:20.773   04:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0
00:26:20.773   04:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:20.773   04:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:26:20.773  Malloc0
00:26:20.773   04:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:20.774   04:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:26:20.774   04:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:20.774   04:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:26:20.774  Delay0
00:26:20.774   04:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:20.774   04:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:26:20.774   04:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:20.774   04:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:26:20.774   04:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:20.774   04:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
00:26:20.774   04:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:20.774   04:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:26:20.774   04:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:20.774   04:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:26:20.774   04:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:20.774   04:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:26:20.774  [2024-12-09 04:16:49.321965] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:20.774   04:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:20.774   04:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:26:20.774   04:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:20.774   04:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:26:20.774   04:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:20.774   04:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
00:26:21.032  [2024-12-09 04:16:49.391086] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:26:22.933  Initializing NVMe Controllers
00:26:22.933  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:26:22.933  controller IO queue size 128 less than required
00:26:22.933  Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver.
00:26:22.933  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0
00:26:22.933  Initialization complete. Launching workers.
00:26:22.933  NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29235
00:26:22.933  CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29292, failed to submit 66
00:26:22.933  	 success 29235, unsuccessful 57, failed 0
00:26:22.933   04:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:26:22.933   04:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:22.933   04:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:26:22.933   04:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.933   04:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:26:22.933   04:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini
00:26:22.933   04:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:22.933   04:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync
00:26:22.933   04:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:22.933   04:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e
00:26:22.933   04:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:22.933   04:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:22.933  rmmod nvme_tcp
00:26:22.933  rmmod nvme_fabrics
00:26:22.933  rmmod nvme_keyring
00:26:23.191   04:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:23.191   04:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e
00:26:23.191   04:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0
00:26:23.191   04:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 348571 ']'
00:26:23.191   04:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 348571
00:26:23.191   04:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 348571 ']'
00:26:23.191   04:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 348571
00:26:23.191    04:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname
00:26:23.191   04:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:23.191    04:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 348571
00:26:23.191   04:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:23.191   04:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:26:23.191   04:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 348571'
00:26:23.191  killing process with pid 348571
00:26:23.191   04:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 348571
00:26:23.191   04:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 348571
00:26:23.450   04:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:23.450   04:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:26:23.450   04:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:26:23.450   04:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr
00:26:23.450   04:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save
00:26:23.450   04:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:26:23.450   04:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore
00:26:23.450   04:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:23.450   04:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns
00:26:23.450   04:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:23.450   04:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:23.450    04:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:25.356   04:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:25.356  
00:26:25.356  real	0m7.337s
00:26:25.356  user	0m9.215s
00:26:25.356  sys	0m2.891s
00:26:25.356   04:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:25.356   04:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:26:25.356  ************************************
00:26:25.356  END TEST nvmf_abort
00:26:25.356  ************************************
00:26:25.356   04:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode
00:26:25.356   04:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:26:25.356   04:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:25.356   04:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:26:25.356  ************************************
00:26:25.356  START TEST nvmf_ns_hotplug_stress
00:26:25.356  ************************************
00:26:25.356   04:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode
00:26:25.356  * Looking for test storage...
00:26:25.615  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:26:25.615    04:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:26:25.615     04:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version
00:26:25.615     04:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:26:25.615    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:26:25.615    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:26:25.615    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l
00:26:25.615    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l
00:26:25.615    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-:
00:26:25.615    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1
00:26:25.615    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-:
00:26:25.615    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2
00:26:25.615    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<'
00:26:25.615    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2
00:26:25.615    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1
00:26:25.615    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:26:25.615    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in
00:26:25.615    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1
00:26:25.615    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 ))
00:26:25.615    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:26:25.615     04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1
00:26:25.615     04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1
00:26:25.615     04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:26:25.615     04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1
00:26:25.615    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1
00:26:25.615     04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2
00:26:25.615     04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2
00:26:25.615     04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:26:25.615     04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2
00:26:25.615    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2
00:26:25.615    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:26:25.615    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:26:25.615    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0
00:26:25.615    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:25.615    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:26:25.615  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:25.615  		--rc genhtml_branch_coverage=1
00:26:25.615  		--rc genhtml_function_coverage=1
00:26:25.615  		--rc genhtml_legend=1
00:26:25.615  		--rc geninfo_all_blocks=1
00:26:25.615  		--rc geninfo_unexecuted_blocks=1
00:26:25.615  		
00:26:25.615  		'
00:26:25.615    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:26:25.615  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:25.615  		--rc genhtml_branch_coverage=1
00:26:25.615  		--rc genhtml_function_coverage=1
00:26:25.615  		--rc genhtml_legend=1
00:26:25.615  		--rc geninfo_all_blocks=1
00:26:25.615  		--rc geninfo_unexecuted_blocks=1
00:26:25.615  		
00:26:25.615  		'
00:26:25.615    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:26:25.615  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:25.616  		--rc genhtml_branch_coverage=1
00:26:25.616  		--rc genhtml_function_coverage=1
00:26:25.616  		--rc genhtml_legend=1
00:26:25.616  		--rc geninfo_all_blocks=1
00:26:25.616  		--rc geninfo_unexecuted_blocks=1
00:26:25.616  		
00:26:25.616  		'
00:26:25.616    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:26:25.616  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:25.616  		--rc genhtml_branch_coverage=1
00:26:25.616  		--rc genhtml_function_coverage=1
00:26:25.616  		--rc genhtml_legend=1
00:26:25.616  		--rc geninfo_all_blocks=1
00:26:25.616  		--rc geninfo_unexecuted_blocks=1
00:26:25.616  		
00:26:25.616  		'
00:26:25.616   04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:26:25.616     04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s
00:26:25.616    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:25.616    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:25.616    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:25.616    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:25.616    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:25.616    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:25.616    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:25.616    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:25.616    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:25.616     04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:25.616    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:26:25.616    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:26:25.616    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:25.616    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:25.616    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:26:25.616    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:25.616    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:26:25.616     04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob
00:26:25.616     04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:25.616     04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:25.616     04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:25.616      04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:25.616      04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:25.616      04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:25.616      04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH
00:26:25.616      04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:25.616    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0
00:26:25.616    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:26:25.616    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:26:25.616    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:25.616    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:25.616    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:25.616    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:26:25.616    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:26:25.616    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:26:25.616    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:26:25.616    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0
00:26:25.616   04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:26:25.616   04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit
00:26:25.616   04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:26:25.616   04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:26:25.616   04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs
00:26:25.616   04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no
00:26:25.616   04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns
00:26:25.616   04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:25.616   04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:25.616    04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:25.616   04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:26:25.616   04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:26:25.616   04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable
00:26:25.616   04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=()
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=()
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=()
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=()
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=()
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=()
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=()
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:26:28.147  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:26:28.147   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:26:28.148  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]]
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:26:28.148  Found net devices under 0000:0a:00.0: cvl_0_0
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]]
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:26:28.148  Found net devices under 0000:0a:00.1: cvl_0_1
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:26:28.148  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:28.148  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms
00:26:28.148  
00:26:28.148  --- 10.0.0.2 ping statistics ---
00:26:28.148  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:28.148  rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:28.148  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:28.148  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms
00:26:28.148  
00:26:28.148  --- 10.0.0.1 ping statistics ---
00:26:28.148  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:28.148  rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms
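The `nvmf_tcp_init` sequence above (common.sh@250-291) moves the target-side interface into a dedicated network namespace, assigns the initiator/target IPs, opens the NVMe/TCP port in iptables, and verifies connectivity in both directions with ping. A minimal dry-run sketch of that sequence, using the interface names and addresses from this log: `run()` only echoes each command, so it is safe to execute without root and without touching real interfaces.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init steps seen in the log above.
# Interface names (cvl_0_0 / cvl_0_1) and IPs are taken from this run;
# swap run() for eval "$*" (as root) to actually apply the configuration.
NVMF_TARGET_INTERFACE=cvl_0_0
NVMF_INITIATOR_INTERFACE=cvl_0_1
NVMF_FIRST_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_TARGET_NAMESPACE="${NVMF_TARGET_INTERFACE}_ns_spdk"

run() { echo "+ $*"; }   # print instead of execute

run ip -4 addr flush "$NVMF_TARGET_INTERFACE"
run ip -4 addr flush "$NVMF_INITIATOR_INTERFACE"
run ip netns add "$NVMF_TARGET_NAMESPACE"
run ip link set "$NVMF_TARGET_INTERFACE" netns "$NVMF_TARGET_NAMESPACE"
run ip addr add "$NVMF_FIRST_INITIATOR_IP/24" dev "$NVMF_INITIATOR_INTERFACE"
run ip netns exec "$NVMF_TARGET_NAMESPACE" \
    ip addr add "$NVMF_FIRST_TARGET_IP/24" dev "$NVMF_TARGET_INTERFACE"
run ip link set "$NVMF_INITIATOR_INTERFACE" up
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set "$NVMF_TARGET_INTERFACE" up
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
run iptables -I INPUT 1 -i "$NVMF_INITIATOR_INTERFACE" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$NVMF_FIRST_TARGET_IP"
run ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 "$NVMF_FIRST_INITIATOR_IP"
```

Putting the target interface in its own namespace is what forces the subsequent `nvmf_tgt` process to be launched via `ip netns exec cvl_0_0_ns_spdk ...`, as seen a few lines below.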
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=350790
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 350790
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 350790 ']'
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:28.148  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:26:28.148  [2024-12-09 04:16:56.354947] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:26:28.148  [2024-12-09 04:16:56.356045] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:26:28.148  [2024-12-09 04:16:56.356097] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:28.148  [2024-12-09 04:16:56.428814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:26:28.148  [2024-12-09 04:16:56.484058] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:28.148  [2024-12-09 04:16:56.484117] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:28.148  [2024-12-09 04:16:56.484140] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:28.148  [2024-12-09 04:16:56.484151] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:28.148  [2024-12-09 04:16:56.484161] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:28.148  [2024-12-09 04:16:56.485631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:26:28.148  [2024-12-09 04:16:56.485690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:26:28.148  [2024-12-09 04:16:56.485693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:28.148  [2024-12-09 04:16:56.571963] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:26:28.148  [2024-12-09 04:16:56.572172] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:26:28.148  [2024-12-09 04:16:56.572205] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:26:28.148  [2024-12-09 04:16:56.572452] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:26:28.148   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:26:28.407  [2024-12-09 04:16:56.906434] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:28.407   04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:26:28.972   04:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:28.972  [2024-12-09 04:16:57.538805] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:29.230   04:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:26:29.487   04:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:26:29.745  Malloc0
00:26:29.745   04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:26:30.002  Delay0
00:26:30.002   04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:30.261   04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:26:30.827  NULL1
00:26:30.827   04:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:26:31.084   04:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=351207
00:26:31.084   04:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:26:31.084   04:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 351207
00:26:31.084   04:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:31.341   04:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:31.599   04:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:26:31.599   04:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:26:31.856  true
00:26:31.856   04:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 351207
00:26:31.856   04:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:32.114   04:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:32.371   04:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:26:32.371   04:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:26:32.632  true
00:26:32.632   04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 351207
00:26:32.632   04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:33.199   04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:33.199   04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:26:33.199   04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:26:33.456  true
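Each iteration of ns_hotplug_stress.sh (lines @44-50 repeating above) checks that the perf process is still alive (`kill -0 $PERF_PID`), hot-unplugs nsid 1, re-adds the Delay0 bdev as a namespace, then grows the NULL1 null bdev by one block and resizes it, which triggers a namespace-resize event while `spdk_nvme_perf` keeps reading. A sketch of that control flow with the RPC calls stubbed out so it can run anywhere; `rpc()` and `perf_alive()` are placeholders, not the real `scripts/rpc.py` or PID check.

```shell
#!/usr/bin/env bash
# Sketch of the ns_hotplug_stress loop from the log above, with stubs:
# rpc() stands in for scripts/rpc.py, perf_alive() for `kill -0 $PERF_PID`.
rpc() { :; }                             # placeholder: real script talks to nvmf_tgt
NQN=nqn.2016-06.io.spdk:cnode1
null_size=1000                           # matches `null_size=1000` at loop start
iterations=0
perf_alive() { [ "$iterations" -lt 5 ]; }  # stub: stop after 5 passes for the demo

while perf_alive; do
    rpc nvmf_subsystem_remove_ns "$NQN" 1     # hot-unplug nsid 1
    rpc nvmf_subsystem_add_ns "$NQN" Delay0   # hot-replug the Delay0 bdev
    null_size=$((null_size + 1))              # grow NULL1 by one block...
    rpc bdev_null_resize NULL1 "$null_size"   # ...forcing a namespace resize event
    iterations=$((iterations + 1))
done
echo "final null_size=$null_size after $iterations iterations"
```

This matches the pattern visible in the log, where `null_size` advances 1001, 1002, 1003, ... on successive passes for as long as the 30-second perf run stays up.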
00:26:33.456   04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 351207
00:26:33.456   04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:34.389  Read completed with error (sct=0, sc=11)
00:26:34.389   04:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:34.389  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:26:34.389  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:26:34.647   04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:26:34.647   04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:26:34.904  true
00:26:34.904   04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 351207
00:26:34.904   04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:35.161   04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:35.419   04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:26:35.419   04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:26:35.677  true
00:26:35.677   04:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 351207
00:26:35.677   04:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:36.610   04:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:36.867   04:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:26:36.867   04:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:26:37.123  true
00:26:37.123   04:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 351207
00:26:37.123   04:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:37.379   04:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:37.636   04:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:26:37.636   04:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:26:37.893  true
00:26:37.893   04:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 351207
00:26:37.893   04:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:38.824  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:26:38.824   04:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:38.824  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:26:39.080   04:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:26:39.080   04:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:26:39.336  true
00:26:39.336   04:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 351207
00:26:39.336   04:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:39.593   04:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:39.850   04:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:26:39.850   04:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:26:40.107  true
00:26:40.107   04:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 351207
00:26:40.107   04:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:40.364   04:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:40.620   04:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:26:40.620   04:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:26:40.878  true
00:26:40.878   04:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 351207
00:26:40.878   04:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:41.811  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:26:41.811   04:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:41.811  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:26:42.070   04:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:26:42.070   04:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:26:42.328  true
00:26:42.328   04:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 351207
00:26:42.328   04:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:42.586   04:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:42.844   04:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:26:42.844   04:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:26:43.103  true
00:26:43.103   04:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 351207
00:26:43.103   04:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:43.360   04:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:43.618   04:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:26:43.618   04:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:26:43.876  true
00:26:44.135   04:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 351207
00:26:44.135   04:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:45.070   04:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:45.070  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:26:45.328   04:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:26:45.328   04:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:26:45.585  true
00:26:45.585   04:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 351207
00:26:45.586   04:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:45.843   04:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:46.101   04:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:26:46.101   04:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:26:46.359  true
00:26:46.359   04:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 351207
00:26:46.359   04:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:46.618   04:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:46.875   04:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:26:46.875   04:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:26:47.132  true
00:26:47.132   04:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 351207
00:26:47.132   04:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:48.062   04:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:48.062  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:26:48.062  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:26:48.319   04:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:26:48.319   04:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:26:48.577  true
00:26:48.577   04:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 351207
00:26:48.577   04:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:48.834   04:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:49.091   04:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:26:49.091   04:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:26:49.349  true
00:26:49.606   04:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 351207
00:26:49.606   04:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:49.863   04:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:50.118   04:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:26:50.118   04:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:26:50.374  true
00:26:50.374   04:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 351207
00:26:50.374   04:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:51.305   04:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:51.563   04:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:26:51.563   04:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:26:51.820  true
00:26:51.820   04:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 351207
00:26:51.820   04:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:52.077   04:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:52.334   04:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:26:52.334   04:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:26:52.592  true
00:26:52.592   04:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 351207
00:26:52.592   04:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:52.849   04:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:53.107   04:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:26:53.107   04:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:26:53.365  true
00:26:53.365   04:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 351207
00:26:53.365   04:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:54.298   04:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:54.556   04:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:26:54.556   04:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:26:54.813  true
00:26:54.813   04:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 351207
00:26:54.813   04:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:55.071   04:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:55.328   04:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:26:55.328   04:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:26:55.586  true
00:26:55.586   04:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 351207
00:26:55.586   04:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:55.843   04:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:56.102   04:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:26:56.102   04:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:26:56.359  true
00:26:56.359   04:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 351207
00:26:56.359   04:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:57.297  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:26:57.297   04:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:57.554  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:26:57.812   04:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:26:57.812   04:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:26:57.812  true
00:26:58.069   04:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 351207
00:26:58.069   04:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:58.326   04:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:58.583   04:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:26:58.583   04:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:26:58.840  true
00:26:58.840   04:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 351207
00:26:58.840   04:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:59.098   04:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:59.355   04:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:26:59.355   04:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:26:59.614  true
00:26:59.614   04:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 351207
00:26:59.614   04:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:00.557   04:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:00.814   04:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:27:00.814   04:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:27:01.071  true
00:27:01.071   04:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 351207
00:27:01.071   04:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:01.329  Initializing NVMe Controllers
00:27:01.329  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:01.329  Controller IO queue size 128, less than required.
00:27:01.329  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:01.329  Controller IO queue size 128, less than required.
00:27:01.329  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:01.329  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:27:01.329  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:27:01.329  Initialization complete. Launching workers.
00:27:01.329  ========================================================
00:27:01.329                                                                                                               Latency(us)
00:27:01.329  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:27:01.329  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:     426.43       0.21  111160.41    3191.10 1013416.46
00:27:01.329  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core  0:    7765.43       3.79   16433.40    2531.64  479775.08
00:27:01.329  ========================================================
00:27:01.329  Total                                                                    :    8191.87       4.00   21364.47    2531.64 1013416.46
00:27:01.329  
00:27:01.329   04:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:01.587   04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:27:01.587   04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:27:01.843  true
00:27:01.843   04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 351207
00:27:01.843  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (351207) - No such process
00:27:01.843   04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 351207
00:27:01.843   04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:02.101   04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:27:02.358   04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:27:02.358   04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:27:02.358   04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:27:02.358   04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:27:02.358   04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:27:02.615  null0
00:27:02.615   04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:27:02.615   04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:27:02.615   04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:27:03.180  null1
00:27:03.181   04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:27:03.181   04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:27:03.181   04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:27:03.438  null2
00:27:03.438   04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:27:03.438   04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:27:03.438   04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:27:03.696  null3
00:27:03.696   04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:27:03.696   04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:27:03.696   04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:27:03.954  null4
00:27:03.954   04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:27:03.954   04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:27:03.954   04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:27:04.212  null5
00:27:04.212   04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:27:04.212   04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:27:04.212   04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:27:04.470  null6
00:27:04.470   04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:27:04.470   04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:27:04.470   04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:27:04.729  null7
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:27:04.729   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:04.730   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:27:04.730   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:27:04.730   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:27:04.730   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:27:04.730   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:27:04.730   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:27:04.730   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:27:04.730   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 355222 355223 355225 355227 355229 355231 355233 355235
00:27:04.730   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:04.730   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:27:04.988   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:04.988   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:27:04.988   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:27:04.988   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:27:04.988   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:27:04.988   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:27:04.988   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:27:04.988   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:27:05.246   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:05.246   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:05.246   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:27:05.246   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:05.246   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:05.246   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:27:05.246   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:05.246   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:05.246   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:27:05.246   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:05.246   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:05.246   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:27:05.246   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:05.246   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:05.246   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:27:05.246   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:05.246   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:05.246   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:27:05.246   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:05.246   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:05.246   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:27:05.246   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:05.246   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:05.246   04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:27:05.505   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:27:05.505   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:27:05.505   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:05.505   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:27:05.505   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:27:05.505   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:27:05.505   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:27:05.505   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:27:05.763   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:05.763   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:05.763   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:27:05.763   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:05.763   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:05.763   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:27:05.763   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:05.763   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:05.764   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:27:05.764   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:05.764   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:05.764   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:27:06.022   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:06.022   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:06.022   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:27:06.022   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:06.022   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:06.022   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:27:06.022   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:06.022   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:06.022   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:27:06.022   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:06.022   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:06.022   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:27:06.279   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:27:06.279   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:06.279   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:27:06.279   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:27:06.279   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:27:06.279   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:27:06.279   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:27:06.279   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:27:06.538   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:06.538   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:06.538   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:27:06.538   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:06.538   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:06.538   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:27:06.538   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:06.538   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:06.538   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:27:06.538   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:06.538   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:06.538   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:06.538   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:06.538   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:27:06.538   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:27:06.538   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:06.538   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:06.538   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:27:06.538   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:06.538   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:06.538   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:27:06.538   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:06.538   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:06.538   04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:27:06.796   04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:27:06.796   04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:27:06.796   04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:27:06.796   04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:27:06.796   04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:27:06.796   04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:06.796   04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:27:06.796   04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:27:07.054   04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:07.054   04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:07.054   04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:27:07.054   04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:07.054   04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:07.054   04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:27:07.054   04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:07.054   04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:07.054   04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:27:07.054   04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:07.054   04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:07.054   04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:27:07.054   04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:07.054   04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:07.054   04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:27:07.054   04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:07.054   04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:07.054   04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:27:07.054   04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:07.054   04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:07.054   04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:27:07.054   04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:07.054   04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:07.054   04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:27:07.312   04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:27:07.312   04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:07.312   04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:27:07.312   04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:27:07.312   04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:27:07.312   04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:27:07.312   04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:27:07.312   04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:27:07.569   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:07.569   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:07.569   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:27:07.569   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:07.569   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:07.569   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:27:07.569   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:07.569   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:07.569   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:27:07.569   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:07.569   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:07.569   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:27:07.827   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:07.827   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:07.827   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:07.827   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:27:07.827   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:07.827   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:27:07.827   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:07.827   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:07.827   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:27:07.827   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:07.827   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:07.827   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:27:08.084   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:27:08.084   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:27:08.084   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:27:08.084   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:27:08.084   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:27:08.084   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:27:08.084   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:08.084   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:27:08.358   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:08.358   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:08.358   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:27:08.358   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:08.358   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:08.358   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:27:08.358   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:08.358   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:08.358   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:27:08.359   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:08.359   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:08.359   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:27:08.359   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:08.359   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:08.359   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:27:08.359   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:08.359   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:08.359   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:27:08.359   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:08.359   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:08.359   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:27:08.359   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:08.359   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:08.359   04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:27:08.616   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:27:08.616   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:27:08.616   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:27:08.616   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:27:08.616   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:27:08.616   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:27:08.616   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:08.616   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:27:08.875   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:08.875   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:08.875   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:27:08.875   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:08.875   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:08.875   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:27:08.875   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:08.875   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:08.875   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:27:08.875   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:08.875   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:08.875   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:27:08.875   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:08.875   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:08.875   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:27:08.875   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:08.875   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:08.875   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:27:08.875   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:08.875   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:08.875   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:27:08.875   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:08.875   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:08.875   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:27:09.133   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:27:09.133   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:27:09.133   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:27:09.133   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:27:09.133   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:27:09.133   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:09.133   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:27:09.133   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:27:09.391   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:09.391   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:09.391   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:27:09.391   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:09.391   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:09.391   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:27:09.391   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:09.391   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:09.391   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:27:09.391   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:09.391   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:09.391   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:27:09.391   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:09.391   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:09.391   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:27:09.391   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:09.391   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:09.391   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:27:09.391   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:09.391   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:09.391   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:27:09.391   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:09.391   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:09.391   04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:27:09.649   04:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:27:09.649   04:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:27:09.649   04:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:27:09.649   04:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:27:09.907   04:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:27:09.907   04:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:09.907   04:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:27:09.907   04:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:27:10.165   04:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:10.165   04:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:10.165   04:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:27:10.165   04:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:10.165   04:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:10.165   04:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:27:10.165   04:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:10.165   04:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:10.165   04:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:27:10.165   04:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:10.165   04:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:10.165   04:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:27:10.165   04:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:10.165   04:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:10.165   04:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:27:10.165   04:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:10.165   04:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:10.165   04:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:27:10.165   04:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:10.165   04:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:10.165   04:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:27:10.165   04:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:10.165   04:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:10.165   04:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:27:10.423   04:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:27:10.423   04:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:27:10.423   04:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:27:10.423   04:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:27:10.423   04:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:27:10.423   04:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:10.423   04:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:27:10.423   04:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:27:10.681   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:10.681   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:10.681   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:10.681   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:10.681   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:10.681   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:10.681   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:10.681   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:10.681   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:10.681   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:10.681   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:10.681   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:10.681   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:10.681   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:10.681   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:10.681   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:10.681   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:27:10.681   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:27:10.681   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:10.681   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:27:10.681   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:10.681   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:27:10.681   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:10.681   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:10.681  rmmod nvme_tcp
00:27:10.681  rmmod nvme_fabrics
00:27:10.681  rmmod nvme_keyring
00:27:10.681   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:10.681   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:27:10.681   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:27:10.681   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 350790 ']'
00:27:10.681   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 350790
00:27:10.681   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 350790 ']'
00:27:10.681   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 350790
00:27:10.681    04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:27:10.681   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:10.681    04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 350790
00:27:10.681   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:27:10.681   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:27:10.681   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 350790'
00:27:10.681  killing process with pid 350790
00:27:10.681   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 350790
00:27:10.681   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 350790
00:27:10.940   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:27:10.940   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:27:10.940   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:27:10.940   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:27:10.940   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:27:10.940   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:27:10.940   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:27:10.940   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:10.940   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:27:10.940   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:10.940   04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:10.940    04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:13.471   04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:27:13.471  
00:27:13.471  real	0m47.617s
00:27:13.471  user	3m18.701s
00:27:13.471  sys	0m22.246s
00:27:13.471   04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:13.471   04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:27:13.471  ************************************
00:27:13.471  END TEST nvmf_ns_hotplug_stress
00:27:13.471  ************************************
00:27:13.471   04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:27:13.471   04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:27:13.471   04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:13.471   04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:27:13.471  ************************************
00:27:13.471  START TEST nvmf_delete_subsystem
00:27:13.471  ************************************
00:27:13.471   04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:27:13.471  * Looking for test storage...
00:27:13.471  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:27:13.471    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:27:13.471     04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version
00:27:13.471     04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:27:13.471    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:27:13.471    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:27:13.471    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:27:13.471    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:27:13.471    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:27:13.471    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:27:13.471    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:27:13.471    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:27:13.471    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:27:13.471    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:27:13.471    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:27:13.471    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:27:13.471    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:27:13.471    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:27:13.471    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:27:13.471    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:27:13.471     04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:27:13.471     04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:27:13.471     04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:27:13.471     04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:27:13.471    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:27:13.471     04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:27:13.471     04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:27:13.471     04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:27:13.471     04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:27:13.471    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:27:13.471    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:27:13.471    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:27:13.471    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
00:27:13.471    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:27:13.471    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:27:13.471  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:13.471  		--rc genhtml_branch_coverage=1
00:27:13.471  		--rc genhtml_function_coverage=1
00:27:13.471  		--rc genhtml_legend=1
00:27:13.471  		--rc geninfo_all_blocks=1
00:27:13.471  		--rc geninfo_unexecuted_blocks=1
00:27:13.471  		
00:27:13.471  		'
00:27:13.471    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:27:13.471  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:13.471  		--rc genhtml_branch_coverage=1
00:27:13.471  		--rc genhtml_function_coverage=1
00:27:13.471  		--rc genhtml_legend=1
00:27:13.471  		--rc geninfo_all_blocks=1
00:27:13.471  		--rc geninfo_unexecuted_blocks=1
00:27:13.471  		
00:27:13.471  		'
00:27:13.471    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:27:13.471  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:13.471  		--rc genhtml_branch_coverage=1
00:27:13.471  		--rc genhtml_function_coverage=1
00:27:13.471  		--rc genhtml_legend=1
00:27:13.471  		--rc geninfo_all_blocks=1
00:27:13.471  		--rc geninfo_unexecuted_blocks=1
00:27:13.471  		
00:27:13.471  		'
00:27:13.471    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:27:13.471  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:13.471  		--rc genhtml_branch_coverage=1
00:27:13.471  		--rc genhtml_function_coverage=1
00:27:13.471  		--rc genhtml_legend=1
00:27:13.471  		--rc geninfo_all_blocks=1
00:27:13.471  		--rc geninfo_unexecuted_blocks=1
00:27:13.471  		
00:27:13.471  		'
00:27:13.471   04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:27:13.471     04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:27:13.471    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:27:13.471    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:27:13.471    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:27:13.471    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:27:13.471    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:27:13.471    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:27:13.471    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:27:13.471    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:27:13.472    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:27:13.472     04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:27:13.472    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:27:13.472    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:27:13.472    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:27:13.472    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:27:13.472    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:27:13.472    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:27:13.472    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:27:13.472     04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
00:27:13.472     04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:13.472     04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:13.472     04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:13.472      04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:13.472      04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:13.472      04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:13.472      04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH
00:27:13.472      04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:13.472    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0
00:27:13.472    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:27:13.472    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:27:13.472    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:27:13.472    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:27:13.472    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:27:13.472    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:27:13.472    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:27:13.472    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:27:13.472    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:27:13.472    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0
00:27:13.472   04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit
00:27:13.472   04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:27:13.472   04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:27:13.472   04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs
00:27:13.472   04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no
00:27:13.472   04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns
00:27:13.472   04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:13.472   04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:13.472    04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:13.472   04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:27:13.472   04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:27:13.472   04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable
00:27:13.472   04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:27:15.478   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:27:15.478   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=()
00:27:15.478   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs
00:27:15.478   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=()
00:27:15.478   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:27:15.478   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=()
00:27:15.478   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers
00:27:15.478   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=()
00:27:15.478   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs
00:27:15.478   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=()
00:27:15.478   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810
00:27:15.478   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=()
00:27:15.478   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722
00:27:15.478   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=()
00:27:15.478   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx
00:27:15.478   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:27:15.478   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:27:15.478   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:27:15.478   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:27:15.478   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:27:15.478   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:27:15.478   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:27:15.478   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:27:15.478   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:27:15.478   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:27:15.478   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:27:15.478   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:27:15.478   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:27:15.478   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:27:15.478   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:27:15.478   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:27:15.478   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:27:15.478   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:27:15.478   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:27:15.479  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:27:15.479  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]]
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:27:15.479  Found net devices under 0000:0a:00.0: cvl_0_0
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]]
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:27:15.479  Found net devices under 0000:0a:00.1: cvl_0_1
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:27:15.479  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:15.479  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms
00:27:15.479  
00:27:15.479  --- 10.0.0.2 ping statistics ---
00:27:15.479  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:15.479  rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:15.479  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:15.479  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms
00:27:15.479  
00:27:15.479  --- 10.0.0.1 ping statistics ---
00:27:15.479  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:15.479  rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms
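The namespace bring-up traced above by nvmf/common.sh (netns creation through the bidirectional ping check) can be sketched as a standalone script. Interface names, addresses, and the port-4420 iptables rule are copied from the log; the `run` wrapper is an addition here so the sketch echoes each command and tolerates failure, making it safe to dry-run without root.

```shell
#!/usr/bin/env bash
# Sketch of the veth/namespace bring-up performed by nvmf/common.sh above.
# Names and addresses come from the log; run() is our dry-run-friendly
# wrapper (echoes the command, suppresses errors when not run as root).
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   # moved into the namespace; hosts the SPDK target
INI_IF=cvl_0_1   # stays in the root namespace; hosts the initiator
run() { echo "+ $*"; "$@" 2>/dev/null || true; }

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port toward the target, then verify connectivity in
# both directions, as the harness does with ping -c 1.
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 -W 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 -W 1 10.0.0.1
```

Putting the target's interface in its own namespace is what forces the initiator traffic onto a real TCP path instead of loopback shortcuts.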
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=357989
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 357989
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 357989 ']'
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:15.479  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:15.479   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:15.480   04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:27:15.480  [2024-12-09 04:17:43.903829] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:27:15.480  [2024-12-09 04:17:43.904953] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:27:15.480  [2024-12-09 04:17:43.905027] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:15.480  [2024-12-09 04:17:43.980375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:27:15.800  [2024-12-09 04:17:44.040198] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:15.800  [2024-12-09 04:17:44.040281] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:15.800  [2024-12-09 04:17:44.040300] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:15.800  [2024-12-09 04:17:44.040312] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:15.800  [2024-12-09 04:17:44.040322] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:15.800  [2024-12-09 04:17:44.045294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:15.800  [2024-12-09 04:17:44.045305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:15.800  [2024-12-09 04:17:44.140530] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:27:15.800  [2024-12-09 04:17:44.140550] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:27:15.800  [2024-12-09 04:17:44.140793] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:27:15.800   04:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:15.800   04:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0
00:27:15.800   04:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:27:15.800   04:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:15.800   04:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:27:15.800   04:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:15.800   04:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:27:15.800   04:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:15.800   04:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:27:15.800  [2024-12-09 04:17:44.189995] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:15.800   04:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:15.800   04:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:27:15.800   04:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:15.800   04:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:27:15.800   04:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:15.800   04:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:15.800   04:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:15.800   04:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:27:15.800  [2024-12-09 04:17:44.210233] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:15.800   04:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:15.800   04:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:27:15.800   04:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:15.800   04:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:27:15.800  NULL1
00:27:15.800   04:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:15.800   04:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:27:15.800   04:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:15.800   04:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:27:15.800  Delay0
00:27:15.800   04:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:15.800   04:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:15.800   04:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:15.800   04:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:27:15.800   04:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
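The target configuration that delete_subsystem.sh drives through rpc_cmd above corresponds to the following scripts/rpc.py sequence. The NQN, serial number, null-bdev geometry, and delay parameters are copied from the log; the `rpc.py` path is an assumption (it lives under scripts/ in an SPDK checkout), and the sketch only prints the commands rather than executing them.

```shell
# Mirror of delete_subsystem.sh's setup as explicit rpc.py calls (printed,
# not executed). RPC path is assumed; parameters are from the log above.
# The bdev_delay layer (1,000,000 us = 1 s per I/O) is what keeps I/O in
# flight when nvmf_delete_subsystem fires later in the test.
RPC="scripts/rpc.py"
NQN="nqn.2016-06.io.spdk:cnode1"
setup_cmds=$(cat <<EOF
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512
$RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns $NQN Delay0
EOF
)
printf '%s\n' "$setup_cmds"
```

Stacking Delay0 on top of NULL1 means every namespace I/O sits for a second before completing, which is why the perf run below still has queued commands when the subsystem is deleted.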
00:27:15.800   04:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=358131
00:27:15.800   04:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
00:27:15.800   04:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
00:27:15.800  [2024-12-09 04:17:44.288014] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem.  This behavior is deprecated and will be removed in a future release.
00:27:17.782   04:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:17.782   04:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:17.782   04:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:27:18.040  Read completed with error (sct=0, sc=8)
00:27:18.040  Read completed with error (sct=0, sc=8)
00:27:18.040  Read completed with error (sct=0, sc=8)
00:27:18.040  starting I/O failed: -6
[... 40 further "Read/Write completed with error (sct=0, sc=8)" lines, interleaved with 10 more "starting I/O failed: -6" markers, elided ...]
00:27:18.040  [2024-12-09 04:17:46.457401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb0d0000c40 is same with the state(6) to be set
[... 94 "Read/Write completed with error (sct=0, sc=8)" lines and 11 "starting I/O failed: -6" markers elided ...]
00:27:18.041  [2024-12-09 04:17:46.458093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c0860 is same with the state(6) to be set
[... 52 "Read/Write completed with error (sct=0, sc=8)" lines elided ...]
00:27:18.974  [2024-12-09 04:17:47.423916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c19b0 is same with the state(6) to be set
[... 21 "Read/Write completed with error (sct=0, sc=8)" lines elided ...]
00:27:18.975  [2024-12-09 04:17:47.460181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb0d000d7e0 is same with the state(6) to be set
[... 25 "Read/Write completed with error (sct=0, sc=8)" lines elided ...]
00:27:18.975  [2024-12-09 04:17:47.460413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c02c0 is same with the state(6) to be set
[... 26 "Read/Write completed with error (sct=0, sc=8)" lines elided ...]
00:27:18.975  [2024-12-09 04:17:47.461339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c0680 is same with the state(6) to be set
[... 21 "Read/Write completed with error (sct=0, sc=8)" lines elided ...]
00:27:18.975  [2024-12-09 04:17:47.461500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb0d000d020 is same with the state(6) to be set
00:27:18.975  Initializing NVMe Controllers
00:27:18.975  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:18.975  Controller IO queue size 128, less than required.
00:27:18.975  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:18.975  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:27:18.975  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:27:18.975  Initialization complete. Launching workers.
00:27:18.975  ========================================================
00:27:18.975                                                                                                               Latency(us)
00:27:18.975  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:27:18.975  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:     161.27       0.08  950662.46     410.35 2002900.57
00:27:18.975  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:     163.75       0.08  908894.02     599.70 1012446.29
00:27:18.975  ========================================================
00:27:18.975  Total                                                                    :     325.03       0.16  929618.82     410.35 2002900.57
00:27:18.975  
00:27:18.975  [2024-12-09 04:17:47.462597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11c19b0 (9): Bad file descriptor
00:27:18.975   04:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:18.975  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:27:18.975   04:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:27:18.975   04:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 358131
00:27:18.975   04:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:27:19.541   04:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:27:19.541   04:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 358131
00:27:19.541  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (358131) - No such process
00:27:19.541   04:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 358131
00:27:19.541   04:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:27:19.541   04:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 358131
00:27:19.542   04:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:27:19.542   04:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:19.542    04:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:27:19.542   04:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:19.542   04:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 358131
00:27:19.542   04:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:27:19.542   04:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:27:19.542   04:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:27:19.542   04:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:27:19.542   04:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:27:19.542   04:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:19.542   04:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:27:19.542   04:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:19.542   04:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:19.542   04:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:19.542   04:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:27:19.542  [2024-12-09 04:17:47.982181] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:19.542   04:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:19.542   04:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:19.542   04:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:19.542   04:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:27:19.542   04:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:19.542   04:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=358544
00:27:19.542   04:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:27:19.542   04:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 358544
00:27:19.542   04:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:27:19.542   04:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:27:19.542  [2024-12-09 04:17:48.046772] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem.  This behavior is deprecated and will be removed in a future release.
00:27:20.107   04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:27:20.108   04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 358544
00:27:20.108   04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:27:20.673   04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:27:20.673   04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 358544
00:27:20.673   04:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:27:20.930   04:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:27:20.930   04:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 358544
00:27:20.930   04:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:27:21.497   04:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:27:21.497   04:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 358544
00:27:21.497   04:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:27:22.060   04:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:27:22.060   04:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 358544
00:27:22.060   04:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:27:22.624   04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:27:22.624   04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 358544
00:27:22.624   04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:27:22.882  Initializing NVMe Controllers
00:27:22.882  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:22.882  Controller IO queue size 128, less than required.
00:27:22.882  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:22.882  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:27:22.882  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:27:22.882  Initialization complete. Launching workers.
00:27:22.882  ========================================================
00:27:22.882                                                                                                               Latency(us)
00:27:22.882  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:27:22.882  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:     128.00       0.06 1003378.96 1000172.76 1041103.06
00:27:22.882  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:     128.00       0.06 1006492.94 1000265.66 1043435.22
00:27:22.882  ========================================================
00:27:22.882  Total                                                                    :     256.00       0.12 1004935.95 1000172.76 1043435.22
00:27:22.882  
00:27:23.139   04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:27:23.139   04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 358544
00:27:23.139  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (358544) - No such process
00:27:23.139   04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 358544
00:27:23.139   04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:27:23.139   04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:27:23.139   04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:23.139   04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:27:23.139   04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:23.139   04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:27:23.139   04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:23.139   04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:23.139  rmmod nvme_tcp
00:27:23.139  rmmod nvme_fabrics
00:27:23.139  rmmod nvme_keyring
00:27:23.139   04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:23.139   04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:27:23.139   04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:27:23.139   04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 357989 ']'
00:27:23.139   04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 357989
00:27:23.139   04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 357989 ']'
00:27:23.139   04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 357989
00:27:23.139    04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:27:23.139   04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:23.139    04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 357989
00:27:23.139   04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:23.139   04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:23.139   04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 357989'
00:27:23.139  killing process with pid 357989
00:27:23.139   04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 357989
00:27:23.139   04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 357989
00:27:23.399   04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:27:23.399   04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:27:23.399   04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:27:23.399   04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr
00:27:23.399   04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save
00:27:23.399   04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:27:23.399   04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore
00:27:23.399   04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:23.399   04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns
00:27:23.399   04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:23.399   04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:23.399    04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:25.301   04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:27:25.559  
00:27:25.559  real	0m12.334s
00:27:25.559  user	0m24.999s
00:27:25.559  sys	0m3.631s
00:27:25.559   04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:25.559   04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:27:25.559  ************************************
00:27:25.559  END TEST nvmf_delete_subsystem
00:27:25.559  ************************************
00:27:25.559   04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode
00:27:25.559   04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:27:25.559   04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:25.559   04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:27:25.559  ************************************
00:27:25.559  START TEST nvmf_host_management
00:27:25.559  ************************************
00:27:25.559   04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode
00:27:25.559  * Looking for test storage...
00:27:25.559  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:27:25.559    04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:27:25.559     04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version
00:27:25.559     04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:27:25.559    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:27:25.559    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:27:25.559    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l
00:27:25.559    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-:
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-:
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<'
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 ))
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:27:25.560     04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1
00:27:25.560     04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1
00:27:25.560     04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:27:25.560     04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1
00:27:25.560     04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2
00:27:25.560     04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2
00:27:25.560     04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:27:25.560     04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:27:25.560  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:25.560  		--rc genhtml_branch_coverage=1
00:27:25.560  		--rc genhtml_function_coverage=1
00:27:25.560  		--rc genhtml_legend=1
00:27:25.560  		--rc geninfo_all_blocks=1
00:27:25.560  		--rc geninfo_unexecuted_blocks=1
00:27:25.560  		
00:27:25.560  		'
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:27:25.560  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:25.560  		--rc genhtml_branch_coverage=1
00:27:25.560  		--rc genhtml_function_coverage=1
00:27:25.560  		--rc genhtml_legend=1
00:27:25.560  		--rc geninfo_all_blocks=1
00:27:25.560  		--rc geninfo_unexecuted_blocks=1
00:27:25.560  		
00:27:25.560  		'
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:27:25.560  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:25.560  		--rc genhtml_branch_coverage=1
00:27:25.560  		--rc genhtml_function_coverage=1
00:27:25.560  		--rc genhtml_legend=1
00:27:25.560  		--rc geninfo_all_blocks=1
00:27:25.560  		--rc geninfo_unexecuted_blocks=1
00:27:25.560  		
00:27:25.560  		'
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:27:25.560  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:25.560  		--rc genhtml_branch_coverage=1
00:27:25.560  		--rc genhtml_function_coverage=1
00:27:25.560  		--rc genhtml_legend=1
00:27:25.560  		--rc geninfo_all_blocks=1
00:27:25.560  		--rc geninfo_unexecuted_blocks=1
00:27:25.560  		
00:27:25.560  		'
00:27:25.560   04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:27:25.560     04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:27:25.560     04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:27:25.560     04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob
00:27:25.560     04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:25.560     04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:25.560     04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:25.560      04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:25.560      04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:25.560      04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:25.560      04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:27:25.560      04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:27:25.560    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0
00:27:25.561   04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64
00:27:25.561   04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:27:25.561   04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit
00:27:25.561   04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:27:25.561   04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:27:25.561   04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs
00:27:25.561   04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no
00:27:25.561   04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns
00:27:25.561   04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:25.561   04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:25.561    04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:25.561   04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:27:25.561   04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:27:25.561   04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable
00:27:25.561   04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:27:28.089   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=()
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=()
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=()
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=()
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=()
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=()
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=()
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:27:28.090  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:27:28.090  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]]
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:27:28.090  Found net devices under 0000:0a:00.0: cvl_0_0
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]]
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:27:28.090  Found net devices under 0000:0a:00.1: cvl_0_1
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:27:28.090   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:27:28.091   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:27:28.091   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:27:28.091   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:27:28.091  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:28.091  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.398 ms
00:27:28.091  
00:27:28.091  --- 10.0.0.2 ping statistics ---
00:27:28.091  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:28.091  rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms
00:27:28.091   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:28.091  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:28.091  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms
00:27:28.091  
00:27:28.091  --- 10.0.0.1 ping statistics ---
00:27:28.091  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:28.091  rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms
00:27:28.091   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:28.091   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0
00:27:28.091   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:27:28.091   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:28.091   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:27:28.091   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:27:28.091   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:28.091   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:27:28.091   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:27:28.091   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management
00:27:28.091   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget
00:27:28.091   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E
00:27:28.091   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:28.091   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:28.091   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:27:28.091   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=360893
00:27:28.091   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E
00:27:28.091   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 360893
00:27:28.091   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 360893 ']'
00:27:28.091   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:28.091   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:28.091   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:28.091  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:28.091   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:28.091   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:27:28.091  [2024-12-09 04:17:56.456677] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:27:28.091  [2024-12-09 04:17:56.457860] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:27:28.091  [2024-12-09 04:17:56.457925] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:28.091  [2024-12-09 04:17:56.537129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:27:28.091  [2024-12-09 04:17:56.595891] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:28.091  [2024-12-09 04:17:56.595938] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:28.091  [2024-12-09 04:17:56.595961] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:28.091  [2024-12-09 04:17:56.595972] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:28.091  [2024-12-09 04:17:56.595981] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:28.091  [2024-12-09 04:17:56.597605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:27:28.091  [2024-12-09 04:17:56.597728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:27:28.091  [2024-12-09 04:17:56.597785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:27:28.091  [2024-12-09 04:17:56.597788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:28.348  [2024-12-09 04:17:56.687940] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:27:28.348  [2024-12-09 04:17:56.688141] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:27:28.348  [2024-12-09 04:17:56.688443] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:27:28.348  [2024-12-09 04:17:56.689023] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:27:28.348  [2024-12-09 04:17:56.689230] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:27:28.348   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:28.348   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0
00:27:28.348   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:27:28.348   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:28.348   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:27:28.348   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:28.348   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:27:28.348   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:28.348   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:27:28.348  [2024-12-09 04:17:56.734440] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:28.348   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:28.349   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem
00:27:28.349   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:28.349   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:27:28.349   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:27:28.349   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat
00:27:28.349   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd
00:27:28.349   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:28.349   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:27:28.349  Malloc0
00:27:28.349  [2024-12-09 04:17:56.810681] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:28.349   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:28.349   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems
00:27:28.349   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:28.349   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:27:28.349   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=361052
00:27:28.349   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 361052 /var/tmp/bdevperf.sock
00:27:28.349   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 361052 ']'
00:27:28.349   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:27:28.349   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:27:28.349    04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0
00:27:28.349   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:28.349    04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:27:28.349   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:27:28.349  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:27:28.349    04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:27:28.349   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:28.349   04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:27:28.349    04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:27:28.349    04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:27:28.349  {
00:27:28.349    "params": {
00:27:28.349      "name": "Nvme$subsystem",
00:27:28.349      "trtype": "$TEST_TRANSPORT",
00:27:28.349      "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:28.349      "adrfam": "ipv4",
00:27:28.349      "trsvcid": "$NVMF_PORT",
00:27:28.349      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:28.349      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:28.349      "hdgst": ${hdgst:-false},
00:27:28.349      "ddgst": ${ddgst:-false}
00:27:28.349    },
00:27:28.349    "method": "bdev_nvme_attach_controller"
00:27:28.349  }
00:27:28.349  EOF
00:27:28.349  )")
00:27:28.349     04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:27:28.349    04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:27:28.349     04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:27:28.349     04:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:27:28.349    "params": {
00:27:28.349      "name": "Nvme0",
00:27:28.349      "trtype": "tcp",
00:27:28.349      "traddr": "10.0.0.2",
00:27:28.349      "adrfam": "ipv4",
00:27:28.349      "trsvcid": "4420",
00:27:28.349      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:27:28.349      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:27:28.349      "hdgst": false,
00:27:28.349      "ddgst": false
00:27:28.349    },
00:27:28.349    "method": "bdev_nvme_attach_controller"
00:27:28.349  }'
00:27:28.349  [2024-12-09 04:17:56.892054] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:27:28.349  [2024-12-09 04:17:56.892142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid361052 ]
00:27:28.606  [2024-12-09 04:17:56.961496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:28.606  [2024-12-09 04:17:57.020540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:28.864  Running I/O for 10 seconds...
00:27:28.864   04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:28.864   04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0
00:27:28.864   04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:27:28.864   04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:28.864   04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:27:28.864   04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:28.864   04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:27:28.864   04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:27:28.864   04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:27:28.864   04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:27:28.864   04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1
00:27:28.864   04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i
00:27:28.864   04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 ))
00:27:28.864   04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:27:28.864    04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:27:28.864    04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:27:28.864    04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:28.864    04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:27:28.864    04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:28.864   04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67
00:27:28.864   04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']'
00:27:28.864   04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25
00:27:29.123   04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- ))
00:27:29.123   04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:27:29.123    04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:27:29.123    04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:27:29.123    04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:29.123    04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:27:29.123    04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:29.123   04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579
00:27:29.123   04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']'
00:27:29.123   04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:27:29.123   04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break
00:27:29.123   04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0
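The loop traced above (host_management.sh @52-@64) can be sketched as a small self-contained script. This is a minimal, hedged reconstruction: `read_io_count` below is a stand-in stub, whereas the real script runs `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops'` against bdevperf's RPC socket.

```shell
# Sketch of the waitforio polling loop: poll until the bdev has
# completed at least 100 reads, retrying up to 10 times, 0.25 s apart.
FAKE_OPS=0
read_io_count() {
  # Stub standing in for the rpc_cmd | jq pipeline shown in the trace;
  # pretends ~500 reads complete per poll.
  echo $(( FAKE_OPS + 500 ))
}

waitforio() {
  local ret=1 i count
  for (( i = 10; i != 0; i-- )); do
    count=$(read_io_count)
    if [ "$count" -ge 100 ]; then
      ret=0        # enough I/O has flowed; stop polling
      break
    fi
    sleep 0.25
  done
  return $ret
}

waitforio && echo "io flowing"
```

In the trace, the first poll sees only 67 reads (below the 100 threshold), so the script sleeps 0.25 s and the second poll at 579 reads breaks out with `ret=0`.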
00:27:29.123   04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:27:29.123   04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:29.123   04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:27:29.123   04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:29.123   04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:27:29.123   04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:29.123   04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:27:29.123   04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:29.123   04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:27:29.123  [2024-12-09 04:17:57.607960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:29.123  [2024-12-09 04:17:57.608008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.123  [2024-12-09 04:17:57.608027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:29.123  [2024-12-09 04:17:57.608053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.123  [2024-12-09 04:17:57.608069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:29.123  [2024-12-09 04:17:57.608082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.123  [2024-12-09 04:17:57.608097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:29.123  [2024-12-09 04:17:57.608111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.123  [2024-12-09 04:17:57.608124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cb660 is same with the state(6) to be set
00:27:29.123  [2024-12-09 04:17:57.608230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.124  [2024-12-09 04:17:57.608253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.124  [2024-12-09 04:17:57.608289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.124  [2024-12-09 04:17:57.608307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.124  [2024-12-09 04:17:57.608334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.124  [2024-12-09 04:17:57.608349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.124  [2024-12-09 04:17:57.608365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.124  [2024-12-09 04:17:57.608380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.124  [2024-12-09 04:17:57.608396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.124  [2024-12-09 04:17:57.608410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.124  [2024-12-09 04:17:57.608425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.124  [2024-12-09 04:17:57.608440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.124  [2024-12-09 04:17:57.608455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.124  [2024-12-09 04:17:57.608469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.124  [2024-12-09 04:17:57.608484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.124  [2024-12-09 04:17:57.608498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.124  [2024-12-09 04:17:57.608514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.124  [2024-12-09 04:17:57.608528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.124  [2024-12-09 04:17:57.608544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.124  [2024-12-09 04:17:57.608574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.124  [2024-12-09 04:17:57.608605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.124  [2024-12-09 04:17:57.608619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.124  [2024-12-09 04:17:57.608634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.124  [2024-12-09 04:17:57.608647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.124  [2024-12-09 04:17:57.608662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.124  [2024-12-09 04:17:57.608675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.124  [2024-12-09 04:17:57.608690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.124  [2024-12-09 04:17:57.608703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.124  [2024-12-09 04:17:57.608718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.124  [2024-12-09 04:17:57.608731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.124  [2024-12-09 04:17:57.608746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.124  [2024-12-09 04:17:57.608759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.124  [2024-12-09 04:17:57.608774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.124  [2024-12-09 04:17:57.608788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.124  [2024-12-09 04:17:57.608803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.124  [2024-12-09 04:17:57.608823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.124  [2024-12-09 04:17:57.608839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.124  [2024-12-09 04:17:57.608852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.124  [2024-12-09 04:17:57.608868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.124  [2024-12-09 04:17:57.608881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.124  [2024-12-09 04:17:57.608896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.124  [2024-12-09 04:17:57.608909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.124  [2024-12-09 04:17:57.608924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.124  [2024-12-09 04:17:57.608938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.124  [2024-12-09 04:17:57.608957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.124  [2024-12-09 04:17:57.608972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.124  [2024-12-09 04:17:57.608988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.124  [2024-12-09 04:17:57.609001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.124  [2024-12-09 04:17:57.609016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.124  [2024-12-09 04:17:57.609029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.124  [2024-12-09 04:17:57.609043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.124  [2024-12-09 04:17:57.609056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.124  [2024-12-09 04:17:57.609070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.124  [2024-12-09 04:17:57.609083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.124  [2024-12-09 04:17:57.609098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.124  [2024-12-09 04:17:57.609110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.124  [2024-12-09 04:17:57.609125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.124  [2024-12-09 04:17:57.609137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.124  [2024-12-09 04:17:57.609152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.124  [2024-12-09 04:17:57.609165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.124  [2024-12-09 04:17:57.609180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.124  [2024-12-09 04:17:57.609193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.125  [2024-12-09 04:17:57.609207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.125  [2024-12-09 04:17:57.609221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.125  [2024-12-09 04:17:57.609235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.125  [2024-12-09 04:17:57.609262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.125  [2024-12-09 04:17:57.609287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.125  [2024-12-09 04:17:57.609319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.125  [2024-12-09 04:17:57.609336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.125  [2024-12-09 04:17:57.609353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.125  [2024-12-09 04:17:57.609369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.125  [2024-12-09 04:17:57.609383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.125  [2024-12-09 04:17:57.609398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.125  [2024-12-09 04:17:57.609412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.125  [2024-12-09 04:17:57.609427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.125  [2024-12-09 04:17:57.609441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.125  [2024-12-09 04:17:57.609456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.125  [2024-12-09 04:17:57.609470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.125  [2024-12-09 04:17:57.609485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.125  [2024-12-09 04:17:57.609499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.125  [2024-12-09 04:17:57.609514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.125  [2024-12-09 04:17:57.609528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.125  [2024-12-09 04:17:57.609543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.125  [2024-12-09 04:17:57.609556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.125  [2024-12-09 04:17:57.609572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.125  [2024-12-09 04:17:57.609599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.125  [2024-12-09 04:17:57.609615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.125  [2024-12-09 04:17:57.609627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.125  [2024-12-09 04:17:57.609648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.125  [2024-12-09 04:17:57.609661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.125  [2024-12-09 04:17:57.609676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.125  [2024-12-09 04:17:57.609689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.125  [2024-12-09 04:17:57.609703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.125  [2024-12-09 04:17:57.609717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.125  [2024-12-09 04:17:57.609735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.125  [2024-12-09 04:17:57.609749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.125  [2024-12-09 04:17:57.609764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.125  [2024-12-09 04:17:57.609778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.125  [2024-12-09 04:17:57.609792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.125  [2024-12-09 04:17:57.609806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.125  [2024-12-09 04:17:57.609822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.125  [2024-12-09 04:17:57.609835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.125  [2024-12-09 04:17:57.609849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.125  [2024-12-09 04:17:57.609862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.125  [2024-12-09 04:17:57.609877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.125  [2024-12-09 04:17:57.609890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.125  [2024-12-09 04:17:57.609905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.125  [2024-12-09 04:17:57.609918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.125  [2024-12-09 04:17:57.609933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.125  [2024-12-09 04:17:57.609947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.125  [2024-12-09 04:17:57.609962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.125  [2024-12-09 04:17:57.609976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.125  [2024-12-09 04:17:57.609990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.125  [2024-12-09 04:17:57.610003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.125  [2024-12-09 04:17:57.610017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.125  [2024-12-09 04:17:57.610032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.125  [2024-12-09 04:17:57.610047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.125  [2024-12-09 04:17:57.610060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.125  [2024-12-09 04:17:57.610075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.125  [2024-12-09 04:17:57.610092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.125  [2024-12-09 04:17:57.610107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.125  [2024-12-09 04:17:57.610120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.126  [2024-12-09 04:17:57.610135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.126  [2024-12-09 04:17:57.610148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.126  [2024-12-09 04:17:57.610163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.126  [2024-12-09 04:17:57.610176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.126  [2024-12-09 04:17:57.610191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.126  [2024-12-09 04:17:57.610204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.126  [2024-12-09 04:17:57.611435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:27:29.126  task offset: 81920 on job bdev=Nvme0n1 fails
00:27:29.126  
00:27:29.126                                                                                                  Latency(us)
00:27:29.126  
00:27:29.126   Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:27:29.126  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:29.126  Job: Nvme0n1 ended in about 0.41 seconds with error
00:27:29.126  	 Verification LBA range: start 0x0 length 0x400
00:27:29.126  	 Nvme0n1             :       0.41    1562.07      97.63     156.21     0.00   36195.30    2718.53   35340.89
00:27:29.126  
00:27:29.126   ===================================================================================================================
00:27:29.126  
00:27:29.126   Total                       :               1562.07      97.63     156.21     0.00   36195.30    2718.53   35340.89
00:27:29.126  [2024-12-09 04:17:57.613336] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:27:29.126  [2024-12-09 04:17:57.613366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cb660 (9): Bad file descriptor
00:27:29.126  [2024-12-09 04:17:57.657657] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
00:27:30.059   04:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 361052
00:27:30.059  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (361052) - No such process
00:27:30.059   04:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true
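The `kill: (361052) - No such process` line above is expected, not a failure: bdevperf already exited during the forced host removal, and host_management.sh line 91 runs `kill -9 $perfpid || true` so a dead PID cannot abort the rest of the cleanup under `set -e`. A minimal reproduction of the pattern (`safe_kill` is an illustrative name, not from the script):

```shell
set -e   # the failure mode the `|| true` guards against

safe_kill() {
  # Suppress both the error message and the nonzero exit status
  # when the target process is already gone.
  kill -9 "$1" 2>/dev/null || true
}

sleep 30 &
perfpid=$!
safe_kill "$perfpid"                  # live process: the kill lands
wait "$perfpid" 2>/dev/null || true   # reap it
safe_kill "$perfpid"                  # dead PID: suppressed, script continues
echo "cleanup continued"
```

Without the `|| true`, the second kill would return nonzero and, under `set -e`, stop the script before the lock files at @97 were removed.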
00:27:30.059   04:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:27:30.059   04:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:27:30.059    04:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:27:30.059    04:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:27:30.059    04:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:27:30.059    04:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:27:30.059    04:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:27:30.059  {
00:27:30.059    "params": {
00:27:30.059      "name": "Nvme$subsystem",
00:27:30.059      "trtype": "$TEST_TRANSPORT",
00:27:30.059      "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:30.059      "adrfam": "ipv4",
00:27:30.059      "trsvcid": "$NVMF_PORT",
00:27:30.059      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:30.059      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:30.059      "hdgst": ${hdgst:-false},
00:27:30.059      "ddgst": ${ddgst:-false}
00:27:30.059    },
00:27:30.059    "method": "bdev_nvme_attach_controller"
00:27:30.059  }
00:27:30.059  EOF
00:27:30.059  )")
00:27:30.059     04:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:27:30.059    04:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:27:30.059     04:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:27:30.059     04:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:27:30.059    "params": {
00:27:30.059      "name": "Nvme0",
00:27:30.059      "trtype": "tcp",
00:27:30.059      "traddr": "10.0.0.2",
00:27:30.059      "adrfam": "ipv4",
00:27:30.059      "trsvcid": "4420",
00:27:30.059      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:27:30.060      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:27:30.060      "hdgst": false,
00:27:30.060      "ddgst": false
00:27:30.060    },
00:27:30.060    "method": "bdev_nvme_attach_controller"
00:27:30.060  }'
00:27:30.317  [2024-12-09 04:17:58.655451] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:27:30.318  [2024-12-09 04:17:58.655527] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid361202 ]
00:27:30.318  [2024-12-09 04:17:58.728710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:30.318  [2024-12-09 04:17:58.788357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:30.575  Running I/O for 1 seconds...
00:27:31.945       1664.00 IOPS,   104.00 MiB/s
00:27:31.945                                                                                                  Latency(us)
00:27:31.945  
00:27:31.945  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:27:31.945  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:31.945  	 Verification LBA range: start 0x0 length 0x400
00:27:31.945  	 Nvme0n1             :       1.02    1697.90     106.12       0.00     0.00   37078.49    6893.42   33593.27
00:27:31.945  
00:27:31.945  ===================================================================================================================
00:27:31.945  
00:27:31.945  Total                       :               1697.90     106.12       0.00     0.00   37078.49    6893.42   33593.27
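The MiB/s column follows directly from the IOPS column and the IO size: bdevperf ran with `-o 65536` (64 KiB per IO), so throughput is IOPS × 65536 bytes. A one-line check of the arithmetic in the table above:

```shell
# 1697.90 IOPS at 64 KiB per IO, converted to MiB/s.
awk 'BEGIN { printf "%.2f\n", 1697.90 * 65536 / (1024 * 1024) }'
```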
00:27:31.945   04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:27:31.945   04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:27:31.945   04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:27:31.945   04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:27:31.945   04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:27:31.945   04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:31.945   04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:27:31.945   04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:31.945   04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:27:31.945   04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:31.945   04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:31.945  rmmod nvme_tcp
00:27:31.945  rmmod nvme_fabrics
00:27:31.945  rmmod nvme_keyring
00:27:31.945   04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:31.945   04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:27:31.945   04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:27:31.945   04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 360893 ']'
00:27:31.945   04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 360893
00:27:31.945   04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 360893 ']'
00:27:31.945   04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 360893
00:27:31.945    04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname
00:27:31.945   04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:31.945    04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 360893
00:27:31.945   04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:27:31.945   04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:27:31.945   04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 360893'
00:27:31.945  killing process with pid 360893
00:27:31.945   04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 360893
00:27:31.945   04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 360893
00:27:32.202  [2024-12-09 04:18:00.661759] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:27:32.202   04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:27:32.202   04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:27:32.202   04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:27:32.202   04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr
00:27:32.203   04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save
00:27:32.203   04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:27:32.203   04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore
00:27:32.203   04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:32.203   04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns
00:27:32.203   04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:32.203   04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:32.203    04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:34.734   04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:27:34.735   04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:27:34.735  
00:27:34.735  real	0m8.811s
00:27:34.735  user	0m17.407s
00:27:34.735  sys	0m3.804s
00:27:34.735   04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:34.735   04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:27:34.735  ************************************
00:27:34.735  END TEST nvmf_host_management
00:27:34.735  ************************************
00:27:34.735   04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:27:34.735   04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:27:34.735   04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:34.735   04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:27:34.735  ************************************
00:27:34.735  START TEST nvmf_lvol
00:27:34.735  ************************************
00:27:34.735   04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:27:34.735  * Looking for test storage...
00:27:34.735  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:27:34.735     04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version
00:27:34.735     04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-:
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-:
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<'
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 ))
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:27:34.735     04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1
00:27:34.735     04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1
00:27:34.735     04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:27:34.735     04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1
00:27:34.735     04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2
00:27:34.735     04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2
00:27:34.735     04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:27:34.735     04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0
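The trace above is scripts/common.sh deciding whether the installed lcov predates 2.x (`lt 1.15 2`): split each version on `.`, `-`, or `:`, then compare numeric components left to right. A simplified sketch of that comparison (reconstructed from the trace, not the upstream code verbatim; missing components default to 0):

```shell
# lt VER1 VER2 -> exit 0 iff VER1 sorts strictly before VER2.
lt() {
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  local v
  local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    # First differing component decides the ordering.
    if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then return 1; fi
    if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then return 0; fi
  done
  return 1  # equal versions are not "less than"
}
```

Here `lt 1.15 2` succeeds (1 < 2 at the first component), so the script selects the pre-2.0 `--rc lcov_branch_coverage=...` option spelling seen below.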
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:27:34.735  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:34.735  		--rc genhtml_branch_coverage=1
00:27:34.735  		--rc genhtml_function_coverage=1
00:27:34.735  		--rc genhtml_legend=1
00:27:34.735  		--rc geninfo_all_blocks=1
00:27:34.735  		--rc geninfo_unexecuted_blocks=1
00:27:34.735  		
00:27:34.735  		'
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:27:34.735  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:34.735  		--rc genhtml_branch_coverage=1
00:27:34.735  		--rc genhtml_function_coverage=1
00:27:34.735  		--rc genhtml_legend=1
00:27:34.735  		--rc geninfo_all_blocks=1
00:27:34.735  		--rc geninfo_unexecuted_blocks=1
00:27:34.735  		
00:27:34.735  		'
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:27:34.735  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:34.735  		--rc genhtml_branch_coverage=1
00:27:34.735  		--rc genhtml_function_coverage=1
00:27:34.735  		--rc genhtml_legend=1
00:27:34.735  		--rc geninfo_all_blocks=1
00:27:34.735  		--rc geninfo_unexecuted_blocks=1
00:27:34.735  		
00:27:34.735  		'
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:27:34.735  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:34.735  		--rc genhtml_branch_coverage=1
00:27:34.735  		--rc genhtml_function_coverage=1
00:27:34.735  		--rc genhtml_legend=1
00:27:34.735  		--rc geninfo_all_blocks=1
00:27:34.735  		--rc geninfo_unexecuted_blocks=1
00:27:34.735  		
00:27:34.735  		'
00:27:34.735   04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:27:34.735     04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:27:34.735     04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:27:34.735    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:27:34.735     04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob
00:27:34.735     04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:34.735     04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:34.735     04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:34.735      04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:34.735      04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:34.736      04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:34.736      04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH
00:27:34.736      04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:34.736    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0
00:27:34.736    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:27:34.736    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:27:34.736    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:27:34.736    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:27:34.736    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:27:34.736    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:27:34.736    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:27:34.736    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:27:34.736    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:27:34.736    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0
00:27:34.736   04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64
00:27:34.736   04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:27:34.736   04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20
00:27:34.736   04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30
00:27:34.736   04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:27:34.736   04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit
00:27:34.736   04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:27:34.736   04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:27:34.736   04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs
00:27:34.736   04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no
00:27:34.736   04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns
00:27:34.736   04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:34.736   04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:34.736    04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:34.736   04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:27:34.736   04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:27:34.736   04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable
00:27:34.736   04:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=()
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=()
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=()
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=()
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=()
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=()
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=()
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:27:36.639  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:27:36.639  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]]
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:27:36.639  Found net devices under 0000:0a:00.0: cvl_0_0
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]]
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:27:36.639  Found net devices under 0000:0a:00.1: cvl_0_1
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 ))
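The discovery loop traced above walks each E810 PCI function and collects its kernel net device names from sysfs. A standalone sketch of that logic (PCI addresses copied from the log; on a machine without these NICs the glob simply matches nothing):

```shell
# Sketch of the NIC discovery loop from nvmf/common.sh lines 410-429 above:
# for each PCI function, glob its net devices under sysfs, strip the
# directory prefix, and accumulate the interface names.
pci_devs=(0000:0a:00.0 0000:0a:00.1)
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    [[ -e ${pci_net_devs[0]} ]] || continue    # glob did not match: no such device
    pci_net_devs=("${pci_net_devs[@]##*/}")    # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
```

The `##*/` parameter expansion is what turns `/sys/bus/pci/devices/0000:0a:00.0/net/cvl_0_0` into the bare `cvl_0_0` seen in the log.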
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:27:36.639   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:27:36.640   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:27:36.640   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:27:36.640   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:27:36.640   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:27:36.898   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:27:36.898   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:27:36.898   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:27:36.898   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:27:36.898  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:36.898  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms
00:27:36.898  
00:27:36.898  --- 10.0.0.2 ping statistics ---
00:27:36.898  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:36.898  rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms
00:27:36.898   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:36.898  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:36.898  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms
00:27:36.898  
00:27:36.898  --- 10.0.0.1 ping statistics ---
00:27:36.898  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:36.898  rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms
00:27:36.898   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:36.898   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0
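The `nvmf_tcp_init` sequence above (address flush, namespace creation, moving the target NIC into the namespace, addressing, the tagged iptables ACCEPT rule, and the cross-namespace pings) can be sketched as a dry-run script. Interface names and 10.0.0.x addresses are taken from the log; the `run` wrapper, which prints rather than executes, is illustrative only (a live run needs root):

```shell
#!/usr/bin/env bash
# Dry-run sketch of nvmf_tcp_init as traced in nvmf/common.sh above.
TGT=cvl_0_0; INI=cvl_0_1; NS=${TGT}_ns_spdk

run() { printf '%s\n' "$*"; }   # swap body for "$@" to actually execute (root required)

run ip -4 addr flush "$TGT"
run ip -4 addr flush "$INI"
run ip netns add "$NS"
run ip link set "$TGT" netns "$NS"            # target side lives inside the namespace
run ip addr add 10.0.0.1/24 dev "$INI"        # initiator IP
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"   # target IP
run ip link set "$INI" up
run ip netns exec "$NS" ip link set "$TGT" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment "SPDK_NVMF:-I INPUT 1 -i $INI -p tcp --dport 4420 -j ACCEPT"
run ping -c 1 10.0.0.2                         # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1     # target -> initiator
```

The `SPDK_NVMF:` comment on the iptables rule is what the later `iptr` cleanup greps for when restoring the firewall state.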
00:27:36.898   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:27:36.898   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:36.898   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:27:36.898   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:27:36.898   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:36.898   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:27:36.898   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:27:36.898   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7
00:27:36.898   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:36.898   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:36.898   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:27:36.898   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=363518
00:27:36.898   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7
00:27:36.898   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 363518
00:27:36.898   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 363518 ']'
00:27:36.898   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:36.898   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:36.898   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:36.898  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:36.898   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:36.898   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:27:36.898  [2024-12-09 04:18:05.347940] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:27:36.898  [2024-12-09 04:18:05.349075] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:27:36.898  [2024-12-09 04:18:05.349141] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:36.898  [2024-12-09 04:18:05.421432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:27:37.155  [2024-12-09 04:18:05.478738] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:37.155  [2024-12-09 04:18:05.478789] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:37.155  [2024-12-09 04:18:05.478818] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:37.155  [2024-12-09 04:18:05.478830] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:37.155  [2024-12-09 04:18:05.478840] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:37.155  [2024-12-09 04:18:05.480355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:37.155  [2024-12-09 04:18:05.480384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:27:37.155  [2024-12-09 04:18:05.480388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:37.155  [2024-12-09 04:18:05.571183] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:27:37.155  [2024-12-09 04:18:05.571432] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:27:37.155  [2024-12-09 04:18:05.571445] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:27:37.155  [2024-12-09 04:18:05.571684] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:27:37.155   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:37.155   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0
00:27:37.155   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:27:37.155   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:37.155   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:27:37.155   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:37.155   04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:27:37.413  [2024-12-09 04:18:05.897112] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:37.413    04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:27:37.671   04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 '
00:27:37.929    04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:27:38.208   04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1
00:27:38.208   04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
00:27:38.467    04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs
00:27:38.725   04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=bc8f67ab-29f5-45f8-af02-ef12d1152f65
00:27:38.725    04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bc8f67ab-29f5-45f8-af02-ef12d1152f65 lvol 20
00:27:38.983   04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=be0fb420-42d5-42e8-a577-c68a93506254
00:27:38.983   04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:27:39.241   04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 be0fb420-42d5-42e8-a577-c68a93506254
00:27:39.499   04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:27:39.757  [2024-12-09 04:18:08.217320] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:39.757   04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:40.015   04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=364341
00:27:40.015   04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1
00:27:40.015   04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
00:27:40.949    04:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot be0fb420-42d5-42e8-a577-c68a93506254 MY_SNAPSHOT
00:27:41.515   04:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=be02b3a3-43d8-48f4-ac3c-62722469d08b
00:27:41.515   04:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize be0fb420-42d5-42e8-a577-c68a93506254 30
00:27:41.773    04:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone be02b3a3-43d8-48f4-ac3c-62722469d08b MY_CLONE
00:27:42.030   04:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=85099081-b5f5-4231-bbad-2f96323eb9df
00:27:42.031   04:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 85099081-b5f5-4231-bbad-2f96323eb9df
00:27:42.596   04:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 364341
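The lvol flow exercised above reduces to a short `rpc.py` sequence. A dry-run sketch follows; the `rpc()` echo wrapper and the `*_UUID` placeholders (which stand in for values parsed from each call's output, e.g. `bc8f67ab-...` and `be0fb420-...` in the log) are illustrative, while the RPC names and arguments match the trace:

```shell
# Dry-run sketch of the nvmf_lvol.sh RPC flow traced above.
# rpc() only prints; a live run would call scripts/rpc.py instead.
rpc() { printf 'rpc.py %s\n' "$*"; }
LVS_UUID="<lvstore-uuid>" LVOL_UUID="<lvol-uuid>"
SNAP_UUID="<snapshot-uuid>" CLONE_UUID="<clone-uuid>"

rpc bdev_malloc_create 64 512                      # -> Malloc0
rpc bdev_malloc_create 64 512                      # -> Malloc1
rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "Malloc0 Malloc1"
rpc bdev_lvol_create_lvstore raid0 lvs             # prints the lvstore UUID
rpc bdev_lvol_create -u "$LVS_UUID" lvol 20        # prints the lvol UUID
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$LVOL_UUID"
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# while spdk_nvme_perf writes to the exported namespace:
rpc bdev_lvol_snapshot "$LVOL_UUID" MY_SNAPSHOT    # prints the snapshot UUID
rpc bdev_lvol_resize "$LVOL_UUID" 30
rpc bdev_lvol_clone "$SNAP_UUID" MY_CLONE          # prints the clone UUID
rpc bdev_lvol_inflate "$CLONE_UUID"
```

Taking the snapshot, resize, clone, and inflate while perf I/O is in flight is the point of the test: the lvol operations must be safe under concurrent writes.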
00:27:50.752  Initializing NVMe Controllers
00:27:50.752  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:27:50.752  Controller IO queue size 128, less than required.
00:27:50.752  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:50.752  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:27:50.752  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:27:50.752  Initialization complete. Launching workers.
00:27:50.752  ========================================================
00:27:50.752                                                                                                               Latency(us)
00:27:50.752  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:27:50.752  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core  3:   10523.10      41.11   12167.47    2859.59   53416.14
00:27:50.752  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core  4:   10415.70      40.69   12293.81    5135.08   48878.00
00:27:50.752  ========================================================
00:27:50.752  Total                                                                    :   20938.80      81.79   12230.32    2859.59   53416.14
00:27:50.752  
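The Total row in the perf summary above is internally consistent: total IOPS is the sum of the two per-core rows, and the average latency is the IOPS-weighted mean of the per-core averages. A quick awk cross-check, with the values copied from the table:

```shell
# Cross-check the spdk_nvme_perf summary table above.
i3=10523.10; l3=12167.47    # core 3 row: IOPS, average latency (us)
i4=10415.70; l4=12293.81    # core 4 row
total=$(awk -v a="$i3" -v b="$i4" 'BEGIN { printf "%.2f", a + b }')
avg=$(awk -v a="$i3" -v b="$i4" -v x="$l3" -v y="$l4" \
      'BEGIN { printf "%.2f", (a*x + b*y) / (a + b) }')
echo "total_iops=$total avg_latency_us=$avg"   # 20938.80 and 12230.32, matching the Total row
```

(The MiB/s column does not sum exactly, 41.11 + 40.69 = 81.80 vs. the reported 81.79, because the per-row values are rounded from the raw counters.)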
00:27:50.752   04:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:27:50.752   04:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete be0fb420-42d5-42e8-a577-c68a93506254
00:27:51.010   04:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bc8f67ab-29f5-45f8-af02-ef12d1152f65
00:27:51.267   04:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:27:51.267   04:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:27:51.267   04:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:27:51.267   04:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:51.267   04:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:27:51.267   04:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:51.267   04:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:27:51.267   04:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:51.267   04:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:51.267  rmmod nvme_tcp
00:27:51.267  rmmod nvme_fabrics
00:27:51.267  rmmod nvme_keyring
00:27:51.267   04:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:51.267   04:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:27:51.525   04:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:27:51.525   04:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 363518 ']'
00:27:51.525   04:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 363518
00:27:51.525   04:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 363518 ']'
00:27:51.525   04:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 363518
00:27:51.525    04:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname
00:27:51.525   04:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:51.525    04:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 363518
00:27:51.525   04:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:51.525   04:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:51.525   04:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 363518'
00:27:51.525  killing process with pid 363518
00:27:51.525   04:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 363518
00:27:51.525   04:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 363518
00:27:51.784   04:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:27:51.784   04:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:27:51.784   04:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:27:51.784   04:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr
00:27:51.784   04:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save
00:27:51.784   04:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:27:51.784   04:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore
00:27:51.784   04:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:51.784   04:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns
00:27:51.784   04:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:51.784   04:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:51.784    04:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:53.684   04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:27:53.684  
00:27:53.684  real	0m19.425s
00:27:53.684  user	0m56.578s
00:27:53.684  sys	0m7.897s
00:27:53.684   04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:53.684   04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:27:53.684  ************************************
00:27:53.684  END TEST nvmf_lvol
00:27:53.684  ************************************
00:27:53.684   04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode
00:27:53.684   04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:27:53.684   04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:53.684   04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:27:53.943  ************************************
00:27:53.944  START TEST nvmf_lvs_grow
00:27:53.944  ************************************
00:27:53.944   04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode
00:27:53.944  * Looking for test storage...
00:27:53.944  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:27:53.944     04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version
00:27:53.944     04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-:
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-:
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<'
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 ))
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:27:53.944     04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1
00:27:53.944     04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1
00:27:53.944     04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:27:53.944     04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1
00:27:53.944     04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2
00:27:53.944     04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2
00:27:53.944     04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:27:53.944     04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0
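The `lt 1.15 2` trace above walks scripts/common.sh's version comparison: both strings are split on `.`, `-`, and `:` and compared numerically field by field, with missing fields treated as 0. A simplified standalone reimplementation (the `version_lt` name is hypothetical; the upstream helper has more cases and non-numeric handling):

```shell
# Sketch of the cmp_versions "<" path traced above: split on .-: and
# compare numeric components left to right; shorter version pads with 0.
version_lt() {
    local IFS=.-:
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} )) v
    for (( v = 0; v < n; v++ )); do
        local x=${a[v]:-0} y=${b[v]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1    # all components equal -> not strictly less-than
}

version_lt 1.15 2 && echo "1.15 < 2"   # matches the lcov version check above
```

In the log this gate passes (lcov is at least 2), so the extended `--rc lcov_branch_coverage=...` option set is exported below.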
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:27:53.944  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:53.944  		--rc genhtml_branch_coverage=1
00:27:53.944  		--rc genhtml_function_coverage=1
00:27:53.944  		--rc genhtml_legend=1
00:27:53.944  		--rc geninfo_all_blocks=1
00:27:53.944  		--rc geninfo_unexecuted_blocks=1
00:27:53.944  		
00:27:53.944  		'
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:27:53.944  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:53.944  		--rc genhtml_branch_coverage=1
00:27:53.944  		--rc genhtml_function_coverage=1
00:27:53.944  		--rc genhtml_legend=1
00:27:53.944  		--rc geninfo_all_blocks=1
00:27:53.944  		--rc geninfo_unexecuted_blocks=1
00:27:53.944  		
00:27:53.944  		'
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:27:53.944  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:53.944  		--rc genhtml_branch_coverage=1
00:27:53.944  		--rc genhtml_function_coverage=1
00:27:53.944  		--rc genhtml_legend=1
00:27:53.944  		--rc geninfo_all_blocks=1
00:27:53.944  		--rc geninfo_unexecuted_blocks=1
00:27:53.944  		
00:27:53.944  		'
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:27:53.944  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:53.944  		--rc genhtml_branch_coverage=1
00:27:53.944  		--rc genhtml_function_coverage=1
00:27:53.944  		--rc genhtml_legend=1
00:27:53.944  		--rc geninfo_all_blocks=1
00:27:53.944  		--rc geninfo_unexecuted_blocks=1
00:27:53.944  		
00:27:53.944  		'
00:27:53.944   04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:27:53.944     04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:27:53.944     04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:27:53.944     04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob
00:27:53.944     04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:53.944     04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:53.944     04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:53.944      04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:53.944      04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:53.944      04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:53.944      04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH
00:27:53.944      04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:53.944    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0
00:27:53.945    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:27:53.945    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:27:53.945    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:27:53.945    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:27:53.945    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:27:53.945    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:27:53.945    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:27:53.945    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:27:53.945    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:27:53.945    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0
00:27:53.945   04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:27:53.945   04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:27:53.945   04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit
00:27:53.945   04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:27:53.945   04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:27:53.945   04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs
00:27:53.945   04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no
00:27:53.945   04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns
00:27:53.945   04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:53.945   04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:53.945    04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:53.945   04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:27:53.945   04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:27:53.945   04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable
00:27:53.945   04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=()
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=()
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=()
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=()
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=()
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=()
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=()
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:27:56.477  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:27:56.477  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]]
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:27:56.477  Found net devices under 0000:0a:00.0: cvl_0_0
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]]
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:27:56.477  Found net devices under 0000:0a:00.1: cvl_0_1
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:27:56.477   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:27:56.478  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:56.478  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms
00:27:56.478  
00:27:56.478  --- 10.0.0.2 ping statistics ---
00:27:56.478  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:56.478  rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:56.478  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:56.478  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms
00:27:56.478  
00:27:56.478  --- 10.0.0.1 ping statistics ---
00:27:56.478  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:56.478  rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=367727
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 367727
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 367727 ']'
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:56.478  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:56.478   04:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:27:56.478  [2024-12-09 04:18:24.825875] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:27:56.478  [2024-12-09 04:18:24.827016] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:27:56.478  [2024-12-09 04:18:24.827075] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:56.478  [2024-12-09 04:18:24.903135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:56.478  [2024-12-09 04:18:24.963015] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:56.478  [2024-12-09 04:18:24.963081] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:56.478  [2024-12-09 04:18:24.963110] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:56.478  [2024-12-09 04:18:24.963121] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:56.478  [2024-12-09 04:18:24.963132] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:56.478  [2024-12-09 04:18:24.963799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:56.735  [2024-12-09 04:18:25.065923] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:27:56.735  [2024-12-09 04:18:25.066222] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:27:56.735   04:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:56.735   04:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0
00:27:56.735   04:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:27:56.735   04:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:56.735   04:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:27:56.735   04:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:56.735   04:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:27:56.993  [2024-12-09 04:18:25.360444] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:56.993   04:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow
00:27:56.993   04:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:27:56.993   04:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:56.993   04:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:27:56.993  ************************************
00:27:56.993  START TEST lvs_grow_clean
00:27:56.993  ************************************
00:27:56.993   04:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow
00:27:56.993   04:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:27:56.993   04:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:27:56.993   04:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:27:56.993   04:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:27:56.993   04:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:27:56.993   04:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:27:56.993   04:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:27:56.993   04:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:27:56.993    04:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:27:57.250   04:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:27:57.250    04:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:27:57.507   04:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=dc5bd54b-e670-4779-917b-3300a20c165b
00:27:57.507    04:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc5bd54b-e670-4779-917b-3300a20c165b
00:27:57.507    04:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:27:57.765   04:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:27:57.765   04:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
00:27:57.765    04:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u dc5bd54b-e670-4779-917b-3300a20c165b lvol 150
00:27:58.022   04:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=1ca86154-1301-4a03-89a6-516d4d571f11
00:27:58.022   04:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:27:58.022   04:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:27:58.280  [2024-12-09 04:18:26.788353] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:27:58.280  [2024-12-09 04:18:26.788469] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:27:58.280  true
00:27:58.280    04:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc5bd54b-e670-4779-917b-3300a20c165b
00:27:58.280    04:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:27:58.538   04:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
00:27:58.538   04:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:27:58.796   04:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1ca86154-1301-4a03-89a6-516d4d571f11
00:27:59.361   04:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:27:59.361  [2024-12-09 04:18:27.884674] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:59.361   04:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:59.619   04:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=368151
00:27:59.619   04:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:27:59.619   04:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:27:59.619   04:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 368151 /var/tmp/bdevperf.sock
00:27:59.619   04:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 368151 ']'
00:27:59.619   04:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:27:59.619   04:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:59.619   04:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:27:59.619  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:27:59.619   04:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:59.619   04:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:27:59.877  [2024-12-09 04:18:28.209927] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:27:59.877  [2024-12-09 04:18:28.210033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid368151 ]
00:27:59.877  [2024-12-09 04:18:28.279012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:59.877  [2024-12-09 04:18:28.341079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:00.134   04:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:00.134   04:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0
00:28:00.134   04:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:28:00.392  Nvme0n1
00:28:00.392   04:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:28:00.650  [
00:28:00.650    {
00:28:00.650      "name": "Nvme0n1",
00:28:00.650      "aliases": [
00:28:00.650        "1ca86154-1301-4a03-89a6-516d4d571f11"
00:28:00.650      ],
00:28:00.650      "product_name": "NVMe disk",
00:28:00.650      "block_size": 4096,
00:28:00.650      "num_blocks": 38912,
00:28:00.650      "uuid": "1ca86154-1301-4a03-89a6-516d4d571f11",
00:28:00.650      "numa_id": 0,
00:28:00.650      "assigned_rate_limits": {
00:28:00.650        "rw_ios_per_sec": 0,
00:28:00.650        "rw_mbytes_per_sec": 0,
00:28:00.650        "r_mbytes_per_sec": 0,
00:28:00.650        "w_mbytes_per_sec": 0
00:28:00.650      },
00:28:00.650      "claimed": false,
00:28:00.650      "zoned": false,
00:28:00.650      "supported_io_types": {
00:28:00.650        "read": true,
00:28:00.650        "write": true,
00:28:00.650        "unmap": true,
00:28:00.650        "flush": true,
00:28:00.650        "reset": true,
00:28:00.650        "nvme_admin": true,
00:28:00.650        "nvme_io": true,
00:28:00.650        "nvme_io_md": false,
00:28:00.650        "write_zeroes": true,
00:28:00.650        "zcopy": false,
00:28:00.650        "get_zone_info": false,
00:28:00.650        "zone_management": false,
00:28:00.650        "zone_append": false,
00:28:00.650        "compare": true,
00:28:00.650        "compare_and_write": true,
00:28:00.650        "abort": true,
00:28:00.650        "seek_hole": false,
00:28:00.650        "seek_data": false,
00:28:00.650        "copy": true,
00:28:00.650        "nvme_iov_md": false
00:28:00.650      },
00:28:00.650      "memory_domains": [
00:28:00.650        {
00:28:00.650          "dma_device_id": "system",
00:28:00.650          "dma_device_type": 1
00:28:00.650        }
00:28:00.650      ],
00:28:00.650      "driver_specific": {
00:28:00.650        "nvme": [
00:28:00.650          {
00:28:00.650            "trid": {
00:28:00.650              "trtype": "TCP",
00:28:00.650              "adrfam": "IPv4",
00:28:00.650              "traddr": "10.0.0.2",
00:28:00.650              "trsvcid": "4420",
00:28:00.650              "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:28:00.650            },
00:28:00.650            "ctrlr_data": {
00:28:00.650              "cntlid": 1,
00:28:00.650              "vendor_id": "0x8086",
00:28:00.650              "model_number": "SPDK bdev Controller",
00:28:00.650              "serial_number": "SPDK0",
00:28:00.650              "firmware_revision": "25.01",
00:28:00.650              "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:28:00.650              "oacs": {
00:28:00.650                "security": 0,
00:28:00.650                "format": 0,
00:28:00.650                "firmware": 0,
00:28:00.650                "ns_manage": 0
00:28:00.650              },
00:28:00.650              "multi_ctrlr": true,
00:28:00.650              "ana_reporting": false
00:28:00.650            },
00:28:00.650            "vs": {
00:28:00.650              "nvme_version": "1.3"
00:28:00.650            },
00:28:00.650            "ns_data": {
00:28:00.650              "id": 1,
00:28:00.650              "can_share": true
00:28:00.650            }
00:28:00.650          }
00:28:00.650        ],
00:28:00.650        "mp_policy": "active_passive"
00:28:00.650      }
00:28:00.650    }
00:28:00.650  ]
00:28:00.650   04:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=368273
00:28:00.650   04:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:28:00.650   04:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:28:00.908  Running I/O for 10 seconds...
00:28:01.843                                                                                                  Latency(us)
00:28:01.843  
[2024-12-09T03:18:30.419Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:01.843  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:01.843  	 Nvme0n1             :       1.00   13285.00      51.89       0.00     0.00       0.00       0.00       0.00
00:28:01.843  
[2024-12-09T03:18:30.419Z]  ===================================================================================================================
00:28:01.843  
[2024-12-09T03:18:30.419Z]  Total                       :              13285.00      51.89       0.00     0.00       0.00       0.00       0.00
00:28:01.843  
00:28:02.776   04:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u dc5bd54b-e670-4779-917b-3300a20c165b
00:28:02.776  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:02.776  	 Nvme0n1             :       2.00   13402.50      52.35       0.00     0.00       0.00       0.00       0.00
00:28:02.776  
[2024-12-09T03:18:31.352Z]  ===================================================================================================================
00:28:02.776  
[2024-12-09T03:18:31.352Z]  Total                       :              13402.50      52.35       0.00     0.00       0.00       0.00       0.00
00:28:02.776  
00:28:03.033  true
00:28:03.034    04:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc5bd54b-e670-4779-917b-3300a20c165b
00:28:03.034    04:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:28:03.291   04:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:28:03.291   04:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:28:03.291   04:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 368273
00:28:03.857  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:03.857  	 Nvme0n1             :       3.00   13473.67      52.63       0.00     0.00       0.00       0.00       0.00
00:28:03.857  
[2024-12-09T03:18:32.433Z]  ===================================================================================================================
00:28:03.857  
[2024-12-09T03:18:32.433Z]  Total                       :              13473.67      52.63       0.00     0.00       0.00       0.00       0.00
00:28:03.857  
00:28:04.791  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:04.791  	 Nvme0n1             :       4.00   13553.25      52.94       0.00     0.00       0.00       0.00       0.00
00:28:04.791  
[2024-12-09T03:18:33.367Z]  ===================================================================================================================
00:28:04.791  
[2024-12-09T03:18:33.367Z]  Total                       :              13553.25      52.94       0.00     0.00       0.00       0.00       0.00
00:28:04.791  
00:28:05.725  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:05.725  	 Nvme0n1             :       5.00   13613.80      53.18       0.00     0.00       0.00       0.00       0.00
00:28:05.725  
[2024-12-09T03:18:34.301Z]  ===================================================================================================================
00:28:05.725  
[2024-12-09T03:18:34.301Z]  Total                       :              13613.80      53.18       0.00     0.00       0.00       0.00       0.00
00:28:05.725  
00:28:07.109  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:07.109  	 Nvme0n1             :       6.00   13654.17      53.34       0.00     0.00       0.00       0.00       0.00
00:28:07.109  
[2024-12-09T03:18:35.685Z]  ===================================================================================================================
00:28:07.109  
[2024-12-09T03:18:35.685Z]  Total                       :              13654.17      53.34       0.00     0.00       0.00       0.00       0.00
00:28:07.109  
00:28:08.041  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:08.041  	 Nvme0n1             :       7.00   13689.86      53.48       0.00     0.00       0.00       0.00       0.00
00:28:08.041  
[2024-12-09T03:18:36.617Z]  ===================================================================================================================
00:28:08.041  
[2024-12-09T03:18:36.618Z]  Total                       :              13689.86      53.48       0.00     0.00       0.00       0.00       0.00
00:28:08.042  
00:28:08.973  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:08.973  	 Nvme0n1             :       8.00   13716.62      53.58       0.00     0.00       0.00       0.00       0.00
00:28:08.973  
[2024-12-09T03:18:37.549Z]  ===================================================================================================================
00:28:08.973  
[2024-12-09T03:18:37.549Z]  Total                       :              13716.62      53.58       0.00     0.00       0.00       0.00       0.00
00:28:08.973  
00:28:09.912  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:09.912  	 Nvme0n1             :       9.00   13741.00      53.68       0.00     0.00       0.00       0.00       0.00
00:28:09.912  
[2024-12-09T03:18:38.488Z]  ===================================================================================================================
00:28:09.912  
[2024-12-09T03:18:38.488Z]  Total                       :              13741.00      53.68       0.00     0.00       0.00       0.00       0.00
00:28:09.912  
00:28:10.842  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:10.842  	 Nvme0n1             :      10.00   13766.90      53.78       0.00     0.00       0.00       0.00       0.00
00:28:10.842  
[2024-12-09T03:18:39.418Z]  ===================================================================================================================
00:28:10.842  
[2024-12-09T03:18:39.418Z]  Total                       :              13766.90      53.78       0.00     0.00       0.00       0.00       0.00
00:28:10.842  
00:28:10.842  
00:28:10.842                                                                                                  Latency(us)
00:28:10.842  
[2024-12-09T03:18:39.418Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:10.842  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:10.842  	 Nvme0n1             :      10.01   13767.32      53.78       0.00     0.00    9289.15    2742.80   12039.21
00:28:10.842  
[2024-12-09T03:18:39.418Z]  ===================================================================================================================
00:28:10.842  
[2024-12-09T03:18:39.418Z]  Total                       :              13767.32      53.78       0.00     0.00    9289.15    2742.80   12039.21
00:28:10.842  {
00:28:10.842    "results": [
00:28:10.842      {
00:28:10.842        "job": "Nvme0n1",
00:28:10.842        "core_mask": "0x2",
00:28:10.842        "workload": "randwrite",
00:28:10.842        "status": "finished",
00:28:10.842        "queue_depth": 128,
00:28:10.842        "io_size": 4096,
00:28:10.842        "runtime": 10.008991,
00:28:10.842        "iops": 13767.32180096875,
00:28:10.842        "mibps": 53.77860078503418,
00:28:10.842        "io_failed": 0,
00:28:10.842        "io_timeout": 0,
00:28:10.842        "avg_latency_us": 9289.148461620542,
00:28:10.842        "min_latency_us": 2742.8029629629627,
00:28:10.842        "max_latency_us": 12039.205925925926
00:28:10.842      }
00:28:10.842    ],
00:28:10.842    "core_count": 1
00:28:10.842  }
00:28:10.842   04:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 368151
00:28:10.842   04:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 368151 ']'
00:28:10.842   04:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 368151
00:28:10.842    04:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname
00:28:10.842   04:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:10.842    04:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 368151
00:28:10.842   04:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:10.842   04:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:10.842   04:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 368151'
00:28:10.842  killing process with pid 368151
00:28:10.842   04:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 368151
00:28:10.842  Received shutdown signal, test time was about 10.000000 seconds
00:28:10.842  
00:28:10.842                                                                                                  Latency(us)
00:28:10.842  
[2024-12-09T03:18:39.418Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:10.842  
[2024-12-09T03:18:39.418Z]  ===================================================================================================================
00:28:10.842  
[2024-12-09T03:18:39.418Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:28:10.842   04:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 368151
00:28:11.099   04:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:28:11.357   04:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:28:11.925    04:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc5bd54b-e670-4779-917b-3300a20c165b
00:28:11.925    04:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:28:12.183   04:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:28:12.183   04:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]]
00:28:12.183   04:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:28:12.441  [2024-12-09 04:18:40.772397] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:28:12.441   04:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc5bd54b-e670-4779-917b-3300a20c165b
00:28:12.441   04:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0
00:28:12.441   04:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc5bd54b-e670-4779-917b-3300a20c165b
00:28:12.441   04:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:28:12.441   04:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:12.441    04:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:28:12.441   04:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:12.441    04:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:28:12.441   04:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:12.441   04:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:28:12.441   04:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:28:12.441   04:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc5bd54b-e670-4779-917b-3300a20c165b
00:28:12.699  request:
00:28:12.699  {
00:28:12.699    "uuid": "dc5bd54b-e670-4779-917b-3300a20c165b",
00:28:12.699    "method": "bdev_lvol_get_lvstores",
00:28:12.699    "req_id": 1
00:28:12.699  }
00:28:12.699  Got JSON-RPC error response
00:28:12.699  response:
00:28:12.699  {
00:28:12.699    "code": -19,
00:28:12.699    "message": "No such device"
00:28:12.699  }
00:28:12.699   04:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1
00:28:12.699   04:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:28:12.699   04:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:28:12.699   04:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:28:12.699   04:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:28:12.957  aio_bdev
00:28:12.957   04:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1ca86154-1301-4a03-89a6-516d4d571f11
00:28:12.957   04:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=1ca86154-1301-4a03-89a6-516d4d571f11
00:28:12.957   04:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:28:12.957   04:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i
00:28:12.957   04:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:28:12.957   04:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:28:12.957   04:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:28:13.215   04:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1ca86154-1301-4a03-89a6-516d4d571f11 -t 2000
00:28:13.473  [
00:28:13.473    {
00:28:13.473      "name": "1ca86154-1301-4a03-89a6-516d4d571f11",
00:28:13.473      "aliases": [
00:28:13.473        "lvs/lvol"
00:28:13.473      ],
00:28:13.473      "product_name": "Logical Volume",
00:28:13.473      "block_size": 4096,
00:28:13.473      "num_blocks": 38912,
00:28:13.473      "uuid": "1ca86154-1301-4a03-89a6-516d4d571f11",
00:28:13.473      "assigned_rate_limits": {
00:28:13.473        "rw_ios_per_sec": 0,
00:28:13.473        "rw_mbytes_per_sec": 0,
00:28:13.473        "r_mbytes_per_sec": 0,
00:28:13.473        "w_mbytes_per_sec": 0
00:28:13.473      },
00:28:13.473      "claimed": false,
00:28:13.473      "zoned": false,
00:28:13.473      "supported_io_types": {
00:28:13.473        "read": true,
00:28:13.473        "write": true,
00:28:13.473        "unmap": true,
00:28:13.473        "flush": false,
00:28:13.473        "reset": true,
00:28:13.473        "nvme_admin": false,
00:28:13.473        "nvme_io": false,
00:28:13.473        "nvme_io_md": false,
00:28:13.473        "write_zeroes": true,
00:28:13.473        "zcopy": false,
00:28:13.473        "get_zone_info": false,
00:28:13.473        "zone_management": false,
00:28:13.473        "zone_append": false,
00:28:13.473        "compare": false,
00:28:13.473        "compare_and_write": false,
00:28:13.473        "abort": false,
00:28:13.473        "seek_hole": true,
00:28:13.473        "seek_data": true,
00:28:13.473        "copy": false,
00:28:13.473        "nvme_iov_md": false
00:28:13.473      },
00:28:13.473      "driver_specific": {
00:28:13.473        "lvol": {
00:28:13.473          "lvol_store_uuid": "dc5bd54b-e670-4779-917b-3300a20c165b",
00:28:13.473          "base_bdev": "aio_bdev",
00:28:13.473          "thin_provision": false,
00:28:13.473          "num_allocated_clusters": 38,
00:28:13.473          "snapshot": false,
00:28:13.473          "clone": false,
00:28:13.473          "esnap_clone": false
00:28:13.473        }
00:28:13.473      }
00:28:13.473    }
00:28:13.473  ]
00:28:13.473   04:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0
00:28:13.473    04:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc5bd54b-e670-4779-917b-3300a20c165b
00:28:13.473    04:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:28:13.732   04:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 ))
00:28:13.732    04:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc5bd54b-e670-4779-917b-3300a20c165b
00:28:13.732    04:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters'
00:28:13.991   04:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 ))
00:28:13.991   04:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1ca86154-1301-4a03-89a6-516d4d571f11
00:28:14.249   04:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dc5bd54b-e670-4779-917b-3300a20c165b
00:28:14.508   04:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:28:14.766   04:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:28:14.766  
00:28:14.766  real	0m17.921s
00:28:14.766  user	0m16.457s
00:28:14.766  sys	0m2.301s
00:28:14.766   04:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:14.766   04:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:28:14.766  ************************************
00:28:14.766  END TEST lvs_grow_clean
00:28:14.766  ************************************
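The cluster accounting that lvs_grow_clean just verified (free_clusters == 61, data_clusters == 99, and an lvol with 38 allocated clusters and 38912 blocks) follows directly from the sizes used in the test. A small sketch of that arithmetic, assuming the 4 MiB cluster size passed via --cluster-sz and the 150 MiB lvol size from bdev_lvol_create:

```python
# Cluster arithmetic behind the lvs_grow_clean checks in the log above.
# Values taken from the log: 4096-byte blocks, 4 MiB clusters,
# a 150 MiB lvol, and 99 total data clusters after growing the lvstore.
import math

BLOCK_SIZE = 4096
CLUSTER_SIZE = 4 * 1024 * 1024          # --cluster-sz 4194304
LVOL_SIZE_MIB = 150
TOTAL_DATA_CLUSTERS = 99                # lvstore total after grow to 400 MiB

# An lvol is rounded up to a whole number of clusters: 150 MiB -> 38 clusters,
# matching "num_allocated_clusters": 38 in the bdev_get_bdevs output.
allocated = math.ceil(LVOL_SIZE_MIB * 1024 * 1024 / CLUSTER_SIZE)
assert allocated == 38

# 38 clusters of 4 MiB, in 4 KiB blocks, is the lvol's "num_blocks": 38912.
num_blocks = allocated * CLUSTER_SIZE // BLOCK_SIZE
assert num_blocks == 38912

# The free_clusters check at nvmf_lvs_grow.sh line 88: 99 - 38 == 61.
assert TOTAL_DATA_CLUSTERS - allocated == 61
print(allocated, num_blocks, TOTAL_DATA_CLUSTERS - allocated)
```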
00:28:15.024   04:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty
00:28:15.024   04:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:28:15.024   04:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:15.024   04:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:28:15.024  ************************************
00:28:15.024  START TEST lvs_grow_dirty
00:28:15.024  ************************************
00:28:15.024   04:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty
00:28:15.024   04:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:28:15.024   04:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:28:15.024   04:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:28:15.024   04:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:28:15.024   04:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:28:15.024   04:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:28:15.024   04:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:28:15.024   04:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:28:15.024    04:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:28:15.282   04:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:28:15.282    04:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:28:15.541   04:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=12c45bb7-52fa-4020-b43e-d52224f12eab
00:28:15.541    04:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12c45bb7-52fa-4020-b43e-d52224f12eab
00:28:15.541    04:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:28:15.800   04:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:28:15.800   04:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
00:28:15.800    04:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 12c45bb7-52fa-4020-b43e-d52224f12eab lvol 150
00:28:16.058   04:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=82d66315-222c-4169-960c-019cc4141a5e
00:28:16.058   04:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:28:16.058   04:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:28:16.316  [2024-12-09 04:18:44.764346] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:28:16.316  [2024-12-09 04:18:44.764471] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:28:16.316  true
00:28:16.316    04:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12c45bb7-52fa-4020-b43e-d52224f12eab
00:28:16.316    04:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:28:16.574   04:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
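The bdev_aio_rescan notice above reports the old and new block counts; those follow from the two truncate sizes and the 4096-byte block size given to bdev_aio_create. A sketch of that math (note the lvstore still reports 49 data clusters, not 200/4 = 50, since it reserves part of the device for its own metadata, and the count does not change until bdev_lvol_grow_lvstore runs):

```python
# Block counts reported by bdev_aio_rescan after truncating the backing
# file from 200M to 400M, with 4096-byte logical blocks.
BLOCK_SIZE = 4096

old_blocks = 200 * 1024 * 1024 // BLOCK_SIZE
new_blocks = 400 * 1024 * 1024 // BLOCK_SIZE
assert (old_blocks, new_blocks) == (51200, 102400)

# Raw 4 MiB clusters in the 200 MiB device: 50. The log shows 49 data
# clusters, one fewer, consistent with lvstore metadata overhead.
raw_clusters = 200 * 1024 * 1024 // (4 * 1024 * 1024)
assert raw_clusters == 50
print(old_blocks, new_blocks, raw_clusters)
```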
00:28:16.574   04:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:28:16.832   04:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 82d66315-222c-4169-960c-019cc4141a5e
00:28:17.090   04:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:28:17.347  [2024-12-09 04:18:45.880727] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:17.347   04:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:28:17.605   04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=370299
00:28:17.605   04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:28:17.605   04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:28:17.605   04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 370299 /var/tmp/bdevperf.sock
00:28:17.605   04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 370299 ']'
00:28:17.605   04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:28:17.605   04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:17.605   04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:28:17.605  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:28:17.605   04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:17.605   04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:28:17.863  [2024-12-09 04:18:46.211826] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:28:17.863  [2024-12-09 04:18:46.211929] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid370299 ]
00:28:17.863  [2024-12-09 04:18:46.280875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:17.863  [2024-12-09 04:18:46.343674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:18.122   04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:18.122   04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0
00:28:18.122   04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:28:18.380  Nvme0n1
00:28:18.380   04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:28:18.639  [
00:28:18.639    {
00:28:18.639      "name": "Nvme0n1",
00:28:18.639      "aliases": [
00:28:18.639        "82d66315-222c-4169-960c-019cc4141a5e"
00:28:18.639      ],
00:28:18.639      "product_name": "NVMe disk",
00:28:18.639      "block_size": 4096,
00:28:18.639      "num_blocks": 38912,
00:28:18.639      "uuid": "82d66315-222c-4169-960c-019cc4141a5e",
00:28:18.639      "numa_id": 0,
00:28:18.639      "assigned_rate_limits": {
00:28:18.639        "rw_ios_per_sec": 0,
00:28:18.639        "rw_mbytes_per_sec": 0,
00:28:18.639        "r_mbytes_per_sec": 0,
00:28:18.639        "w_mbytes_per_sec": 0
00:28:18.639      },
00:28:18.639      "claimed": false,
00:28:18.639      "zoned": false,
00:28:18.639      "supported_io_types": {
00:28:18.639        "read": true,
00:28:18.639        "write": true,
00:28:18.639        "unmap": true,
00:28:18.639        "flush": true,
00:28:18.639        "reset": true,
00:28:18.639        "nvme_admin": true,
00:28:18.639        "nvme_io": true,
00:28:18.639        "nvme_io_md": false,
00:28:18.639        "write_zeroes": true,
00:28:18.639        "zcopy": false,
00:28:18.639        "get_zone_info": false,
00:28:18.639        "zone_management": false,
00:28:18.639        "zone_append": false,
00:28:18.639        "compare": true,
00:28:18.639        "compare_and_write": true,
00:28:18.639        "abort": true,
00:28:18.639        "seek_hole": false,
00:28:18.639        "seek_data": false,
00:28:18.639        "copy": true,
00:28:18.639        "nvme_iov_md": false
00:28:18.639      },
00:28:18.639      "memory_domains": [
00:28:18.639        {
00:28:18.639          "dma_device_id": "system",
00:28:18.639          "dma_device_type": 1
00:28:18.639        }
00:28:18.639      ],
00:28:18.639      "driver_specific": {
00:28:18.639        "nvme": [
00:28:18.639          {
00:28:18.639            "trid": {
00:28:18.639              "trtype": "TCP",
00:28:18.639              "adrfam": "IPv4",
00:28:18.639              "traddr": "10.0.0.2",
00:28:18.639              "trsvcid": "4420",
00:28:18.639              "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:28:18.639            },
00:28:18.639            "ctrlr_data": {
00:28:18.639              "cntlid": 1,
00:28:18.639              "vendor_id": "0x8086",
00:28:18.639              "model_number": "SPDK bdev Controller",
00:28:18.639              "serial_number": "SPDK0",
00:28:18.639              "firmware_revision": "25.01",
00:28:18.639              "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:28:18.639              "oacs": {
00:28:18.639                "security": 0,
00:28:18.639                "format": 0,
00:28:18.639                "firmware": 0,
00:28:18.639                "ns_manage": 0
00:28:18.639              },
00:28:18.639              "multi_ctrlr": true,
00:28:18.639              "ana_reporting": false
00:28:18.639            },
00:28:18.639            "vs": {
00:28:18.639              "nvme_version": "1.3"
00:28:18.639            },
00:28:18.639            "ns_data": {
00:28:18.639              "id": 1,
00:28:18.639              "can_share": true
00:28:18.639            }
00:28:18.639          }
00:28:18.639        ],
00:28:18.639        "mp_policy": "active_passive"
00:28:18.639      }
00:28:18.639    }
00:28:18.639  ]
00:28:18.639   04:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=370432
00:28:18.639   04:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:28:18.639   04:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:28:18.897  Running I/O for 10 seconds...
00:28:19.830                                                                                                  Latency(us)
00:28:19.830  
[2024-12-09T03:18:48.406Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:19.830  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:19.830  	 Nvme0n1             :       1.00   14859.00      58.04       0.00     0.00       0.00       0.00       0.00
00:28:19.830  
[2024-12-09T03:18:48.406Z]  ===================================================================================================================
00:28:19.830  
[2024-12-09T03:18:48.406Z]  Total                       :              14859.00      58.04       0.00     0.00       0.00       0.00       0.00
00:28:19.830  
00:28:20.763   04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 12c45bb7-52fa-4020-b43e-d52224f12eab
00:28:20.763  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:20.763  	 Nvme0n1             :       2.00   14986.00      58.54       0.00     0.00       0.00       0.00       0.00
00:28:20.763  
[2024-12-09T03:18:49.339Z]  ===================================================================================================================
00:28:20.763  
[2024-12-09T03:18:49.339Z]  Total                       :              14986.00      58.54       0.00     0.00       0.00       0.00       0.00
00:28:20.763  
00:28:21.019  true
00:28:21.019    04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12c45bb7-52fa-4020-b43e-d52224f12eab
00:28:21.019    04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:28:21.277   04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:28:21.277   04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:28:21.277   04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 370432
00:28:21.840  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:21.840  	 Nvme0n1             :       3.00   15007.33      58.62       0.00     0.00       0.00       0.00       0.00
00:28:21.840  
[2024-12-09T03:18:50.417Z]  ===================================================================================================================
00:28:21.841  
[2024-12-09T03:18:50.417Z]  Total                       :              15007.33      58.62       0.00     0.00       0.00       0.00       0.00
00:28:21.841  
00:28:22.772  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:22.772  	 Nvme0n1             :       4.00   15081.25      58.91       0.00     0.00       0.00       0.00       0.00
00:28:22.772  
[2024-12-09T03:18:51.348Z]  ===================================================================================================================
00:28:22.772  
[2024-12-09T03:18:51.348Z]  Total                       :              15081.25      58.91       0.00     0.00       0.00       0.00       0.00
00:28:22.772  
00:28:24.147  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:24.147  	 Nvme0n1             :       5.00   15163.80      59.23       0.00     0.00       0.00       0.00       0.00
00:28:24.147  
[2024-12-09T03:18:52.723Z]  ===================================================================================================================
00:28:24.147  
[2024-12-09T03:18:52.723Z]  Total                       :              15163.80      59.23       0.00     0.00       0.00       0.00       0.00
00:28:24.147  
00:28:25.083  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:25.083  	 Nvme0n1             :       6.00   15240.00      59.53       0.00     0.00       0.00       0.00       0.00
00:28:25.083  
[2024-12-09T03:18:53.659Z]  ===================================================================================================================
00:28:25.083  
[2024-12-09T03:18:53.659Z]  Total                       :              15240.00      59.53       0.00     0.00       0.00       0.00       0.00
00:28:25.083  
00:28:26.040  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:26.040  	 Nvme0n1             :       7.00   15285.43      59.71       0.00     0.00       0.00       0.00       0.00
00:28:26.040  
[2024-12-09T03:18:54.616Z]  ===================================================================================================================
00:28:26.040  
[2024-12-09T03:18:54.616Z]  Total                       :              15285.43      59.71       0.00     0.00       0.00       0.00       0.00
00:28:26.040  
00:28:26.981  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:26.981  	 Nvme0n1             :       8.00   15335.25      59.90       0.00     0.00       0.00       0.00       0.00
00:28:26.981  
[2024-12-09T03:18:55.557Z]  ===================================================================================================================
00:28:26.981  
[2024-12-09T03:18:55.557Z]  Total                       :              15335.25      59.90       0.00     0.00       0.00       0.00       0.00
00:28:26.981  
00:28:27.914  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:27.914  	 Nvme0n1             :       9.00   15356.67      59.99       0.00     0.00       0.00       0.00       0.00
00:28:27.914  
[2024-12-09T03:18:56.490Z]  ===================================================================================================================
00:28:27.914  
[2024-12-09T03:18:56.490Z]  Total                       :              15356.67      59.99       0.00     0.00       0.00       0.00       0.00
00:28:27.914  
00:28:28.844  
00:28:28.844                                                                                                  Latency(us)
00:28:28.844  
[2024-12-09T03:18:57.420Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:28.844  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:28.844  	 Nvme0n1             :      10.00   15389.67      60.12       0.00     0.00    8312.23    4320.52   18252.99
00:28:28.844  
[2024-12-09T03:18:57.420Z]  ===================================================================================================================
00:28:28.844  
[2024-12-09T03:18:57.420Z]  Total                       :              15389.67      60.12       0.00     0.00    8312.23    4320.52   18252.99
00:28:28.844  {
00:28:28.844    "results": [
00:28:28.844      {
00:28:28.844        "job": "Nvme0n1",
00:28:28.844        "core_mask": "0x2",
00:28:28.844        "workload": "randwrite",
00:28:28.844        "status": "finished",
00:28:28.844        "queue_depth": 128,
00:28:28.844        "io_size": 4096,
00:28:28.844        "runtime": 10.00405,
00:28:28.844        "iops": 15389.66718479016,
00:28:28.844        "mibps": 60.11588744058656,
00:28:28.844        "io_failed": 0,
00:28:28.844        "io_timeout": 0,
00:28:28.844        "avg_latency_us": 8312.225158088024,
00:28:28.844        "min_latency_us": 4320.521481481482,
00:28:28.844        "max_latency_us": 18252.98962962963
00:28:28.844      }
00:28:28.844    ],
00:28:28.844    "core_count": 1
00:28:28.844  }
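The "mibps" figure in the JSON summary above is derived from "iops" and the 4 KiB I/O size (-o 4096). As a cross-check of the reported numbers:

```python
# Cross-check of the bdevperf summary: MiB/s = IOPS * io_size / 2**20.
IO_SIZE = 4096
iops = 15389.66718479016            # "iops" from the JSON results above

mibps = iops * IO_SIZE / (1024 * 1024)
assert abs(mibps - 60.11588744058656) < 1e-9  # matches "mibps" in the results
print(round(mibps, 2))
```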
00:28:28.844   04:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 370299
00:28:28.844   04:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 370299 ']'
00:28:28.844   04:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 370299
00:28:28.844    04:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname
00:28:28.844   04:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:28.844    04:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 370299
00:28:28.844   04:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:28.844   04:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:28.844   04:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 370299'
00:28:28.844  killing process with pid 370299
00:28:28.845   04:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 370299
00:28:28.845  Received shutdown signal, test time was about 10.000000 seconds
00:28:28.845  
00:28:28.845                                                                                                  Latency(us)
00:28:28.845  
[2024-12-09T03:18:57.421Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:28.845  
[2024-12-09T03:18:57.421Z]  ===================================================================================================================
00:28:28.845  
[2024-12-09T03:18:57.421Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:28:28.845   04:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 370299
00:28:29.101   04:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:28:29.359   04:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:28:29.617    04:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12c45bb7-52fa-4020-b43e-d52224f12eab
00:28:29.617    04:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:28:29.876   04:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:28:29.876   04:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]]
00:28:29.876   04:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 367727
00:28:29.876   04:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 367727
00:28:29.876  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 367727 Killed                  "${NVMF_APP[@]}" "$@"
00:28:29.876   04:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true
00:28:29.876   04:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1
00:28:29.876   04:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:29.876   04:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:29.876   04:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:28:29.876   04:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=371749
00:28:29.876   04:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1
00:28:29.876   04:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 371749
00:28:29.876   04:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 371749 ']'
00:28:29.876   04:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:29.876   04:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:29.876   04:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:29.876  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:29.876   04:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:29.876   04:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:28:30.135  [2024-12-09 04:18:58.482225] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:28:30.135  [2024-12-09 04:18:58.483406] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:28:30.135  [2024-12-09 04:18:58.483489] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:30.135  [2024-12-09 04:18:58.556923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:30.135  [2024-12-09 04:18:58.615169] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:30.135  [2024-12-09 04:18:58.615244] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:30.135  [2024-12-09 04:18:58.615278] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:30.135  [2024-12-09 04:18:58.615293] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:30.135  [2024-12-09 04:18:58.615303] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:30.135  [2024-12-09 04:18:58.615930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:30.394  [2024-12-09 04:18:58.712876] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:28:30.394  [2024-12-09 04:18:58.713186] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:28:30.394   04:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:30.394   04:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0
00:28:30.394   04:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:30.394   04:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:30.394   04:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:28:30.394   04:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:30.394    04:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:28:30.652  [2024-12-09 04:18:59.026780] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore
00:28:30.652  [2024-12-09 04:18:59.026908] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
00:28:30.652  [2024-12-09 04:18:59.026959] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
00:28:30.652   04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev
00:28:30.652   04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 82d66315-222c-4169-960c-019cc4141a5e
00:28:30.652   04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=82d66315-222c-4169-960c-019cc4141a5e
00:28:30.653   04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:28:30.653   04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i
00:28:30.653   04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:28:30.653   04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:28:30.653   04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:28:30.911   04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 82d66315-222c-4169-960c-019cc4141a5e -t 2000
00:28:31.169  [
00:28:31.169    {
00:28:31.169      "name": "82d66315-222c-4169-960c-019cc4141a5e",
00:28:31.169      "aliases": [
00:28:31.169        "lvs/lvol"
00:28:31.169      ],
00:28:31.169      "product_name": "Logical Volume",
00:28:31.169      "block_size": 4096,
00:28:31.169      "num_blocks": 38912,
00:28:31.169      "uuid": "82d66315-222c-4169-960c-019cc4141a5e",
00:28:31.169      "assigned_rate_limits": {
00:28:31.169        "rw_ios_per_sec": 0,
00:28:31.169        "rw_mbytes_per_sec": 0,
00:28:31.169        "r_mbytes_per_sec": 0,
00:28:31.169        "w_mbytes_per_sec": 0
00:28:31.169      },
00:28:31.169      "claimed": false,
00:28:31.169      "zoned": false,
00:28:31.169      "supported_io_types": {
00:28:31.169        "read": true,
00:28:31.169        "write": true,
00:28:31.169        "unmap": true,
00:28:31.169        "flush": false,
00:28:31.169        "reset": true,
00:28:31.169        "nvme_admin": false,
00:28:31.169        "nvme_io": false,
00:28:31.169        "nvme_io_md": false,
00:28:31.169        "write_zeroes": true,
00:28:31.169        "zcopy": false,
00:28:31.169        "get_zone_info": false,
00:28:31.169        "zone_management": false,
00:28:31.169        "zone_append": false,
00:28:31.169        "compare": false,
00:28:31.169        "compare_and_write": false,
00:28:31.169        "abort": false,
00:28:31.169        "seek_hole": true,
00:28:31.169        "seek_data": true,
00:28:31.169        "copy": false,
00:28:31.169        "nvme_iov_md": false
00:28:31.169      },
00:28:31.169      "driver_specific": {
00:28:31.169        "lvol": {
00:28:31.169          "lvol_store_uuid": "12c45bb7-52fa-4020-b43e-d52224f12eab",
00:28:31.169          "base_bdev": "aio_bdev",
00:28:31.169          "thin_provision": false,
00:28:31.169          "num_allocated_clusters": 38,
00:28:31.169          "snapshot": false,
00:28:31.169          "clone": false,
00:28:31.169          "esnap_clone": false
00:28:31.169        }
00:28:31.169      }
00:28:31.169    }
00:28:31.169  ]
00:28:31.169   04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0
00:28:31.169    04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12c45bb7-52fa-4020-b43e-d52224f12eab
00:28:31.169    04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters'
00:28:31.428   04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 ))
00:28:31.428    04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12c45bb7-52fa-4020-b43e-d52224f12eab
00:28:31.428    04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters'
00:28:31.685   04:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 ))
00:28:31.685   04:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:28:31.942  [2024-12-09 04:19:00.416516] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:28:31.943   04:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12c45bb7-52fa-4020-b43e-d52224f12eab
00:28:31.943   04:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0
00:28:31.943   04:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12c45bb7-52fa-4020-b43e-d52224f12eab
00:28:31.943   04:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:28:31.943   04:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:31.943    04:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:28:31.943   04:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:31.943    04:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:28:31.943   04:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:31.943   04:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:28:31.943   04:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:28:31.943   04:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12c45bb7-52fa-4020-b43e-d52224f12eab
00:28:32.199  request:
00:28:32.199  {
00:28:32.199    "uuid": "12c45bb7-52fa-4020-b43e-d52224f12eab",
00:28:32.199    "method": "bdev_lvol_get_lvstores",
00:28:32.199    "req_id": 1
00:28:32.199  }
00:28:32.199  Got JSON-RPC error response
00:28:32.199  response:
00:28:32.199  {
00:28:32.199    "code": -19,
00:28:32.199    "message": "No such device"
00:28:32.199  }
00:28:32.199   04:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1
00:28:32.199   04:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:28:32.199   04:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:28:32.200   04:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 ))
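The `NOT` wrapper above expects `bdev_lvol_get_lvstores` to fail once the base AIO bdev has been deleted, and the JSON-RPC error envelope it receives can be checked like this (a sketch; the error payload is copied from the response in the log):

```python
import json

# Error object from the JSON-RPC response shown above
response = json.loads('{"code": -19, "message": "No such device"}')

# -19 is -ENODEV: the lvstore disappeared when its base bdev was removed,
# which is exactly the failure the NOT helper is asserting on (es=1).
assert response["code"] == -19
print(response["message"])
```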
00:28:32.200   04:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:28:32.457  aio_bdev
00:28:32.457   04:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 82d66315-222c-4169-960c-019cc4141a5e
00:28:32.457   04:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=82d66315-222c-4169-960c-019cc4141a5e
00:28:32.457   04:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:28:32.457   04:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i
00:28:32.457   04:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:28:32.457   04:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:28:32.457   04:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:28:33.022   04:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 82d66315-222c-4169-960c-019cc4141a5e -t 2000
00:28:33.022  [
00:28:33.022    {
00:28:33.022      "name": "82d66315-222c-4169-960c-019cc4141a5e",
00:28:33.022      "aliases": [
00:28:33.022        "lvs/lvol"
00:28:33.022      ],
00:28:33.022      "product_name": "Logical Volume",
00:28:33.022      "block_size": 4096,
00:28:33.022      "num_blocks": 38912,
00:28:33.022      "uuid": "82d66315-222c-4169-960c-019cc4141a5e",
00:28:33.022      "assigned_rate_limits": {
00:28:33.022        "rw_ios_per_sec": 0,
00:28:33.022        "rw_mbytes_per_sec": 0,
00:28:33.022        "r_mbytes_per_sec": 0,
00:28:33.022        "w_mbytes_per_sec": 0
00:28:33.022      },
00:28:33.022      "claimed": false,
00:28:33.022      "zoned": false,
00:28:33.022      "supported_io_types": {
00:28:33.022        "read": true,
00:28:33.022        "write": true,
00:28:33.022        "unmap": true,
00:28:33.022        "flush": false,
00:28:33.022        "reset": true,
00:28:33.022        "nvme_admin": false,
00:28:33.022        "nvme_io": false,
00:28:33.022        "nvme_io_md": false,
00:28:33.022        "write_zeroes": true,
00:28:33.022        "zcopy": false,
00:28:33.022        "get_zone_info": false,
00:28:33.022        "zone_management": false,
00:28:33.022        "zone_append": false,
00:28:33.022        "compare": false,
00:28:33.022        "compare_and_write": false,
00:28:33.022        "abort": false,
00:28:33.022        "seek_hole": true,
00:28:33.022        "seek_data": true,
00:28:33.022        "copy": false,
00:28:33.022        "nvme_iov_md": false
00:28:33.022      },
00:28:33.022      "driver_specific": {
00:28:33.022        "lvol": {
00:28:33.022          "lvol_store_uuid": "12c45bb7-52fa-4020-b43e-d52224f12eab",
00:28:33.022          "base_bdev": "aio_bdev",
00:28:33.022          "thin_provision": false,
00:28:33.022          "num_allocated_clusters": 38,
00:28:33.022          "snapshot": false,
00:28:33.022          "clone": false,
00:28:33.022          "esnap_clone": false
00:28:33.022        }
00:28:33.022      }
00:28:33.022    }
00:28:33.022  ]
00:28:33.022   04:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0
00:28:33.022    04:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12c45bb7-52fa-4020-b43e-d52224f12eab
00:28:33.022    04:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:28:33.278   04:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 ))
00:28:33.535    04:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12c45bb7-52fa-4020-b43e-d52224f12eab
00:28:33.535    04:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters'
00:28:33.792   04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 ))
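The cluster checks above pipe `bdev_lvol_get_lvstores` through `jq -r '.[0].free_clusters'` and `'.[0].total_data_clusters'` and compare against the values left behind by the dirty-shutdown recovery. A minimal sketch of the same check, with a hypothetical lvstore record shaped like the RPC output (the expected counts, 61 free of 99 total, come from the `(( ... ))` tests in the log):

```python
import json

# Stand-in for the bdev_lvol_get_lvstores response; field names match the RPC,
# the record itself is illustrative.
lvstores = json.loads(
    '[{"uuid": "12c45bb7-52fa-4020-b43e-d52224f12eab",'
    ' "free_clusters": 61, "total_data_clusters": 99}]'
)

free_clusters = lvstores[0]["free_clusters"]
data_clusters = lvstores[0]["total_data_clusters"]

# Mirrors: (( free_clusters == 61 )) && (( data_clusters == 99 ))
assert free_clusters == 61 and data_clusters == 99
print("cluster counts match")
```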
00:28:33.792   04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 82d66315-222c-4169-960c-019cc4141a5e
00:28:34.049   04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 12c45bb7-52fa-4020-b43e-d52224f12eab
00:28:34.306   04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:28:34.564   04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:28:34.564  
00:28:34.564  real	0m19.582s
00:28:34.564  user	0m36.409s
00:28:34.564  sys	0m4.954s
00:28:34.564   04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:34.564   04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:28:34.564  ************************************
00:28:34.564  END TEST lvs_grow_dirty
00:28:34.564  ************************************
00:28:34.564   04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0
00:28:34.564   04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id
00:28:34.564   04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0
00:28:34.564   04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']'
00:28:34.564    04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:28:34.564   04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0
00:28:34.564   04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]]
00:28:34.564   04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files
00:28:34.564   04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:28:34.564  nvmf_trace.0
00:28:34.564   04:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0
00:28:34.564   04:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini
00:28:34.564   04:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:34.564   04:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync
00:28:34.564   04:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:34.564   04:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e
00:28:34.564   04:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:34.564   04:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:34.564  rmmod nvme_tcp
00:28:34.564  rmmod nvme_fabrics
00:28:34.564  rmmod nvme_keyring
00:28:34.564   04:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:34.564   04:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e
00:28:34.564   04:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0
00:28:34.564   04:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 371749 ']'
00:28:34.564   04:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 371749
00:28:34.564   04:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 371749 ']'
00:28:34.564   04:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 371749
00:28:34.564    04:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname
00:28:34.564   04:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:34.564    04:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 371749
00:28:34.564   04:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:28:34.564   04:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:28:34.564   04:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 371749'
00:28:34.564  killing process with pid 371749
00:28:34.564   04:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 371749
00:28:34.564   04:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 371749
00:28:34.822   04:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:28:34.822   04:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:28:34.822   04:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:28:34.822   04:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr
00:28:34.822   04:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save
00:28:34.822   04:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:28:34.822   04:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore
00:28:34.822   04:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:28:34.822   04:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns
00:28:34.822   04:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:34.822   04:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:34.822    04:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:37.363   04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:28:37.363  
00:28:37.363  real	0m43.111s
00:28:37.363  user	0m54.728s
00:28:37.363  sys	0m9.301s
00:28:37.363   04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:37.363   04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:28:37.363  ************************************
00:28:37.363  END TEST nvmf_lvs_grow
00:28:37.363  ************************************
00:28:37.363   04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode
00:28:37.363   04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:28:37.363   04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:37.363   04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:28:37.363  ************************************
00:28:37.363  START TEST nvmf_bdev_io_wait
00:28:37.363  ************************************
00:28:37.363   04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode
00:28:37.363  * Looking for test storage...
00:28:37.363  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:28:37.363    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:28:37.363     04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version
00:28:37.363     04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:28:37.363    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:28:37.363    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:28:37.363    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l
00:28:37.363    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l
00:28:37.363    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-:
00:28:37.363    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1
00:28:37.363    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-:
00:28:37.363    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2
00:28:37.363    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<'
00:28:37.363    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2
00:28:37.363    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1
00:28:37.363    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:28:37.363    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in
00:28:37.363    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1
00:28:37.363    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 ))
00:28:37.363    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:28:37.363     04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1
00:28:37.363     04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1
00:28:37.363     04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:28:37.363     04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1
00:28:37.363    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1
00:28:37.363     04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2
00:28:37.363     04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2
00:28:37.363     04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:28:37.363     04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2
00:28:37.363    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2
00:28:37.363    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:28:37.363    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:28:37.363    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0
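The `lt 1.15 2` trace above shows `scripts/common.sh` splitting each version on `.-:` and comparing components numerically, left to right, padding the shorter side. A rough Python equivalent of that comparison logic (illustrative only, not the actual shell helper):

```python
import re

def lt(v1: str, v2: str) -> bool:
    """Component-wise numeric compare, roughly cmp_versions '<' from scripts/common.sh."""
    a = [int(x) for x in re.split(r"[.\-:]", v1)]
    b = [int(x) for x in re.split(r"[.\-:]", v2)]
    # Pad the shorter list with zeros so "1.15" vs "2" compares [1,15] vs [2,0]
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    for x, y in zip(a, b):
        if x != y:
            return x < y
    return False  # equal versions are not strictly less

print(lt("1.15", "2"))  # True: 1 < 2 decides on the first component
```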
00:28:37.363    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:28:37.363    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:28:37.363  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:37.363  		--rc genhtml_branch_coverage=1
00:28:37.363  		--rc genhtml_function_coverage=1
00:28:37.363  		--rc genhtml_legend=1
00:28:37.363  		--rc geninfo_all_blocks=1
00:28:37.363  		--rc geninfo_unexecuted_blocks=1
00:28:37.363  		
00:28:37.363  		'
00:28:37.364    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:28:37.364  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:37.364  		--rc genhtml_branch_coverage=1
00:28:37.364  		--rc genhtml_function_coverage=1
00:28:37.364  		--rc genhtml_legend=1
00:28:37.364  		--rc geninfo_all_blocks=1
00:28:37.364  		--rc geninfo_unexecuted_blocks=1
00:28:37.364  		
00:28:37.364  		'
00:28:37.364    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:28:37.364  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:37.364  		--rc genhtml_branch_coverage=1
00:28:37.364  		--rc genhtml_function_coverage=1
00:28:37.364  		--rc genhtml_legend=1
00:28:37.364  		--rc geninfo_all_blocks=1
00:28:37.364  		--rc geninfo_unexecuted_blocks=1
00:28:37.364  		
00:28:37.364  		'
00:28:37.364    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:28:37.364  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:37.364  		--rc genhtml_branch_coverage=1
00:28:37.364  		--rc genhtml_function_coverage=1
00:28:37.364  		--rc genhtml_legend=1
00:28:37.364  		--rc geninfo_all_blocks=1
00:28:37.364  		--rc geninfo_unexecuted_blocks=1
00:28:37.364  		
00:28:37.364  		'
00:28:37.364   04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:28:37.364     04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s
00:28:37.364    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:28:37.364    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:28:37.364    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:28:37.364    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:28:37.364    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:28:37.364    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:28:37.364    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:28:37.364    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:28:37.364    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:28:37.364     04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:28:37.364    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:28:37.364    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:28:37.364    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:28:37.364    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:28:37.364    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:28:37.364    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:28:37.364    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:28:37.364     04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob
00:28:37.364     04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:28:37.364     04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:37.364     04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:37.364      04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:37.364      04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:37.364      04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:37.364      04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH
00:28:37.364      04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:37.364    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0
00:28:37.364    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:28:37.364    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:28:37.364    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:28:37.364    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:28:37.364    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:28:37.364    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:28:37.364    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:28:37.364    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:28:37.364    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:28:37.364    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0
00:28:37.364   04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64
00:28:37.364   04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:28:37.364   04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit
00:28:37.364   04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:28:37.364   04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:28:37.364   04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs
00:28:37.364   04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no
00:28:37.364   04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns
00:28:37.364   04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:37.364   04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:37.364    04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:37.364   04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:28:37.364   04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:28:37.364   04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable
00:28:37.364   04:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:28:39.267   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:28:39.267   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=()
00:28:39.267   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs
00:28:39.267   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=()
00:28:39.267   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:28:39.267   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=()
00:28:39.267   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers
00:28:39.267   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=()
00:28:39.267   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs
00:28:39.267   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=()
00:28:39.267   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810
00:28:39.267   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=()
00:28:39.267   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722
00:28:39.267   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=()
00:28:39.267   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx
00:28:39.267   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:28:39.268  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:28:39.268  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]]
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:28:39.268  Found net devices under 0000:0a:00.0: cvl_0_0
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]]
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:28:39.268  Found net devices under 0000:0a:00.1: cvl_0_1
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:28:39.268  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:39.268  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.398 ms
00:28:39.268  
00:28:39.268  --- 10.0.0.2 ping statistics ---
00:28:39.268  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:39.268  rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:28:39.268  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:39.268  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms
00:28:39.268  
00:28:39.268  --- 10.0.0.1 ping statistics ---
00:28:39.268  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:39.268  rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:28:39.268   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:28:39.269   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:39.269   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:28:39.269   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:28:39.269   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc
00:28:39.269   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:39.269   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:39.269   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:28:39.269   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=374278
00:28:39.269   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc
00:28:39.269   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 374278
00:28:39.269   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 374278 ']'
00:28:39.269   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:39.269   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:39.269   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:39.269  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:39.269   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:39.269   04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:28:39.527  [2024-12-09 04:19:07.880379] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:28:39.527  [2024-12-09 04:19:07.881507] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:28:39.527  [2024-12-09 04:19:07.881562] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:39.527  [2024-12-09 04:19:07.953406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:28:39.527  [2024-12-09 04:19:08.013009] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:39.527  [2024-12-09 04:19:08.013077] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:39.527  [2024-12-09 04:19:08.013101] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:39.527  [2024-12-09 04:19:08.013111] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:39.527  [2024-12-09 04:19:08.013121] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:39.527  [2024-12-09 04:19:08.014726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:39.527  [2024-12-09 04:19:08.014785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:39.527  [2024-12-09 04:19:08.014782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:28:39.527  [2024-12-09 04:19:08.014756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:28:39.527  [2024-12-09 04:19:08.015306] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:28:39.786  [2024-12-09 04:19:08.214061] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:28:39.786  [2024-12-09 04:19:08.214293] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:28:39.786  [2024-12-09 04:19:08.215198] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:28:39.786  [2024-12-09 04:19:08.216036] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:28:39.786  [2024-12-09 04:19:08.223547] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:28:39.786  Malloc0
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:28:39.786  [2024-12-09 04:19:08.279705] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=374405
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=374408
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256
00:28:39.786    04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json
00:28:39.786    04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=()
00:28:39.786    04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config
00:28:39.786    04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256
00:28:39.786    04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json
00:28:39.786    04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:28:39.786  {
00:28:39.786    "params": {
00:28:39.786      "name": "Nvme$subsystem",
00:28:39.786      "trtype": "$TEST_TRANSPORT",
00:28:39.786      "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:39.786      "adrfam": "ipv4",
00:28:39.786      "trsvcid": "$NVMF_PORT",
00:28:39.786      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:39.786      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:39.786      "hdgst": ${hdgst:-false},
00:28:39.786      "ddgst": ${ddgst:-false}
00:28:39.786    },
00:28:39.786    "method": "bdev_nvme_attach_controller"
00:28:39.786  }
00:28:39.786  EOF
00:28:39.786  )")
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=374410
00:28:39.786    04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=()
00:28:39.786    04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config
00:28:39.786    04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:28:39.786    04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:28:39.786  {
00:28:39.786    "params": {
00:28:39.786      "name": "Nvme$subsystem",
00:28:39.786      "trtype": "$TEST_TRANSPORT",
00:28:39.786      "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:39.786      "adrfam": "ipv4",
00:28:39.786      "trsvcid": "$NVMF_PORT",
00:28:39.786      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:39.786      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:39.786      "hdgst": ${hdgst:-false},
00:28:39.786      "ddgst": ${ddgst:-false}
00:28:39.786    },
00:28:39.786    "method": "bdev_nvme_attach_controller"
00:28:39.786  }
00:28:39.786  EOF
00:28:39.786  )")
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256
00:28:39.786    04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=374413
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync
00:28:39.786    04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=()
00:28:39.786     04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat
00:28:39.786    04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config
00:28:39.786    04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:28:39.786    04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:28:39.786  {
00:28:39.786    "params": {
00:28:39.786      "name": "Nvme$subsystem",
00:28:39.786      "trtype": "$TEST_TRANSPORT",
00:28:39.786      "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:39.786      "adrfam": "ipv4",
00:28:39.786      "trsvcid": "$NVMF_PORT",
00:28:39.786      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:39.786      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:39.786      "hdgst": ${hdgst:-false},
00:28:39.786      "ddgst": ${ddgst:-false}
00:28:39.786    },
00:28:39.786    "method": "bdev_nvme_attach_controller"
00:28:39.786  }
00:28:39.786  EOF
00:28:39.786  )")
00:28:39.786   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256
00:28:39.786    04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json
00:28:39.786     04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat
00:28:39.787    04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=()
00:28:39.787    04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config
00:28:39.787    04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:28:39.787    04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:28:39.787  {
00:28:39.787    "params": {
00:28:39.787      "name": "Nvme$subsystem",
00:28:39.787      "trtype": "$TEST_TRANSPORT",
00:28:39.787      "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:39.787      "adrfam": "ipv4",
00:28:39.787      "trsvcid": "$NVMF_PORT",
00:28:39.787      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:39.787      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:39.787      "hdgst": ${hdgst:-false},
00:28:39.787      "ddgst": ${ddgst:-false}
00:28:39.787    },
00:28:39.787    "method": "bdev_nvme_attach_controller"
00:28:39.787  }
00:28:39.787  EOF
00:28:39.787  )")
00:28:39.787     04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat
00:28:39.787   04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 374405
00:28:39.787     04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat
00:28:39.787    04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq .
00:28:39.787    04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq .
00:28:39.787    04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq .
00:28:39.787     04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=,
00:28:39.787     04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:28:39.787    "params": {
00:28:39.787      "name": "Nvme1",
00:28:39.787      "trtype": "tcp",
00:28:39.787      "traddr": "10.0.0.2",
00:28:39.787      "adrfam": "ipv4",
00:28:39.787      "trsvcid": "4420",
00:28:39.787      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:28:39.787      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:28:39.787      "hdgst": false,
00:28:39.787      "ddgst": false
00:28:39.787    },
00:28:39.787    "method": "bdev_nvme_attach_controller"
00:28:39.787  }'
00:28:39.787     04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=,
00:28:39.787     04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:28:39.787    "params": {
00:28:39.787      "name": "Nvme1",
00:28:39.787      "trtype": "tcp",
00:28:39.787      "traddr": "10.0.0.2",
00:28:39.787      "adrfam": "ipv4",
00:28:39.787      "trsvcid": "4420",
00:28:39.787      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:28:39.787      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:28:39.787      "hdgst": false,
00:28:39.787      "ddgst": false
00:28:39.787    },
00:28:39.787    "method": "bdev_nvme_attach_controller"
00:28:39.787  }'
00:28:39.787    04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq .
00:28:39.787     04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=,
00:28:39.787     04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:28:39.787    "params": {
00:28:39.787      "name": "Nvme1",
00:28:39.787      "trtype": "tcp",
00:28:39.787      "traddr": "10.0.0.2",
00:28:39.787      "adrfam": "ipv4",
00:28:39.787      "trsvcid": "4420",
00:28:39.787      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:28:39.787      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:28:39.787      "hdgst": false,
00:28:39.787      "ddgst": false
00:28:39.787    },
00:28:39.787    "method": "bdev_nvme_attach_controller"
00:28:39.787  }'
00:28:39.787     04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=,
00:28:39.787     04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:28:39.787    "params": {
00:28:39.787      "name": "Nvme1",
00:28:39.787      "trtype": "tcp",
00:28:39.787      "traddr": "10.0.0.2",
00:28:39.787      "adrfam": "ipv4",
00:28:39.787      "trsvcid": "4420",
00:28:39.787      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:28:39.787      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:28:39.787      "hdgst": false,
00:28:39.787      "ddgst": false
00:28:39.787    },
00:28:39.787    "method": "bdev_nvme_attach_controller"
00:28:39.787  }'
00:28:39.787  [2024-12-09 04:19:08.330628] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:28:39.787  [2024-12-09 04:19:08.330625] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:28:39.787  [2024-12-09 04:19:08.330678] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:28:39.787  [2024-12-09 04:19:08.330678] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:28:39.787  [2024-12-09 04:19:08.330705] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:28:39.787  [2024-12-09 04:19:08.330705] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:28:39.787  [2024-12-09 04:19:08.330753] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:28:39.787  [2024-12-09 04:19:08.330757] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:28:40.045  [2024-12-09 04:19:08.512892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:40.045  [2024-12-09 04:19:08.566312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:28:40.045  [2024-12-09 04:19:08.613640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:40.303  [2024-12-09 04:19:08.670321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:28:40.303  [2024-12-09 04:19:08.687186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:40.303  [2024-12-09 04:19:08.736340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:28:40.303  [2024-12-09 04:19:08.760686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:40.303  [2024-12-09 04:19:08.809830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:28:40.561  Running I/O for 1 seconds...
00:28:40.561  Running I/O for 1 seconds...
00:28:40.561  Running I/O for 1 seconds...
00:28:40.561  Running I/O for 1 seconds...
00:28:41.497      10439.00 IOPS,    40.78 MiB/s
00:28:41.497                                                                                                  Latency(us)
00:28:41.497  
[2024-12-09T03:19:10.073Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:41.497  Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:28:41.497  	 Nvme1n1             :       1.01   10483.17      40.95       0.00     0.00   12159.16    4271.98   13786.83
00:28:41.497  
[2024-12-09T03:19:10.073Z]  ===================================================================================================================
00:28:41.497  
[2024-12-09T03:19:10.073Z]  Total                       :              10483.17      40.95       0.00     0.00   12159.16    4271.98   13786.83
00:28:41.497       8630.00 IOPS,    33.71 MiB/s
[2024-12-09T03:19:10.073Z]      9198.00 IOPS,    35.93 MiB/s
00:28:41.497                                                                                                  Latency(us)
00:28:41.497  
[2024-12-09T03:19:10.073Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:41.497  Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:28:41.497  	 Nvme1n1             :       1.01    8701.32      33.99       0.00     0.00   14646.16    2002.49   20486.07
00:28:41.497  
[2024-12-09T03:19:10.073Z]  ===================================================================================================================
00:28:41.497  
[2024-12-09T03:19:10.073Z]  Total                       :               8701.32      33.99       0.00     0.00   14646.16    2002.49   20486.07
00:28:41.497  
00:28:41.497                                                                                                  Latency(us)
00:28:41.497  
[2024-12-09T03:19:10.073Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:41.497  Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:28:41.497  	 Nvme1n1             :       1.01    9277.19      36.24       0.00     0.00   13751.38    2451.53   19126.80
00:28:41.497  
[2024-12-09T03:19:10.073Z]  ===================================================================================================================
00:28:41.497  
[2024-12-09T03:19:10.073Z]  Total                       :               9277.19      36.24       0.00     0.00   13751.38    2451.53   19126.80
00:28:41.497     142816.00 IOPS,   557.88 MiB/s
00:28:41.497                                                                                                  Latency(us)
00:28:41.497  
[2024-12-09T03:19:10.073Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:41.497  Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:28:41.497  	 Nvme1n1             :       1.00  142548.31     556.83       0.00     0.00     893.03     292.79    1856.85
00:28:41.497  
[2024-12-09T03:19:10.073Z]  ===================================================================================================================
00:28:41.497  
[2024-12-09T03:19:10.073Z]  Total                       :             142548.31     556.83       0.00     0.00     893.03     292.79    1856.85
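The MiB/s column in each table above follows directly from the 4096-byte I/O size every job uses: MiB/s = IOPS × 4096 / 2^20, i.e. IOPS / 256. A small shell check of that arithmetic, with IOPS values copied from the read and flush rows:

```shell
#!/usr/bin/env bash
# Convert an IOPS figure to MiB/s for 4096-byte I/Os (4096 / 2^20 = 1/256).
iops_to_mibs() {
    awk -v iops="$1" 'BEGIN { printf "%.2f\n", iops * 4096 / 1048576 }'
}

iops_to_mibs 10483.17    # read row:  expect ~40.95 MiB/s
iops_to_mibs 142548.31   # flush row: expect ~556.83 MiB/s
```

Flush completes no data transfer, which is why its IOPS dwarf the read/write/unmap jobs; the MiB/s figure there is nominal throughput at the configured I/O size, not bytes moved.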
00:28:41.755   04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 374408
00:28:41.755   04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 374410
00:28:41.755   04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 374413
00:28:41.755   04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:41.755   04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:41.755   04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:28:41.755   04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:41.755   04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:28:41.755   04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:28:41.755   04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:41.755   04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync
00:28:41.755   04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:41.755   04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e
00:28:41.755   04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:41.755   04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:41.755  rmmod nvme_tcp
00:28:41.755  rmmod nvme_fabrics
00:28:41.755  rmmod nvme_keyring
00:28:41.755   04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:41.755   04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e
00:28:41.755   04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0
00:28:41.755   04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 374278 ']'
00:28:41.755   04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 374278
00:28:41.755   04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 374278 ']'
00:28:41.755   04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 374278
00:28:41.755    04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname
00:28:41.755   04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:41.755    04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 374278
00:28:41.755   04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:28:41.755   04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:28:41.755   04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 374278'
00:28:41.755  killing process with pid 374278
00:28:41.755   04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 374278
00:28:41.755   04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 374278
00:28:42.014   04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:28:42.014   04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:28:42.014   04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:28:42.014   04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr
00:28:42.014   04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save
00:28:42.014   04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore
00:28:42.014   04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:28:42.014   04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:28:42.014   04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns
00:28:42.014   04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:42.014   04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:42.014    04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:44.542   04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:28:44.542  
00:28:44.542  real	0m7.097s
00:28:44.542  user	0m13.659s
00:28:44.542  sys	0m4.032s
00:28:44.542   04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:44.542   04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:28:44.542  ************************************
00:28:44.542  END TEST nvmf_bdev_io_wait
00:28:44.542  ************************************
00:28:44.543   04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode
00:28:44.543   04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:28:44.543   04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:44.543   04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:28:44.543  ************************************
00:28:44.543  START TEST nvmf_queue_depth
00:28:44.543  ************************************
00:28:44.543   04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode
00:28:44.543  * Looking for test storage...
00:28:44.543  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:28:44.543     04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version
00:28:44.543     04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-:
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-:
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<'
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 ))
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:28:44.543     04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1
00:28:44.543     04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1
00:28:44.543     04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:28:44.543     04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1
00:28:44.543     04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2
00:28:44.543     04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2
00:28:44.543     04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:28:44.543     04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0
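The trace above is `scripts/common.sh` evaluating `lt 1.15 2` for the installed lcov: `cmp_versions` splits both versions into components, pads the shorter one, and compares component-wise as integers. A self-contained sketch of that dotted-version comparison (simplified; the real helper also splits on `-` and `:` and handles the other operators):

```shell
#!/usr/bin/env bash
# Component-wise dotted-version compare, mirroring cmp_versions with op '<':
# returns 0 (true) when $1 < $2, 1 otherwise. Missing components count as 0.
version_lt() {
    local IFS=.
    local -a v1=($1) v2=($2)
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1  # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Numeric (not lexical) comparison is the point: `1.9 < 1.15` is true here because 9 < 15, exactly the case a plain string sort would get wrong.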
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:28:44.543  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:44.543  		--rc genhtml_branch_coverage=1
00:28:44.543  		--rc genhtml_function_coverage=1
00:28:44.543  		--rc genhtml_legend=1
00:28:44.543  		--rc geninfo_all_blocks=1
00:28:44.543  		--rc geninfo_unexecuted_blocks=1
00:28:44.543  		
00:28:44.543  		'
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:28:44.543  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:44.543  		--rc genhtml_branch_coverage=1
00:28:44.543  		--rc genhtml_function_coverage=1
00:28:44.543  		--rc genhtml_legend=1
00:28:44.543  		--rc geninfo_all_blocks=1
00:28:44.543  		--rc geninfo_unexecuted_blocks=1
00:28:44.543  		
00:28:44.543  		'
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:28:44.543  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:44.543  		--rc genhtml_branch_coverage=1
00:28:44.543  		--rc genhtml_function_coverage=1
00:28:44.543  		--rc genhtml_legend=1
00:28:44.543  		--rc geninfo_all_blocks=1
00:28:44.543  		--rc geninfo_unexecuted_blocks=1
00:28:44.543  		
00:28:44.543  		'
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:28:44.543  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:44.543  		--rc genhtml_branch_coverage=1
00:28:44.543  		--rc genhtml_function_coverage=1
00:28:44.543  		--rc genhtml_legend=1
00:28:44.543  		--rc geninfo_all_blocks=1
00:28:44.543  		--rc geninfo_unexecuted_blocks=1
00:28:44.543  		
00:28:44.543  		'
00:28:44.543   04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:28:44.543     04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:28:44.543     04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:28:44.543    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:28:44.543     04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob
00:28:44.543     04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:28:44.543     04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:44.543     04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:44.543      04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:44.543      04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:44.543      04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:44.543      04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH
00:28:44.544      04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:44.544    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0
00:28:44.544    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:28:44.544    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:28:44.544    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:28:44.544    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:28:44.544    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:28:44.544    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:28:44.544    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:28:44.544    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:28:44.544    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:28:44.544    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0
00:28:44.544   04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64
00:28:44.544   04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512
00:28:44.544   04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:28:44.544   04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit
00:28:44.544   04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:28:44.544   04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:28:44.544   04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs
00:28:44.544   04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no
00:28:44.544   04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns
00:28:44.544   04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:44.544   04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:44.544    04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:44.544   04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:28:44.544   04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:28:44.544   04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable
00:28:44.544   04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=()
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=()
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=()
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=()
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=()
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=()
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=()
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:28:46.440  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:28:46.440  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]]
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:28:46.440  Found net devices under 0000:0a:00.0: cvl_0_0
00:28:46.440   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:28:46.441   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:28:46.441   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:28:46.441   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:28:46.441   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:28:46.441   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]]
00:28:46.441   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:28:46.441   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:28:46.441   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:28:46.441  Found net devices under 0000:0a:00.1: cvl_0_1
00:28:46.441   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:28:46.441   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:28:46.441   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes
00:28:46.441   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:28:46.441   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:28:46.441   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:28:46.441   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:28:46.441   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:28:46.441   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:28:46.441   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:28:46.441   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:28:46.441   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:28:46.441   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:28:46.441   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:28:46.441   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:28:46.441   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:28:46.441   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:28:46.441   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:28:46.441   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:28:46.441   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:28:46.441   04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:28:46.698   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:28:46.698   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:28:46.698   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:28:46.698   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:28:46.698   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:28:46.698   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:28:46.698   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:28:46.698   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:28:46.698  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:46.698  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms
00:28:46.698  
00:28:46.698  --- 10.0.0.2 ping statistics ---
00:28:46.698  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:46.698  rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms
00:28:46.698   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:28:46.698  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:46.698  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms
00:28:46.698  
00:28:46.698  --- 10.0.0.1 ping statistics ---
00:28:46.698  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:46.698  rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms
00:28:46.698   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:46.698   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0
00:28:46.698   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:28:46.698   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:46.698   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:28:46.698   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:28:46.698   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:46.698   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:28:46.698   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:28:46.698   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2
00:28:46.698   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:46.698   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:46.698   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:28:46.698   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=376648
00:28:46.698   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2
00:28:46.698   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 376648
00:28:46.698   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 376648 ']'
00:28:46.698   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:46.698   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:46.698   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:46.698  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:46.698   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:46.698   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:28:46.698  [2024-12-09 04:19:15.171359] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:28:46.698  [2024-12-09 04:19:15.172494] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:28:46.698  [2024-12-09 04:19:15.172574] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:46.698  [2024-12-09 04:19:15.249011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:46.955  [2024-12-09 04:19:15.306321] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:46.955  [2024-12-09 04:19:15.306377] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:46.955  [2024-12-09 04:19:15.306407] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:46.955  [2024-12-09 04:19:15.306418] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:46.955  [2024-12-09 04:19:15.306428] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:46.955  [2024-12-09 04:19:15.306976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:46.955  [2024-12-09 04:19:15.394406] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:28:46.955  [2024-12-09 04:19:15.394688] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:28:46.955   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:46.955   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0
00:28:46.955   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:46.955   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:46.956   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:28:46.956   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:46.956   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:28:46.956   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:46.956   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:28:46.956  [2024-12-09 04:19:15.443529] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:46.956   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:46.956   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:28:46.956   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:46.956   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:28:46.956  Malloc0
00:28:46.956   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:46.956   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:46.956   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:46.956   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:28:46.956   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:46.956   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:28:46.956   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:46.956   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:28:46.956   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:46.956   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:46.956   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:46.956   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:28:46.956  [2024-12-09 04:19:15.499677] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:46.956   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:46.956   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=376669
00:28:46.956   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10
00:28:46.956   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:28:46.956   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 376669 /var/tmp/bdevperf.sock
00:28:46.956   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 376669 ']'
00:28:46.956   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:28:46.956   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:46.956   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:28:46.956  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:28:46.956   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:46.956   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:28:47.213  [2024-12-09 04:19:15.545762] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:28:47.213  [2024-12-09 04:19:15.545837] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid376669 ]
00:28:47.213  [2024-12-09 04:19:15.611098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:47.213  [2024-12-09 04:19:15.667816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:47.213   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:47.213   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0
00:28:47.213   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:47.213   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:47.213   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:28:47.471  NVMe0n1
00:28:47.471   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:47.471   04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:28:47.471  Running I/O for 10 seconds...
00:28:49.777       7946.00 IOPS,    31.04 MiB/s
[2024-12-09T03:19:19.286Z]      8192.00 IOPS,    32.00 MiB/s
[2024-12-09T03:19:20.218Z]      8192.00 IOPS,    32.00 MiB/s
[2024-12-09T03:19:21.155Z]      8192.00 IOPS,    32.00 MiB/s
[2024-12-09T03:19:22.095Z]      8194.20 IOPS,    32.01 MiB/s
[2024-12-09T03:19:23.027Z]      8195.00 IOPS,    32.01 MiB/s
[2024-12-09T03:19:24.398Z]      8221.00 IOPS,    32.11 MiB/s
[2024-12-09T03:19:25.337Z]      8225.62 IOPS,    32.13 MiB/s
[2024-12-09T03:19:26.270Z]      8237.89 IOPS,    32.18 MiB/s
[2024-12-09T03:19:26.270Z]      8276.30 IOPS,    32.33 MiB/s
00:28:57.694                                                                                                  Latency(us)
00:28:57.694  
[2024-12-09T03:19:26.270Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:57.694  Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:28:57.694  	 Verification LBA range: start 0x0 length 0x4000
00:28:57.694  	 NVMe0n1             :      10.10    8290.11      32.38       0.00     0.00  122934.04   21651.15   72235.24
00:28:57.694  
[2024-12-09T03:19:26.270Z]  ===================================================================================================================
00:28:57.694  
[2024-12-09T03:19:26.270Z]  Total                       :               8290.11      32.38       0.00     0.00  122934.04   21651.15   72235.24
00:28:57.694  {
00:28:57.694    "results": [
00:28:57.694      {
00:28:57.694        "job": "NVMe0n1",
00:28:57.694        "core_mask": "0x1",
00:28:57.694        "workload": "verify",
00:28:57.694        "status": "finished",
00:28:57.694        "verify_range": {
00:28:57.694          "start": 0,
00:28:57.694          "length": 16384
00:28:57.694        },
00:28:57.694        "queue_depth": 1024,
00:28:57.694        "io_size": 4096,
00:28:57.694        "runtime": 10.099146,
00:28:57.694        "iops": 8290.106906069088,
00:28:57.694        "mibps": 32.383230101832375,
00:28:57.694        "io_failed": 0,
00:28:57.694        "io_timeout": 0,
00:28:57.694        "avg_latency_us": 122934.04328704755,
00:28:57.694        "min_latency_us": 21651.152592592593,
00:28:57.694        "max_latency_us": 72235.23555555556
00:28:57.694      }
00:28:57.694    ],
00:28:57.694    "core_count": 1
00:28:57.694  }
00:28:57.694   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 376669
00:28:57.694   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 376669 ']'
00:28:57.694   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 376669
00:28:57.694    04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname
00:28:57.694   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:57.694    04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 376669
00:28:57.694   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:28:57.694   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:28:57.694   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 376669'
00:28:57.694  killing process with pid 376669
00:28:57.694   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 376669
00:28:57.694  Received shutdown signal, test time was about 10.000000 seconds
00:28:57.694  
00:28:57.694                                                                                                  Latency(us)
00:28:57.694  
[2024-12-09T03:19:26.270Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:57.694  
[2024-12-09T03:19:26.270Z]  ===================================================================================================================
00:28:57.694  
[2024-12-09T03:19:26.270Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:28:57.694   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 376669
00:28:57.952   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:28:57.952   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:28:57.952   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:57.952   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync
00:28:57.952   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:57.952   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e
00:28:57.952   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:57.952   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:57.952  rmmod nvme_tcp
00:28:57.952  rmmod nvme_fabrics
00:28:57.952  rmmod nvme_keyring
00:28:57.952   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:57.952   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e
00:28:57.952   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0
00:28:57.952   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 376648 ']'
00:28:57.952   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 376648
00:28:57.952   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 376648 ']'
00:28:57.952   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 376648
00:28:57.952    04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname
00:28:57.952   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:57.952    04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 376648
00:28:57.952   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:57.952   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:57.952   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 376648'
00:28:57.952  killing process with pid 376648
00:28:57.952   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 376648
00:28:57.952   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 376648
00:28:58.211   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:28:58.211   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:28:58.211   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:28:58.211   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr
00:28:58.211   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save
00:28:58.211   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:28:58.211   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore
00:28:58.211   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:28:58.211   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns
00:28:58.211   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:58.211   04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:58.211    04:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:00.747   04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:00.747  
00:29:00.747  real	0m16.170s
00:29:00.747  user	0m21.251s
00:29:00.747  sys	0m3.826s
00:29:00.747   04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:00.747   04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:29:00.747  ************************************
00:29:00.747  END TEST nvmf_queue_depth
00:29:00.747  ************************************
00:29:00.747   04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode
00:29:00.747   04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:29:00.747   04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:00.747   04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:29:00.747  ************************************
00:29:00.747  START TEST nvmf_target_multipath
00:29:00.747  ************************************
00:29:00.747   04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode
00:29:00.747  * Looking for test storage...
00:29:00.747  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:29:00.747    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:29:00.747     04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version
00:29:00.747     04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:29:00.747    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:29:00.747    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:00.747    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:00.747    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:00.747    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-:
00:29:00.747    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1
00:29:00.747    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-:
00:29:00.747    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2
00:29:00.747    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<'
00:29:00.747    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2
00:29:00.747    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1
00:29:00.747    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:00.747    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in
00:29:00.747    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1
00:29:00.747    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:00.747    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:00.747     04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1
00:29:00.747     04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1
00:29:00.747     04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:00.747     04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1
00:29:00.747    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1
00:29:00.747     04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2
00:29:00.747     04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2
00:29:00.747     04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:00.747     04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2
00:29:00.747    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2
00:29:00.747    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:29:00.747    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:00.747    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0
00:29:00.747    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:00.747    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:29:00.747  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:00.747  		--rc genhtml_branch_coverage=1
00:29:00.747  		--rc genhtml_function_coverage=1
00:29:00.747  		--rc genhtml_legend=1
00:29:00.747  		--rc geninfo_all_blocks=1
00:29:00.747  		--rc geninfo_unexecuted_blocks=1
00:29:00.747  		
00:29:00.747  		'
00:29:00.747    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:29:00.747  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:00.747  		--rc genhtml_branch_coverage=1
00:29:00.747  		--rc genhtml_function_coverage=1
00:29:00.747  		--rc genhtml_legend=1
00:29:00.747  		--rc geninfo_all_blocks=1
00:29:00.747  		--rc geninfo_unexecuted_blocks=1
00:29:00.747  		
00:29:00.747  		'
00:29:00.747    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:29:00.747  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:00.747  		--rc genhtml_branch_coverage=1
00:29:00.747  		--rc genhtml_function_coverage=1
00:29:00.747  		--rc genhtml_legend=1
00:29:00.747  		--rc geninfo_all_blocks=1
00:29:00.747  		--rc geninfo_unexecuted_blocks=1
00:29:00.747  		
00:29:00.747  		'
00:29:00.747    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:29:00.747  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:00.747  		--rc genhtml_branch_coverage=1
00:29:00.747  		--rc genhtml_function_coverage=1
00:29:00.747  		--rc genhtml_legend=1
00:29:00.747  		--rc geninfo_all_blocks=1
00:29:00.747  		--rc geninfo_unexecuted_blocks=1
00:29:00.747  		
00:29:00.747  		'
00:29:00.747   04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:29:00.747     04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s
00:29:00.747    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:00.747    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:00.747    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:00.747    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:00.747    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:00.747    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:00.747    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:00.747    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:00.747    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:00.747     04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:00.747    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:29:00.747    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:29:00.748    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:00.748    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:29:00.748    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:29:00.748    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:29:00.748    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:29:00.748     04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob
00:29:00.748     04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:00.748     04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:00.748     04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:00.748      04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:00.748      04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:00.748      04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:00.748      04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH
00:29:00.748      04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:00.748    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0
00:29:00.748    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:29:00.748    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:29:00.748    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:29:00.748    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:29:00.748    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:29:00.748    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:29:00.748    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:29:00.748    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:29:00.748    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:29:00.748    04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0
00:29:00.748   04:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64
00:29:00.748   04:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:29:00.748   04:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1
00:29:00.748   04:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:29:00.748   04:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit
00:29:00.748   04:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:29:00.748   04:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:29:00.748   04:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs
00:29:00.748   04:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no
00:29:00.748   04:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns
00:29:00.748   04:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:00.748   04:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:00.748    04:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:00.748   04:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:29:00.748   04:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:29:00.748   04:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable
00:29:00.748   04:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:29:02.653   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:29:02.653   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=()
00:29:02.653   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs
00:29:02.653   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=()
00:29:02.653   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:29:02.653   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=()
00:29:02.653   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers
00:29:02.653   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=()
00:29:02.653   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs
00:29:02.653   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=()
00:29:02.653   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810
00:29:02.653   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=()
00:29:02.653   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722
00:29:02.653   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=()
00:29:02.653   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx
00:29:02.653   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:29:02.653   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:29:02.653   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:29:02.653   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:29:02.653   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:29:02.654  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:29:02.654  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]]
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:29:02.654  Found net devices under 0000:0a:00.0: cvl_0_0
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]]
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:29:02.654  Found net devices under 0000:0a:00.1: cvl_0_1
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:29:02.654  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:02.654  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms
00:29:02.654  
00:29:02.654  --- 10.0.0.2 ping statistics ---
00:29:02.654  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:02.654  rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms
00:29:02.654   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:02.915  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:02.915  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms
00:29:02.915  
00:29:02.915  --- 10.0.0.1 ping statistics ---
00:29:02.915  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:02.915  rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms
00:29:02.915   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:02.915   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0
00:29:02.915   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:29:02.915   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:02.915   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:29:02.915   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:29:02.915   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:02.915   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:29:02.915   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:29:02.915   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']'
00:29:02.915   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test'
00:29:02.915  only one NIC for nvmf test
00:29:02.915   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini
00:29:02.915   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:02.915   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:29:02.915   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:02.915   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
00:29:02.915   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:02.915   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:02.915  rmmod nvme_tcp
00:29:02.915  rmmod nvme_fabrics
00:29:02.915  rmmod nvme_keyring
00:29:02.915   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:02.915   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
00:29:02.915   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0
00:29:02.915   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:29:02.915   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:02.915   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:29:02.915   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:29:02.915   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr
00:29:02.915   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save
00:29:02.915   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:29:02.915   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:29:02.915   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:02.915   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns
00:29:02.915   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:02.915   04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:02.915    04:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:04.827   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:04.827   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0
00:29:04.827   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini
00:29:04.827   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:04.827   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:29:04.827   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:04.827   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
00:29:04.827   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:04.827   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:04.827   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:04.827   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
00:29:04.827   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0
00:29:04.827   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:29:04.827   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:04.827   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:29:04.827   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:29:04.827   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr
00:29:04.827   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save
00:29:04.827   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:29:04.827   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:29:04.827   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:04.827   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns
00:29:04.827   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:04.827   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:04.827    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:04.827   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:04.827  
00:29:04.827  real	0m4.565s
00:29:04.827  user	0m0.964s
00:29:04.827  sys	0m1.614s
00:29:04.827   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:04.827   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:29:04.827  ************************************
00:29:04.827  END TEST nvmf_target_multipath
00:29:04.827  ************************************
00:29:05.087   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode
00:29:05.087   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:29:05.087   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:05.087   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:29:05.087  ************************************
00:29:05.087  START TEST nvmf_zcopy
00:29:05.087  ************************************
00:29:05.087   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode
00:29:05.087  * Looking for test storage...
00:29:05.087  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:29:05.087     04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version
00:29:05.087     04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-:
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-:
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<'
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:05.087     04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1
00:29:05.087     04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1
00:29:05.087     04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:05.087     04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1
00:29:05.087     04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2
00:29:05.087     04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2
00:29:05.087     04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:05.087     04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:29:05.087  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:05.087  		--rc genhtml_branch_coverage=1
00:29:05.087  		--rc genhtml_function_coverage=1
00:29:05.087  		--rc genhtml_legend=1
00:29:05.087  		--rc geninfo_all_blocks=1
00:29:05.087  		--rc geninfo_unexecuted_blocks=1
00:29:05.087  		
00:29:05.087  		'
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:29:05.087  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:05.087  		--rc genhtml_branch_coverage=1
00:29:05.087  		--rc genhtml_function_coverage=1
00:29:05.087  		--rc genhtml_legend=1
00:29:05.087  		--rc geninfo_all_blocks=1
00:29:05.087  		--rc geninfo_unexecuted_blocks=1
00:29:05.087  		
00:29:05.087  		'
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:29:05.087  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:05.087  		--rc genhtml_branch_coverage=1
00:29:05.087  		--rc genhtml_function_coverage=1
00:29:05.087  		--rc genhtml_legend=1
00:29:05.087  		--rc geninfo_all_blocks=1
00:29:05.087  		--rc geninfo_unexecuted_blocks=1
00:29:05.087  		
00:29:05.087  		'
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:29:05.087  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:05.087  		--rc genhtml_branch_coverage=1
00:29:05.087  		--rc genhtml_function_coverage=1
00:29:05.087  		--rc genhtml_legend=1
00:29:05.087  		--rc geninfo_all_blocks=1
00:29:05.087  		--rc geninfo_unexecuted_blocks=1
00:29:05.087  		
00:29:05.087  		'
00:29:05.087   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:29:05.087     04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:05.087     04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:29:05.087    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:29:05.088     04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob
00:29:05.088     04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:05.088     04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:05.088     04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:05.088      04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:05.088      04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:05.088      04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:05.088      04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH
00:29:05.088      04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:05.088    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0
00:29:05.088    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:29:05.088    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:29:05.088    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:29:05.088    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:29:05.088    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:29:05.088    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:29:05.088    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:29:05.088    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:29:05.088    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:29:05.088    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0
00:29:05.088   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit
00:29:05.088   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:29:05.088   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:29:05.088   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs
00:29:05.088   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no
00:29:05.088   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns
00:29:05.088   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:05.088   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:05.088    04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:05.088   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:29:05.088   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:29:05.088   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable
00:29:05.088   04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=()
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=()
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=()
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=()
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=()
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=()
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=()
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:29:07.618  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:29:07.618  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]]
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:29:07.618  Found net devices under 0000:0a:00.0: cvl_0_0
00:29:07.618   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]]
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:29:07.619  Found net devices under 0000:0a:00.1: cvl_0_1
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:29:07.619  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:07.619  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms
00:29:07.619  
00:29:07.619  --- 10.0.0.2 ping statistics ---
00:29:07.619  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:07.619  rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:07.619  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:07.619  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms
00:29:07.619  
00:29:07.619  --- 10.0.0.1 ping statistics ---
00:29:07.619  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:07.619  rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=381848
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 381848
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 381848 ']'
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:07.619  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:07.619   04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:29:07.619  [2024-12-09 04:19:35.930804] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:29:07.619  [2024-12-09 04:19:35.931868] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:29:07.619  [2024-12-09 04:19:35.931920] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:07.619  [2024-12-09 04:19:36.003380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:07.619  [2024-12-09 04:19:36.058631] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:07.619  [2024-12-09 04:19:36.058689] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:07.619  [2024-12-09 04:19:36.058718] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:07.619  [2024-12-09 04:19:36.058730] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:07.619  [2024-12-09 04:19:36.058740] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:07.619  [2024-12-09 04:19:36.059296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:07.619  [2024-12-09 04:19:36.145952] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:29:07.619  [2024-12-09 04:19:36.146265] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:29:07.619   04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:07.619   04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0
00:29:07.619   04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:29:07.619   04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable
00:29:07.619   04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:29:07.878   04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:07.878   04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:29:07.878   04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:29:07.878   04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:07.878   04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:29:07.878  [2024-12-09 04:19:36.203899] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:07.878   04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:07.878   04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:29:07.878   04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:07.878   04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:29:07.878   04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:07.878   04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:07.878   04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:07.878   04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:29:07.878  [2024-12-09 04:19:36.220052] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:07.878   04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:07.878   04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:07.878   04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:07.878   04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:29:07.878   04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:07.878   04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:29:07.878   04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:07.878   04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:29:07.878  malloc0
00:29:07.878   04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:07.878   04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:29:07.878   04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:07.878   04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:29:07.878   04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
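The target setup traced above (create subsystem, add a TCP listener plus discovery listener, create a malloc bdev, attach it as NSID 1) maps onto plain `scripts/rpc.py` calls. A hedged sketch of the equivalent sequence, assuming a running nvmf target with the TCP transport already created as in the log (the `RPC` path is an assumption about the working directory):

```shell
# Sketch of the RPC sequence driven by target/zcopy.sh above, issued
# directly via SPDK's scripts/rpc.py against a live nvmf target.
RPC=./scripts/rpc.py

# Subsystem with serial SPDK00000000000001, allowing any host, max 10 namespaces
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

# Data and discovery listeners on 10.0.0.2:4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# 32 MiB malloc bdev with 4096-byte blocks, attached as namespace 1
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
```

These are the same commands `rpc_cmd` wraps in the log; `rpc_cmd` just adds the xtrace/retry plumbing around them.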
00:29:07.878   04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:29:07.878    04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:29:07.878    04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:29:07.878    04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:29:07.878    04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:29:07.878    04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:29:07.878  {
00:29:07.878    "params": {
00:29:07.878      "name": "Nvme$subsystem",
00:29:07.878      "trtype": "$TEST_TRANSPORT",
00:29:07.878      "traddr": "$NVMF_FIRST_TARGET_IP",
00:29:07.878      "adrfam": "ipv4",
00:29:07.878      "trsvcid": "$NVMF_PORT",
00:29:07.878      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:29:07.878      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:29:07.878      "hdgst": ${hdgst:-false},
00:29:07.878      "ddgst": ${ddgst:-false}
00:29:07.878    },
00:29:07.878    "method": "bdev_nvme_attach_controller"
00:29:07.878  }
00:29:07.878  EOF
00:29:07.878  )")
00:29:07.878     04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:29:07.878    04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:29:07.878     04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:29:07.878     04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:29:07.878    "params": {
00:29:07.878      "name": "Nvme1",
00:29:07.878      "trtype": "tcp",
00:29:07.878      "traddr": "10.0.0.2",
00:29:07.878      "adrfam": "ipv4",
00:29:07.878      "trsvcid": "4420",
00:29:07.878      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:29:07.878      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:29:07.878      "hdgst": false,
00:29:07.878      "ddgst": false
00:29:07.878    },
00:29:07.878    "method": "bdev_nvme_attach_controller"
00:29:07.878  }'
00:29:07.878  [2024-12-09 04:19:36.303491] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:29:07.878  [2024-12-09 04:19:36.303573] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid381876 ]
00:29:07.878  [2024-12-09 04:19:36.383601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:08.136  [2024-12-09 04:19:36.461167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:08.136  Running I/O for 10 seconds...
00:29:10.437       5742.00 IOPS,    44.86 MiB/s
[2024-12-09T03:19:39.944Z]      5777.00 IOPS,    45.13 MiB/s
[2024-12-09T03:19:40.890Z]      5794.67 IOPS,    45.27 MiB/s
[2024-12-09T03:19:41.820Z]      5789.75 IOPS,    45.23 MiB/s
[2024-12-09T03:19:42.752Z]      5804.40 IOPS,    45.35 MiB/s
[2024-12-09T03:19:43.685Z]      5804.33 IOPS,    45.35 MiB/s
[2024-12-09T03:19:45.058Z]      5814.29 IOPS,    45.42 MiB/s
[2024-12-09T03:19:45.991Z]      5820.75 IOPS,    45.47 MiB/s
[2024-12-09T03:19:46.924Z]      5818.11 IOPS,    45.45 MiB/s
[2024-12-09T03:19:46.924Z]      5822.40 IOPS,    45.49 MiB/s
00:29:18.348                                                                                                  Latency(us)
00:29:18.348  
[2024-12-09T03:19:46.924Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:18.348  Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:29:18.348  	 Verification LBA range: start 0x0 length 0x1000
00:29:18.348  	 Nvme1n1             :      10.02    5825.44      45.51       0.00     0.00   21908.86    2585.03   33204.91
00:29:18.348  
[2024-12-09T03:19:46.924Z]  ===================================================================================================================
00:29:18.348  
[2024-12-09T03:19:46.924Z]  Total                       :               5825.44      45.51       0.00     0.00   21908.86    2585.03   33204.91
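The MiB/s column in the summary is just IOPS times the 8192-byte I/O size from the job header. A quick recomputation of the totals row as a sanity check:

```shell
# Recompute throughput from the summary row above: 5825.44 IOPS x 8192 B,
# converted to MiB/s (1 MiB = 1048576 B).
mibs=$(awk 'BEGIN { printf "%.2f", 5825.44 * 8192 / (1024 * 1024) }')
echo "$mibs MiB/s"   # matches the 45.51 MiB/s bdevperf reports
```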
00:29:18.348   04:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=383058
00:29:18.348   04:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:29:18.348   04:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:29:18.348    04:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:29:18.348   04:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:29:18.348    04:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:29:18.348    04:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:29:18.348    04:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:29:18.348    04:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:29:18.348  {
00:29:18.348    "params": {
00:29:18.348      "name": "Nvme$subsystem",
00:29:18.348      "trtype": "$TEST_TRANSPORT",
00:29:18.348      "traddr": "$NVMF_FIRST_TARGET_IP",
00:29:18.348      "adrfam": "ipv4",
00:29:18.348      "trsvcid": "$NVMF_PORT",
00:29:18.348      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:29:18.348      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:29:18.348      "hdgst": ${hdgst:-false},
00:29:18.348      "ddgst": ${ddgst:-false}
00:29:18.348    },
00:29:18.348    "method": "bdev_nvme_attach_controller"
00:29:18.348  }
00:29:18.348  EOF
00:29:18.348  )")
00:29:18.348  [2024-12-09 04:19:46.923853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:18.607  [2024-12-09 04:19:46.923900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
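The ERROR pair here (and its many repeats below) is expected: while the second bdevperf run starts, the test keeps retrying `nvmf_subsystem_add_ns` with NSID 1, which is already occupied by malloc0 from the setup phase. A hedged one-line reproduction against the target configured above (not part of the test script itself):

```shell
# NSID 1 is already in use by malloc0, so a second add with -n 1 must fail;
# the target logs "Requested NSID 1 already in use" as seen in this run.
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 \
  || echo "add_ns rejected as expected"
```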
00:29:18.607     04:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:29:18.607    04:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:29:18.607     04:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:29:18.607     04:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:29:18.607    "params": {
00:29:18.607      "name": "Nvme1",
00:29:18.607      "trtype": "tcp",
00:29:18.607      "traddr": "10.0.0.2",
00:29:18.607      "adrfam": "ipv4",
00:29:18.607      "trsvcid": "4420",
00:29:18.607      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:29:18.607      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:29:18.607      "hdgst": false,
00:29:18.607      "ddgst": false
00:29:18.607    },
00:29:18.607    "method": "bdev_nvme_attach_controller"
00:29:18.607  }'
00:29:18.607  [2024-12-09 04:19:46.931757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:18.607  [2024-12-09 04:19:46.931778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:18.607  [2024-12-09 04:19:46.939759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:18.607  [2024-12-09 04:19:46.939787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:18.607  [2024-12-09 04:19:46.947761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:18.607  [2024-12-09 04:19:46.947782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:18.607  [2024-12-09 04:19:46.955774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:18.607  [2024-12-09 04:19:46.955795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:18.607  [2024-12-09 04:19:46.963758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:18.607  [2024-12-09 04:19:46.963778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:18.607  [2024-12-09 04:19:46.968847] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:29:18.607  [2024-12-09 04:19:46.968937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid383058 ]
00:29:18.607  [2024-12-09 04:19:46.971753] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:18.607  [2024-12-09 04:19:46.971773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:18.607  [... NSID-in-use ERROR pair ("Requested NSID 1 already in use" / "Unable to add namespace") repeats every ~8 ms from 04:19:46.979 through 04:19:47.035 ...]
00:29:18.608  [2024-12-09 04:19:47.040909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:18.608  [... NSID-in-use ERROR pair ("Requested NSID 1 already in use" / "Unable to add namespace") repeats every ~8 ms from 04:19:47.043 through 04:19:47.099 ...]
00:29:18.608  [2024-12-09 04:19:47.101516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:18.608  [2024-12-09 04:19:47.107758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:18.608  [2024-12-09 04:19:47.107778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:18.608  [2024-12-09 04:19:47.115773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:18.608  [2024-12-09 04:19:47.115797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:18.608  [... NSID-in-use ERROR pair ("Requested NSID 1 already in use" / "Unable to add namespace") repeats every ~8 ms from 04:19:47.123 through 04:19:47.435 while bdevperf initializes ...]
00:29:19.126  [2024-12-09 04:19:47.443764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.126  [2024-12-09 04:19:47.443788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.126  Running I/O for 5 seconds...
00:29:19.126  [2024-12-09 04:19:47.457922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.126  [2024-12-09 04:19:47.457953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.126  [2024-12-09 04:19:47.467682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.127  [2024-12-09 04:19:47.467709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.127  [2024-12-09 04:19:47.479882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.127  [2024-12-09 04:19:47.479908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.127  [2024-12-09 04:19:47.491014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.127  [2024-12-09 04:19:47.491039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.127  [2024-12-09 04:19:47.506232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.127  [2024-12-09 04:19:47.506280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.127  [2024-12-09 04:19:47.522085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.127  [2024-12-09 04:19:47.522110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.127  [2024-12-09 04:19:47.539474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.127  [2024-12-09 04:19:47.539501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.127  [2024-12-09 04:19:47.548762] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.127  [2024-12-09 04:19:47.548786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.127  [2024-12-09 04:19:47.560388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.127  [2024-12-09 04:19:47.560415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.127  [2024-12-09 04:19:47.570149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.127  [2024-12-09 04:19:47.570187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.127  [2024-12-09 04:19:47.584942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.127  [2024-12-09 04:19:47.584985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.127  [2024-12-09 04:19:47.594602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.127  [2024-12-09 04:19:47.594627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.127  [2024-12-09 04:19:47.609838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.127  [2024-12-09 04:19:47.609864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.127  [2024-12-09 04:19:47.619780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.127  [2024-12-09 04:19:47.619805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.127  [2024-12-09 04:19:47.631125] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.127  [2024-12-09 04:19:47.631150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.127  [2024-12-09 04:19:47.642052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.127  [2024-12-09 04:19:47.642076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.127  [2024-12-09 04:19:47.657519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.127  [2024-12-09 04:19:47.657560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.127  [2024-12-09 04:19:47.666874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.127  [2024-12-09 04:19:47.666899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.127  [2024-12-09 04:19:47.680407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.127  [2024-12-09 04:19:47.680433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.127  [2024-12-09 04:19:47.690388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.127  [2024-12-09 04:19:47.690414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.386  [2024-12-09 04:19:47.705508] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.386  [2024-12-09 04:19:47.705537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.386  [2024-12-09 04:19:47.715122] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.386  [2024-12-09 04:19:47.715150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.386  [2024-12-09 04:19:47.726921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.386  [2024-12-09 04:19:47.726946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.386  [2024-12-09 04:19:47.742396] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.386  [2024-12-09 04:19:47.742423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.386  [2024-12-09 04:19:47.757198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.386  [2024-12-09 04:19:47.757225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.386  [2024-12-09 04:19:47.766309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.386  [2024-12-09 04:19:47.766335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.386  [2024-12-09 04:19:47.779811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.386  [2024-12-09 04:19:47.779836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.386  [2024-12-09 04:19:47.789311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.386  [2024-12-09 04:19:47.789338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.386  [2024-12-09 04:19:47.804763] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.386  [2024-12-09 04:19:47.804802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.386  [2024-12-09 04:19:47.814326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.386  [2024-12-09 04:19:47.814353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.386  [2024-12-09 04:19:47.829099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.386  [2024-12-09 04:19:47.829123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.386  [2024-12-09 04:19:47.838726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.386  [2024-12-09 04:19:47.838751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.386  [2024-12-09 04:19:47.852453] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.386  [2024-12-09 04:19:47.852480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.386  [2024-12-09 04:19:47.861600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.386  [2024-12-09 04:19:47.861626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.386  [2024-12-09 04:19:47.873186] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.386  [2024-12-09 04:19:47.873211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.386  [2024-12-09 04:19:47.883639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.386  [2024-12-09 04:19:47.883680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.386  [2024-12-09 04:19:47.894186] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.386  [2024-12-09 04:19:47.894210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.386  [2024-12-09 04:19:47.909710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.386  [2024-12-09 04:19:47.909737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.386  [2024-12-09 04:19:47.918666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.386  [2024-12-09 04:19:47.918691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.386  [2024-12-09 04:19:47.933091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.386  [2024-12-09 04:19:47.933117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.386  [2024-12-09 04:19:47.942317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.386  [2024-12-09 04:19:47.942354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.386  [2024-12-09 04:19:47.956671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.386  [2024-12-09 04:19:47.956699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.644  [2024-12-09 04:19:47.966910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.644  [2024-12-09 04:19:47.966950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.644  [2024-12-09 04:19:47.980787] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.645  [2024-12-09 04:19:47.980812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.645  [2024-12-09 04:19:47.990170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.645  [2024-12-09 04:19:47.990195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.645  [2024-12-09 04:19:48.004157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.645  [2024-12-09 04:19:48.004183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.645  [2024-12-09 04:19:48.014015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.645  [2024-12-09 04:19:48.014040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.645  [2024-12-09 04:19:48.029902] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.645  [2024-12-09 04:19:48.029928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.645  [2024-12-09 04:19:48.047782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.645  [2024-12-09 04:19:48.047822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.645  [2024-12-09 04:19:48.057898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.645  [2024-12-09 04:19:48.057926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.645  [2024-12-09 04:19:48.069659] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.645  [2024-12-09 04:19:48.069687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.645  [2024-12-09 04:19:48.084241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.645  [2024-12-09 04:19:48.084269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.645  [2024-12-09 04:19:48.093495] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.645  [2024-12-09 04:19:48.093522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.645  [2024-12-09 04:19:48.105157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.645  [2024-12-09 04:19:48.105182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.645  [2024-12-09 04:19:48.119378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.645  [2024-12-09 04:19:48.119406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.645  [2024-12-09 04:19:48.129105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.645  [2024-12-09 04:19:48.129130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.645  [2024-12-09 04:19:48.145320] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.645  [2024-12-09 04:19:48.145348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.645  [2024-12-09 04:19:48.154901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.645  [2024-12-09 04:19:48.154928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.645  [2024-12-09 04:19:48.170710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.645  [2024-12-09 04:19:48.170735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.645  [2024-12-09 04:19:48.182926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.645  [2024-12-09 04:19:48.182953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.645  [2024-12-09 04:19:48.197144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.645  [2024-12-09 04:19:48.197171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.645  [2024-12-09 04:19:48.206613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.645  [2024-12-09 04:19:48.206654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.903  [2024-12-09 04:19:48.220900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.903  [2024-12-09 04:19:48.220927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.903  [2024-12-09 04:19:48.230688] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.903  [2024-12-09 04:19:48.230728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.903  [2024-12-09 04:19:48.244227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.903  [2024-12-09 04:19:48.244255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.903  [2024-12-09 04:19:48.253618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.903  [2024-12-09 04:19:48.253658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.903  [2024-12-09 04:19:48.265303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.903  [2024-12-09 04:19:48.265344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.903  [2024-12-09 04:19:48.280962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.903  [2024-12-09 04:19:48.280989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.903  [2024-12-09 04:19:48.290263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.903  [2024-12-09 04:19:48.290312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.903  [2024-12-09 04:19:48.304737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.903  [2024-12-09 04:19:48.304761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.903  [2024-12-09 04:19:48.314638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.903  [2024-12-09 04:19:48.314665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.903  [2024-12-09 04:19:48.329505] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.903  [2024-12-09 04:19:48.329532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.903  [2024-12-09 04:19:48.339201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.903  [2024-12-09 04:19:48.339225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.903  [2024-12-09 04:19:48.350992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.903  [2024-12-09 04:19:48.351017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.903  [2024-12-09 04:19:48.361870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.903  [2024-12-09 04:19:48.361895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.903  [2024-12-09 04:19:48.378549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.903  [2024-12-09 04:19:48.378590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.903  [2024-12-09 04:19:48.394382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.903  [2024-12-09 04:19:48.394409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.903  [2024-12-09 04:19:48.410283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.903  [2024-12-09 04:19:48.410311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.903  [2024-12-09 04:19:48.426073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.903  [2024-12-09 04:19:48.426113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.903  [2024-12-09 04:19:48.443597] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.903  [2024-12-09 04:19:48.443637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.904  [2024-12-09 04:19:48.453243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.904  [2024-12-09 04:19:48.453290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:19.904      11743.00 IOPS,    91.74 MiB/s
00:29:19.904  [2024-12-09 04:19:48.465393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:19.904  [2024-12-09 04:19:48.465420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.162  [2024-12-09 04:19:48.482233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.162  [2024-12-09 04:19:48.482259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.162  [2024-12-09 04:19:48.498123] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.162  [2024-12-09 04:19:48.498163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.162  [2024-12-09 04:19:48.513969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.162  [2024-12-09 04:19:48.514021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.162  [2024-12-09 04:19:48.523315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.162  [2024-12-09 04:19:48.523342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.162  [2024-12-09 04:19:48.534950] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.162  [2024-12-09 04:19:48.534974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.162  [2024-12-09 04:19:48.549312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.162  [2024-12-09 04:19:48.549337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.162  [2024-12-09 04:19:48.558808] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.162  [2024-12-09 04:19:48.558847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.162  [2024-12-09 04:19:48.572929] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.163  [2024-12-09 04:19:48.572968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.163  [2024-12-09 04:19:48.582645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.163  [2024-12-09 04:19:48.582669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.163  [2024-12-09 04:19:48.596623] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.163  [2024-12-09 04:19:48.596647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.163  [2024-12-09 04:19:48.606533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.163  [2024-12-09 04:19:48.606577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.163  [2024-12-09 04:19:48.620427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.163  [2024-12-09 04:19:48.620453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.163  [2024-12-09 04:19:48.629608] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.163  [2024-12-09 04:19:48.629633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.163  [2024-12-09 04:19:48.641191] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.163  [2024-12-09 04:19:48.641215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.163  [2024-12-09 04:19:48.652233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.163  [2024-12-09 04:19:48.652278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.163  [2024-12-09 04:19:48.662902] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.163  [2024-12-09 04:19:48.662926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.163  [2024-12-09 04:19:48.675318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.163  [2024-12-09 04:19:48.675345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.163  [2024-12-09 04:19:48.685149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.163  [2024-12-09 04:19:48.685189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.163  [2024-12-09 04:19:48.696855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.163  [2024-12-09 04:19:48.696879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.163  [2024-12-09 04:19:48.707387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.163  [2024-12-09 04:19:48.707414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.163  [2024-12-09 04:19:48.718109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.163  [2024-12-09 04:19:48.718133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.163  [2024-12-09 04:19:48.733668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.163  [2024-12-09 04:19:48.733706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.421  [2024-12-09 04:19:48.743115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.421  [2024-12-09 04:19:48.743140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.421  [2024-12-09 04:19:48.754771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.421  [2024-12-09 04:19:48.754810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.421  [2024-12-09 04:19:48.767549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.421  [2024-12-09 04:19:48.767591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.421  [2024-12-09 04:19:48.777462] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.421  [2024-12-09 04:19:48.777504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.421  [2024-12-09 04:19:48.789378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.421  [2024-12-09 04:19:48.789418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.421  [2024-12-09 04:19:48.803707] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.421  [2024-12-09 04:19:48.803732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.421  [2024-12-09 04:19:48.813340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.421  [2024-12-09 04:19:48.813367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.421  [2024-12-09 04:19:48.829248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.421  [2024-12-09 04:19:48.829296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.421  [2024-12-09 04:19:48.838752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.421  [2024-12-09 04:19:48.838775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.421  [2024-12-09 04:19:48.853129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.421  [2024-12-09 04:19:48.853154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.421  [2024-12-09 04:19:48.863223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.421  [2024-12-09 04:19:48.863248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.421  [2024-12-09 04:19:48.875096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.421  [2024-12-09 04:19:48.875136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.421  [2024-12-09 04:19:48.885778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.421  [2024-12-09 04:19:48.885801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.421  [2024-12-09 04:19:48.900637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.421  [2024-12-09 04:19:48.900663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.421  [2024-12-09 04:19:48.909403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.421  [2024-12-09 04:19:48.909444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.421  [2024-12-09 04:19:48.921099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.421  [2024-12-09 04:19:48.921123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.421  [2024-12-09 04:19:48.931651] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.421  [2024-12-09 04:19:48.931676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.421  [2024-12-09 04:19:48.942321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.421  [2024-12-09 04:19:48.942346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.421  [2024-12-09 04:19:48.954789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.421  [2024-12-09 04:19:48.954826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.421  [2024-12-09 04:19:48.970389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.421  [2024-12-09 04:19:48.970417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.421  [2024-12-09 04:19:48.986262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.421  [2024-12-09 04:19:48.986296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.679  [2024-12-09 04:19:49.001866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.679  [2024-12-09 04:19:49.001890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.679  [2024-12-09 04:19:49.018032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.679  [2024-12-09 04:19:49.018057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.679  [2024-12-09 04:19:49.027718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.679  [2024-12-09 04:19:49.027742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.679  [2024-12-09 04:19:49.039327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.679  [2024-12-09 04:19:49.039352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.679  [2024-12-09 04:19:49.050005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.679  [2024-12-09 04:19:49.050029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.679  [2024-12-09 04:19:49.063873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.679  [2024-12-09 04:19:49.063913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.679  [2024-12-09 04:19:49.073837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.679  [2024-12-09 04:19:49.073862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.679  [2024-12-09 04:19:49.088403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.679  [2024-12-09 04:19:49.088429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.679  [2024-12-09 04:19:49.097628] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.679  [2024-12-09 04:19:49.097669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.679  [2024-12-09 04:19:49.112155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.679  [2024-12-09 04:19:49.112179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.679  [2024-12-09 04:19:49.121912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.679  [2024-12-09 04:19:49.121937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.679  [2024-12-09 04:19:49.133752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.679  [2024-12-09 04:19:49.133791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.679  [2024-12-09 04:19:49.148651] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.679  [2024-12-09 04:19:49.148677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.679  [2024-12-09 04:19:49.158488] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.679  [2024-12-09 04:19:49.158514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.679  [2024-12-09 04:19:49.174265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.679  [2024-12-09 04:19:49.174326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.679  [2024-12-09 04:19:49.191665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.680  [2024-12-09 04:19:49.191692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.680  [2024-12-09 04:19:49.201064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.680  [2024-12-09 04:19:49.201091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.680  [2024-12-09 04:19:49.216642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.680  [2024-12-09 04:19:49.216670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.680  [2024-12-09 04:19:49.225946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.680  [2024-12-09 04:19:49.225972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.680  [2024-12-09 04:19:49.241318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.680  [2024-12-09 04:19:49.241345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.680  [2024-12-09 04:19:49.250702] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.680  [2024-12-09 04:19:49.250729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.938  [2024-12-09 04:19:49.264546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.938  [2024-12-09 04:19:49.264587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.938  [2024-12-09 04:19:49.274305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.938  [2024-12-09 04:19:49.274333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.938  [2024-12-09 04:19:49.290059] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.938  [2024-12-09 04:19:49.290100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.938  [2024-12-09 04:19:49.305773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.938  [2024-12-09 04:19:49.305814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.938  [2024-12-09 04:19:49.315242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.938  [2024-12-09 04:19:49.315278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.938  [2024-12-09 04:19:49.326619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.938  [2024-12-09 04:19:49.326659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.938  [2024-12-09 04:19:49.341371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.938  [2024-12-09 04:19:49.341399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.938  [2024-12-09 04:19:49.350572] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.938  [2024-12-09 04:19:49.350613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.938  [2024-12-09 04:19:49.365080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.938  [2024-12-09 04:19:49.365108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.938  [2024-12-09 04:19:49.374412] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.938  [2024-12-09 04:19:49.374440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.938  [2024-12-09 04:19:49.389836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.938  [2024-12-09 04:19:49.389864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.938  [2024-12-09 04:19:49.399234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.938  [2024-12-09 04:19:49.399283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.938  [2024-12-09 04:19:49.410448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.938  [2024-12-09 04:19:49.410476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.938  [2024-12-09 04:19:49.426128] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.938  [2024-12-09 04:19:49.426155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.938  [2024-12-09 04:19:49.435670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.938  [2024-12-09 04:19:49.435698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.938  [2024-12-09 04:19:49.447697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.938  [2024-12-09 04:19:49.447723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.938      11749.50 IOPS,    91.79 MiB/s
00:29:20.938  [2024-12-09 04:19:49.458621] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.938  [2024-12-09 04:19:49.458652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.938  [2024-12-09 04:19:49.473434] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.938  [2024-12-09 04:19:49.473461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.938  [2024-12-09 04:19:49.483022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.938  [2024-12-09 04:19:49.483047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.938  [2024-12-09 04:19:49.494845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.938  [2024-12-09 04:19:49.494870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:20.938  [2024-12-09 04:19:49.505970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:20.939  [2024-12-09 04:19:49.505995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.197  [2024-12-09 04:19:49.518966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.197  [2024-12-09 04:19:49.518993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.197  [2024-12-09 04:19:49.528826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.197  [2024-12-09 04:19:49.528851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.197  [2024-12-09 04:19:49.540692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.197  [2024-12-09 04:19:49.540716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.197  [2024-12-09 04:19:49.551336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.197  [2024-12-09 04:19:49.551362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.197  [2024-12-09 04:19:49.563892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.197  [2024-12-09 04:19:49.563919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.197  [2024-12-09 04:19:49.573158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.197  [2024-12-09 04:19:49.573183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.197  [2024-12-09 04:19:49.584551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.197  [2024-12-09 04:19:49.584591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.197  [2024-12-09 04:19:49.594698] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.197  [2024-12-09 04:19:49.594723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.198  [2024-12-09 04:19:49.610146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.198  [2024-12-09 04:19:49.610171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.198  [2024-12-09 04:19:49.626100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.198  [2024-12-09 04:19:49.626126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.198  [2024-12-09 04:19:49.635414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.198  [2024-12-09 04:19:49.635441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.198  [2024-12-09 04:19:49.647356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.198  [2024-12-09 04:19:49.647383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.198  [2024-12-09 04:19:49.658301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.198  [2024-12-09 04:19:49.658329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.198  [2024-12-09 04:19:49.673565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.198  [2024-12-09 04:19:49.673591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.198  [2024-12-09 04:19:49.683053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.198  [2024-12-09 04:19:49.683078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.198  [2024-12-09 04:19:49.694714] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.198  [2024-12-09 04:19:49.694738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.198  [2024-12-09 04:19:49.707191] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.198  [2024-12-09 04:19:49.707218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.198  [2024-12-09 04:19:49.721157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.198  [2024-12-09 04:19:49.721184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.198  [2024-12-09 04:19:49.730673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.198  [2024-12-09 04:19:49.730699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.198  [2024-12-09 04:19:49.745356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.198  [2024-12-09 04:19:49.745384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.198  [2024-12-09 04:19:49.754879] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.198  [2024-12-09 04:19:49.754905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.198  [2024-12-09 04:19:49.769125] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.198  [2024-12-09 04:19:49.769151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.456  [2024-12-09 04:19:49.778650] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.456  [2024-12-09 04:19:49.778675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.456  [2024-12-09 04:19:49.792866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.456  [2024-12-09 04:19:49.792892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.456  [2024-12-09 04:19:49.802102] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.456  [2024-12-09 04:19:49.802128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.456  [2024-12-09 04:19:49.813395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.456  [2024-12-09 04:19:49.813422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.456  [2024-12-09 04:19:49.823101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.456  [2024-12-09 04:19:49.823125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.456  [2024-12-09 04:19:49.836948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.456  [2024-12-09 04:19:49.836973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.456  [2024-12-09 04:19:49.846298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.456  [2024-12-09 04:19:49.846323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.456  [2024-12-09 04:19:49.857715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.456  [2024-12-09 04:19:49.857741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.456  [2024-12-09 04:19:49.868012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.456  [2024-12-09 04:19:49.868049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.456  [2024-12-09 04:19:49.878493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.456  [2024-12-09 04:19:49.878520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.456  [2024-12-09 04:19:49.893685] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.456  [2024-12-09 04:19:49.893711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.456  [2024-12-09 04:19:49.903025] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.456  [2024-12-09 04:19:49.903052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.456  [2024-12-09 04:19:49.914513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.456  [2024-12-09 04:19:49.914540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.456  [2024-12-09 04:19:49.930101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.456  [2024-12-09 04:19:49.930128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.456  [2024-12-09 04:19:49.939512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.456  [2024-12-09 04:19:49.939539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.456  [2024-12-09 04:19:49.950875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.456  [2024-12-09 04:19:49.950899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.456  [2024-12-09 04:19:49.961038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.456  [2024-12-09 04:19:49.961063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.456  [2024-12-09 04:19:49.971954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.456  [2024-12-09 04:19:49.971994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.456  [2024-12-09 04:19:49.982696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.456  [2024-12-09 04:19:49.982721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.456  [2024-12-09 04:19:49.997627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.456  [2024-12-09 04:19:49.997669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.456  [2024-12-09 04:19:50.016452] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.456  [2024-12-09 04:19:50.016496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.456  [2024-12-09 04:19:50.026379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.456  [2024-12-09 04:19:50.026410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.714  [2024-12-09 04:19:50.039815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.714  [2024-12-09 04:19:50.039849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.714  [2024-12-09 04:19:50.049976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.714  [2024-12-09 04:19:50.050004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.714  [2024-12-09 04:19:50.061677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.714  [2024-12-09 04:19:50.061705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.714  [2024-12-09 04:19:50.077113] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.714  [2024-12-09 04:19:50.077156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.714  [2024-12-09 04:19:50.086643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.714  [2024-12-09 04:19:50.086683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.714  [2024-12-09 04:19:50.101211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.714  [2024-12-09 04:19:50.101251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.714  [2024-12-09 04:19:50.118267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.714  [2024-12-09 04:19:50.118304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.714  [2024-12-09 04:19:50.133685] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.714  [2024-12-09 04:19:50.133712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.714  [2024-12-09 04:19:50.142894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.714  [2024-12-09 04:19:50.142919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.714  [2024-12-09 04:19:50.156687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.714  [2024-12-09 04:19:50.156713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.714  [2024-12-09 04:19:50.166673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.714  [2024-12-09 04:19:50.166698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.714  [2024-12-09 04:19:50.180984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.714  [2024-12-09 04:19:50.181009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.714  [2024-12-09 04:19:50.191503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.714  [2024-12-09 04:19:50.191530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.714  [2024-12-09 04:19:50.201596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.714  [2024-12-09 04:19:50.201635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.714  [2024-12-09 04:19:50.216214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.714  [2024-12-09 04:19:50.216240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.714  [2024-12-09 04:19:50.226066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.714  [2024-12-09 04:19:50.226091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.714  [2024-12-09 04:19:50.237751] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.714  [2024-12-09 04:19:50.237777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.714  [2024-12-09 04:19:50.251674] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.714  [2024-12-09 04:19:50.251703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.714  [2024-12-09 04:19:50.261336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.714  [2024-12-09 04:19:50.261364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.714  [2024-12-09 04:19:50.272863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.714  [2024-12-09 04:19:50.272889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.714  [2024-12-09 04:19:50.283251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.714  [2024-12-09 04:19:50.283304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.971  [2024-12-09 04:19:50.296914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.971  [2024-12-09 04:19:50.296941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.971  [2024-12-09 04:19:50.306099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.971  [2024-12-09 04:19:50.306123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.971  [2024-12-09 04:19:50.321392] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.971  [2024-12-09 04:19:50.321420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.971  [2024-12-09 04:19:50.331096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.971  [2024-12-09 04:19:50.331133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.971  [2024-12-09 04:19:50.342997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.971  [2024-12-09 04:19:50.343022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.971  [2024-12-09 04:19:50.358509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.971  [2024-12-09 04:19:50.358560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.971  [2024-12-09 04:19:50.373215] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.971  [2024-12-09 04:19:50.373244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.971  [2024-12-09 04:19:50.382821] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.971  [2024-12-09 04:19:50.382846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.971  [2024-12-09 04:19:50.396862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.971  [2024-12-09 04:19:50.396887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.971  [2024-12-09 04:19:50.406228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.971  [2024-12-09 04:19:50.406253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.971  [2024-12-09 04:19:50.418358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.971  [2024-12-09 04:19:50.418386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.971  [2024-12-09 04:19:50.434013] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.971  [2024-12-09 04:19:50.434040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.971  [2024-12-09 04:19:50.449877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.971  [2024-12-09 04:19:50.449934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.971      11767.33 IOPS,    91.93 MiB/s
00:29:21.971  [2024-12-09 04:19:50.459722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.971  [2024-12-09 04:19:50.459751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.971  [2024-12-09 04:19:50.471539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.971  [2024-12-09 04:19:50.471566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.971  [2024-12-09 04:19:50.482010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.971  [2024-12-09 04:19:50.482037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.971  [2024-12-09 04:19:50.497933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.971  [2024-12-09 04:19:50.497961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.972  [2024-12-09 04:19:50.515081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.972  [2024-12-09 04:19:50.515108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.972  [2024-12-09 04:19:50.524661] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.972  [2024-12-09 04:19:50.524687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.972  [2024-12-09 04:19:50.536414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.972  [2024-12-09 04:19:50.536442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:21.972  [2024-12-09 04:19:50.546872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:21.972  [2024-12-09 04:19:50.546900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.231  [2024-12-09 04:19:50.560371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.231  [2024-12-09 04:19:50.560399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.231  [2024-12-09 04:19:50.569412] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.231  [2024-12-09 04:19:50.569440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.231  [2024-12-09 04:19:50.581090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.231  [2024-12-09 04:19:50.581116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.231  [2024-12-09 04:19:50.597623] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.231  [2024-12-09 04:19:50.597665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.231  [2024-12-09 04:19:50.607568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.231  [2024-12-09 04:19:50.607595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.231  [2024-12-09 04:19:50.619580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.231  [2024-12-09 04:19:50.619605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.231  [2024-12-09 04:19:50.630050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.231  [2024-12-09 04:19:50.630074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.231  [2024-12-09 04:19:50.644973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.231  [2024-12-09 04:19:50.644998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.231  [2024-12-09 04:19:50.654233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.231  [2024-12-09 04:19:50.654257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.231  [2024-12-09 04:19:50.665585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.231  [2024-12-09 04:19:50.665630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.231  [2024-12-09 04:19:50.682124] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.231  [2024-12-09 04:19:50.682150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.231  [2024-12-09 04:19:50.699755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.231  [2024-12-09 04:19:50.699780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.231  [2024-12-09 04:19:50.709694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.231  [2024-12-09 04:19:50.709718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.231  [2024-12-09 04:19:50.721350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.231  [2024-12-09 04:19:50.721376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.231  [2024-12-09 04:19:50.737429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.231  [2024-12-09 04:19:50.737470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.231  [2024-12-09 04:19:50.746487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.231  [2024-12-09 04:19:50.746514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.231  [2024-12-09 04:19:50.762312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.232  [2024-12-09 04:19:50.762338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.232  [2024-12-09 04:19:50.777636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.232  [2024-12-09 04:19:50.777663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.232  [2024-12-09 04:19:50.787283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.232  [2024-12-09 04:19:50.787308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.232  [2024-12-09 04:19:50.799096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.232  [2024-12-09 04:19:50.799121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.490  [2024-12-09 04:19:50.809831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.491  [2024-12-09 04:19:50.809855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.491  [2024-12-09 04:19:50.825521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.491  [2024-12-09 04:19:50.825560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.491  [2024-12-09 04:19:50.834892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.491  [2024-12-09 04:19:50.834915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.491  [2024-12-09 04:19:50.848204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.491  [2024-12-09 04:19:50.848228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.491  [2024-12-09 04:19:50.857710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.491  [2024-12-09 04:19:50.857735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.491  [2024-12-09 04:19:50.869431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.491  [2024-12-09 04:19:50.869457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.491  [2024-12-09 04:19:50.884868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.491  [2024-12-09 04:19:50.884907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.491  [2024-12-09 04:19:50.894532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.491  [2024-12-09 04:19:50.894578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.491  [2024-12-09 04:19:50.908533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.491  [2024-12-09 04:19:50.908574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.491  [2024-12-09 04:19:50.917705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.491  [2024-12-09 04:19:50.917730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.491  [2024-12-09 04:19:50.929407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.491  [2024-12-09 04:19:50.929433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.491  [2024-12-09 04:19:50.945546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.491  [2024-12-09 04:19:50.945587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.491  [2024-12-09 04:19:50.954823] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.491  [2024-12-09 04:19:50.954848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.491  [2024-12-09 04:19:50.969067] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.491  [2024-12-09 04:19:50.969092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.491  [2024-12-09 04:19:50.988214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.491  [2024-12-09 04:19:50.988238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.491  [2024-12-09 04:19:50.998830] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.491  [2024-12-09 04:19:50.998868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.491  [2024-12-09 04:19:51.012981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.491  [2024-12-09 04:19:51.013008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.491  [2024-12-09 04:19:51.022382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.491  [2024-12-09 04:19:51.022408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.491  [2024-12-09 04:19:51.036487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.491  [2024-12-09 04:19:51.036513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.491  [2024-12-09 04:19:51.046183] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.491  [2024-12-09 04:19:51.046223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.491  [2024-12-09 04:19:51.060920] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.491  [2024-12-09 04:19:51.060944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.747  [2024-12-09 04:19:51.079966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.747  [2024-12-09 04:19:51.079992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.747  [2024-12-09 04:19:51.091312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.747  [2024-12-09 04:19:51.091339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.747  [2024-12-09 04:19:51.102403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.747  [2024-12-09 04:19:51.102429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.747  [2024-12-09 04:19:51.118111] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.747  [2024-12-09 04:19:51.118135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.747  [2024-12-09 04:19:51.133911] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.747  [2024-12-09 04:19:51.133937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.747  [2024-12-09 04:19:51.149967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.747  [2024-12-09 04:19:51.149991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.747  [2024-12-09 04:19:51.159432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.747  [2024-12-09 04:19:51.159460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.747  [2024-12-09 04:19:51.170743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.747  [2024-12-09 04:19:51.170767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.747  [2024-12-09 04:19:51.184236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.747  [2024-12-09 04:19:51.184287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.747  [2024-12-09 04:19:51.193963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.747  [2024-12-09 04:19:51.193987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.747  [2024-12-09 04:19:51.207578] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.747  [2024-12-09 04:19:51.207619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.747  [2024-12-09 04:19:51.216808] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.747  [2024-12-09 04:19:51.216831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.747  [2024-12-09 04:19:51.228525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.747  [2024-12-09 04:19:51.228565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.747  [2024-12-09 04:19:51.238883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.747  [2024-12-09 04:19:51.238907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.747  [2024-12-09 04:19:51.252157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.747  [2024-12-09 04:19:51.252183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.747  [2024-12-09 04:19:51.261854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.747  [2024-12-09 04:19:51.261894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.747  [2024-12-09 04:19:51.273650] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.747  [2024-12-09 04:19:51.273676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.747  [2024-12-09 04:19:51.289645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.747  [2024-12-09 04:19:51.289669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.747  [2024-12-09 04:19:51.298771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.747  [2024-12-09 04:19:51.298796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.747  [2024-12-09 04:19:51.312199] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.747  [2024-12-09 04:19:51.312239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:22.747  [2024-12-09 04:19:51.321606] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:22.747  [2024-12-09 04:19:51.321647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.005  [2024-12-09 04:19:51.333584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.005  [2024-12-09 04:19:51.333624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.005  [2024-12-09 04:19:51.350539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.005  [2024-12-09 04:19:51.350580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.005  [2024-12-09 04:19:51.365073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.005  [2024-12-09 04:19:51.365099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.005  [2024-12-09 04:19:51.374677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.005  [2024-12-09 04:19:51.374701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.005  [2024-12-09 04:19:51.388752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.005  [2024-12-09 04:19:51.388776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.005  [2024-12-09 04:19:51.398375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.005  [2024-12-09 04:19:51.398403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.005  [2024-12-09 04:19:51.412994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.005  [2024-12-09 04:19:51.413018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.005  [2024-12-09 04:19:51.422400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.005  [2024-12-09 04:19:51.422426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.005  [2024-12-09 04:19:51.436476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.005  [2024-12-09 04:19:51.436503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.005  [2024-12-09 04:19:51.446731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.005  [2024-12-09 04:19:51.446757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.005      11783.50 IOPS,    92.06 MiB/s
[2024-12-09T03:19:51.581Z] [2024-12-09 04:19:51.461731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.005  [2024-12-09 04:19:51.461757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.005  [2024-12-09 04:19:51.476020] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.005  [2024-12-09 04:19:51.476059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.005  [2024-12-09 04:19:51.486033] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.005  [2024-12-09 04:19:51.486059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.005  [2024-12-09 04:19:51.497691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.005  [2024-12-09 04:19:51.497716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.005  [2024-12-09 04:19:51.513211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.005  [2024-12-09 04:19:51.513248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.005  [2024-12-09 04:19:51.522555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.005  [2024-12-09 04:19:51.522582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.005  [2024-12-09 04:19:51.536815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.005  [2024-12-09 04:19:51.536842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.005  [2024-12-09 04:19:51.547037] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.005  [2024-12-09 04:19:51.547078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.005  [2024-12-09 04:19:51.559108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.005  [2024-12-09 04:19:51.559134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.005  [2024-12-09 04:19:51.571453] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.005  [2024-12-09 04:19:51.571481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.262  [2024-12-09 04:19:51.580797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.262  [2024-12-09 04:19:51.580823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.262  [2024-12-09 04:19:51.592505] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.262  [2024-12-09 04:19:51.592541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.262  [2024-12-09 04:19:51.603289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.262  [2024-12-09 04:19:51.603327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.262  [2024-12-09 04:19:51.613954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.262  [2024-12-09 04:19:51.613981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.262  [2024-12-09 04:19:51.629952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.262  [2024-12-09 04:19:51.629979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.262  [2024-12-09 04:19:51.639526] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.262  [2024-12-09 04:19:51.639553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.262  [2024-12-09 04:19:51.651244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.262  [2024-12-09 04:19:51.651297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.262  [2024-12-09 04:19:51.663635] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.262  [2024-12-09 04:19:51.663661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.262  [2024-12-09 04:19:51.673677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.262  [2024-12-09 04:19:51.673703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.262  [2024-12-09 04:19:51.685597] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.262  [2024-12-09 04:19:51.685637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.262  [2024-12-09 04:19:51.701084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.262  [2024-12-09 04:19:51.701108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.262  [2024-12-09 04:19:51.710387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.262  [2024-12-09 04:19:51.710414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.262  [2024-12-09 04:19:51.724475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.262  [2024-12-09 04:19:51.724501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.262  [2024-12-09 04:19:51.734103] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.262  [2024-12-09 04:19:51.734134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.263  [2024-12-09 04:19:51.748404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.263  [2024-12-09 04:19:51.748431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.263  [2024-12-09 04:19:51.757526] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.263  [2024-12-09 04:19:51.757553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.263  [2024-12-09 04:19:51.769046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.263  [2024-12-09 04:19:51.769070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.263  [2024-12-09 04:19:51.779730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.263  [2024-12-09 04:19:51.779753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.263  [2024-12-09 04:19:51.790390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.263  [2024-12-09 04:19:51.790430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.263  [2024-12-09 04:19:51.802825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.263  [2024-12-09 04:19:51.802851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.263  [2024-12-09 04:19:51.817168] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.263  [2024-12-09 04:19:51.817195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.263  [2024-12-09 04:19:51.826961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.263  [2024-12-09 04:19:51.826985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.520  [2024-12-09 04:19:51.840490] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.520  [2024-12-09 04:19:51.840517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.520  [2024-12-09 04:19:51.849741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.520  [2024-12-09 04:19:51.849764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.520  [2024-12-09 04:19:51.861522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.520  [2024-12-09 04:19:51.861564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.520  [2024-12-09 04:19:51.876812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.520  [2024-12-09 04:19:51.876836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.520  [2024-12-09 04:19:51.886538] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.520  [2024-12-09 04:19:51.886579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.520  [2024-12-09 04:19:51.900030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.520  [2024-12-09 04:19:51.900054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.520  [2024-12-09 04:19:51.909874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.520  [2024-12-09 04:19:51.909899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.520  [2024-12-09 04:19:51.924292] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.520  [2024-12-09 04:19:51.924317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.520  [2024-12-09 04:19:51.933741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.520  [2024-12-09 04:19:51.933766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.520  [2024-12-09 04:19:51.945359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.520  [2024-12-09 04:19:51.945400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.520  [2024-12-09 04:19:51.961441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.520  [2024-12-09 04:19:51.961475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.520  [2024-12-09 04:19:51.971063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.520  [2024-12-09 04:19:51.971088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.520  [2024-12-09 04:19:51.983026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.520  [2024-12-09 04:19:51.983051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.520  [2024-12-09 04:19:51.994086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.520  [2024-12-09 04:19:51.994110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.520  [2024-12-09 04:19:52.008875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.520  [2024-12-09 04:19:52.008901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.520  [2024-12-09 04:19:52.018103] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.520  [2024-12-09 04:19:52.018128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.520  [2024-12-09 04:19:52.032373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.520  [2024-12-09 04:19:52.032399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.520  [2024-12-09 04:19:52.041730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.520  [2024-12-09 04:19:52.041756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.520  [2024-12-09 04:19:52.053517] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.520  [2024-12-09 04:19:52.053542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.520  [2024-12-09 04:19:52.069471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.520  [2024-12-09 04:19:52.069498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.520  [2024-12-09 04:19:52.079482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.520  [2024-12-09 04:19:52.079509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.520  [2024-12-09 04:19:52.091213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.520  [2024-12-09 04:19:52.091238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.777  [2024-12-09 04:19:52.101845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.777  [2024-12-09 04:19:52.101868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.777  [2024-12-09 04:19:52.117419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.777  [2024-12-09 04:19:52.117446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.777  [2024-12-09 04:19:52.126881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.777  [2024-12-09 04:19:52.126906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.777  [2024-12-09 04:19:52.141434] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.777  [2024-12-09 04:19:52.141460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.777  [2024-12-09 04:19:52.151007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.777  [2024-12-09 04:19:52.151031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.777  [2024-12-09 04:19:52.162744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.777  [2024-12-09 04:19:52.162782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.777  [2024-12-09 04:19:52.176060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.777  [2024-12-09 04:19:52.176086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.777  [2024-12-09 04:19:52.185352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.777  [2024-12-09 04:19:52.185377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.777  [2024-12-09 04:19:52.197234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.777  [2024-12-09 04:19:52.197258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.777  [2024-12-09 04:19:52.213114] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.777  [2024-12-09 04:19:52.213156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.777  [2024-12-09 04:19:52.223000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.777  [2024-12-09 04:19:52.223024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.777  [2024-12-09 04:19:52.235109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.777  [2024-12-09 04:19:52.235148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.777  [2024-12-09 04:19:52.250827] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.777  [2024-12-09 04:19:52.250865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.777  [2024-12-09 04:19:52.262967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.777  [2024-12-09 04:19:52.262993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.777  [2024-12-09 04:19:52.272845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.777  [2024-12-09 04:19:52.272869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.777  [2024-12-09 04:19:52.284411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.777  [2024-12-09 04:19:52.284437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.777  [2024-12-09 04:19:52.294647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.777  [2024-12-09 04:19:52.294672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.777  [2024-12-09 04:19:52.310379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.777  [2024-12-09 04:19:52.310406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.777  [2024-12-09 04:19:52.328096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.777  [2024-12-09 04:19:52.328121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.777  [2024-12-09 04:19:52.338132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.777  [2024-12-09 04:19:52.338156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:23.777  [2024-12-09 04:19:52.352749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:23.777  [2024-12-09 04:19:52.352776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:24.034  [2024-12-09 04:19:52.362032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:24.034  [2024-12-09 04:19:52.362056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:24.034  [2024-12-09 04:19:52.378103] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:24.034  [2024-12-09 04:19:52.378142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:24.034  [2024-12-09 04:19:52.393890] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:24.034  [2024-12-09 04:19:52.393930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:24.034  [2024-12-09 04:19:52.403646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:24.034  [2024-12-09 04:19:52.403670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:24.034  [2024-12-09 04:19:52.415149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:24.034  [2024-12-09 04:19:52.415174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:24.034  [2024-12-09 04:19:52.428940] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:24.034  [2024-12-09 04:19:52.428966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:24.034  [2024-12-09 04:19:52.438310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:24.034  [2024-12-09 04:19:52.438336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:24.034  [2024-12-09 04:19:52.452671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:24.034  [2024-12-09 04:19:52.452697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:24.034  [2024-12-09 04:19:52.463415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:24.034  [2024-12-09 04:19:52.463442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:24.034      11777.00 IOPS,    92.01 MiB/s
[2024-12-09T03:19:52.610Z] [2024-12-09 04:19:52.472892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:24.034  [2024-12-09 04:19:52.472918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:24.034  
00:29:24.034                                                                                                  Latency(us)
00:29:24.034  
[2024-12-09T03:19:52.611Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:24.035  Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:29:24.035  	 Nvme1n1             :       5.01   11777.37      92.01       0.00     0.00   10854.44    3070.48   17767.54
00:29:24.035  
[2024-12-09T03:19:52.611Z]  ===================================================================================================================
00:29:24.035  
[2024-12-09T03:19:52.611Z]  Total                       :              11777.37      92.01       0.00     0.00   10854.44    3070.48   17767.54
00:29:24.035  [2024-12-09 04:19:52.479766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:24.035  [2024-12-09 04:19:52.479789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:24.035  [2024-12-09 04:19:52.487765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:24.035  [2024-12-09 04:19:52.487788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:24.035  [2024-12-09 04:19:52.495765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:24.035  [2024-12-09 04:19:52.495787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:24.035  [2024-12-09 04:19:52.503845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:24.035  [2024-12-09 04:19:52.503898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:24.035  [2024-12-09 04:19:52.511839] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:24.035  [2024-12-09 04:19:52.511893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:24.035  [2024-12-09 04:19:52.519837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:24.035  [2024-12-09 04:19:52.519888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:24.035  [2024-12-09 04:19:52.527830] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:24.035  [2024-12-09 04:19:52.527881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:24.035  [2024-12-09 04:19:52.535835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:24.035  [2024-12-09 04:19:52.535885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:24.035  [2024-12-09 04:19:52.543837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:24.035  [2024-12-09 04:19:52.543890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:24.035  [2024-12-09 04:19:52.551837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:24.035  [2024-12-09 04:19:52.551889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:24.035  [2024-12-09 04:19:52.559829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:24.035  [2024-12-09 04:19:52.559898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:24.035  [2024-12-09 04:19:52.567834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:24.035  [2024-12-09 04:19:52.567886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:24.035  [2024-12-09 04:19:52.575835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:24.035  [2024-12-09 04:19:52.575885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:24.035  [2024-12-09 04:19:52.583837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:24.035  [2024-12-09 04:19:52.583889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:24.035  [2024-12-09 04:19:52.599872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:24.035  [2024-12-09 04:19:52.599938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:24.035  [2024-12-09 04:19:52.607839] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:24.035  [2024-12-09 04:19:52.607889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:24.293  [2024-12-09 04:19:52.615843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:24.293  [2024-12-09 04:19:52.615894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:24.293  [2024-12-09 04:19:52.623825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:24.293  [2024-12-09 04:19:52.623868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:24.293  [2024-12-09 04:19:52.631765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:24.293  [2024-12-09 04:19:52.631786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:24.293  [2024-12-09 04:19:52.639756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:24.293  [2024-12-09 04:19:52.639775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:24.293  [2024-12-09 04:19:52.647757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:24.293  [2024-12-09 04:19:52.647777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:24.293  [2024-12-09 04:19:52.655767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:24.293  [2024-12-09 04:19:52.655789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:24.293  [2024-12-09 04:19:52.663840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:24.293  [2024-12-09 04:19:52.663889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:24.293  [2024-12-09 04:19:52.671833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:24.293  [2024-12-09 04:19:52.671880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:24.293  [2024-12-09 04:19:52.679762] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:24.293  [2024-12-09 04:19:52.679783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:24.293  [2024-12-09 04:19:52.687756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:24.293  [2024-12-09 04:19:52.687775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:24.293  [2024-12-09 04:19:52.695755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:24.293  [2024-12-09 04:19:52.695774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
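The run of paired errors above is the target repeatedly rejecting an nvmf_subsystem_add_ns call for an NSID that is still registered. A toy model of that check (hypothetical `add_ns` helper; the real logic is spdk_nvmf_subsystem_add_ns_ext in SPDK's subsystem.c, not reproduced here):

```shell
# Toy model of the "NSID already in use" rejection seen above; an
# associative array stands in for the subsystem's namespace table.
declare -A nsids=()
add_ns() {
  local nsid=$1
  if [[ -n ${nsids[$nsid]:-} ]]; then
    echo "Requested NSID $nsid already in use" >&2
    return 1
  fi
  nsids[$nsid]=1
}
add_ns 1             # first registration succeeds
add_ns 1 || true     # re-registering the same NSID fails with the logged error
```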
00:29:24.293  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (383058) - No such process
00:29:24.293   04:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 383058
00:29:24.293   04:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:24.293   04:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:24.293   04:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:29:24.293   04:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:24.293   04:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:29:24.293   04:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:24.293   04:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:29:24.293  delay0
00:29:24.293   04:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:24.293   04:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:29:24.293   04:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:24.293   04:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:29:24.293   04:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:24.293   04:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:29:24.293  [2024-12-09 04:19:52.813667] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:29:32.400  Initializing NVMe Controllers
00:29:32.400  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:32.400  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:32.400  Initialization complete. Launching workers.
00:29:32.400  NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 269, failed: 13657
00:29:32.400  CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 13826, failed to submit 100
00:29:32.400  	 success 13747, unsuccessful 79, failed 0
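The abort example's counters above are internally consistent: every completed or failed I/O in this run corresponds to one abort attempt, and the submitted aborts split into success plus unsuccessful. A small arithmetic check:

```shell
# Sanity check on the abort example's counters above, copied from the log:
# completed + failed I/Os should equal submitted + failed-to-submit aborts,
# and submitted aborts should equal success + unsuccessful.
io_completed=269;      io_failed=13657
abort_submitted=13826; abort_not_submitted=100
abort_success=13747;   abort_unsuccessful=79

echo "I/Os:   $((io_completed + io_failed))"                  # 13926
echo "aborts: $((abort_submitted + abort_not_submitted))"     # 13926
echo "split:  $((abort_success + abort_unsuccessful))"        # 13826
```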
00:29:32.400   04:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:29:32.400   04:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:29:32.400   04:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:32.400   04:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:29:32.400   04:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:32.400   04:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:29:32.400   04:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:32.400   04:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:32.400  rmmod nvme_tcp
00:29:32.400  rmmod nvme_fabrics
00:29:32.401  rmmod nvme_keyring
00:29:32.401   04:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:32.401   04:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:29:32.401   04:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:29:32.401   04:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 381848 ']'
00:29:32.401   04:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 381848
00:29:32.401   04:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 381848 ']'
00:29:32.401   04:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 381848
00:29:32.401    04:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:29:32.401   04:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:32.401    04:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 381848
00:29:32.401   04:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:32.401   04:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:32.401   04:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 381848'
00:29:32.401  killing process with pid 381848
00:29:32.401   04:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 381848
00:29:32.401   04:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 381848
00:29:32.401   04:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:32.401   04:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:29:32.401   04:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:29:32.401   04:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:29:32.401   04:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:29:32.401   04:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:29:32.401   04:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:29:32.401   04:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:32.401   04:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:29:32.401   04:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:32.401   04:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:32.401    04:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:33.431   04:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:33.431  
00:29:33.431  real	0m28.439s
00:29:33.431  user	0m40.546s
00:29:33.431  sys	0m9.771s
00:29:33.431   04:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:33.431   04:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:29:33.431  ************************************
00:29:33.431  END TEST nvmf_zcopy
00:29:33.431  ************************************
00:29:33.431   04:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:29:33.431   04:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:29:33.431   04:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:33.431   04:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:29:33.431  ************************************
00:29:33.431  START TEST nvmf_nmic
00:29:33.431  ************************************
00:29:33.431   04:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:29:33.431  * Looking for test storage...
00:29:33.431  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:29:33.431    04:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:29:33.431     04:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version
00:29:33.431     04:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:29:33.748    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:29:33.748    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:33.748    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:33.748    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:33.748    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-:
00:29:33.748    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1
00:29:33.748    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-:
00:29:33.748    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2
00:29:33.748    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<'
00:29:33.748    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2
00:29:33.749    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1
00:29:33.749    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:33.749    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in
00:29:33.749    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1
00:29:33.749    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:33.749    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:33.749     04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1
00:29:33.749     04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1
00:29:33.749     04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:33.749     04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1
00:29:33.749    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1
00:29:33.749     04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2
00:29:33.749     04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2
00:29:33.749     04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:33.749     04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2
00:29:33.749    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2
00:29:33.749    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:29:33.749    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:33.749    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0
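The xtrace above steps through the component-wise version comparison in scripts/common.sh (`lt 1.15 2`): both version strings are split on `.`, `-` and `:`, then the fields are compared numerically one by one. A self-contained sketch of that algorithm (simplified from the traced script, not a verbatim copy):

```shell
# Simplified re-sketch of the cmp_versions walk traced above: split each
# version string on '.', '-' or ':' and compare the fields as integers,
# treating missing fields as 0. Returns 0 (true) when $1 < $2.
lt() {
  local IFS=.-:
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
  done
  return 1   # equal versions are not "less than"
}
lt 1.15 2 && echo "1.15 < 2"   # matches the traced result: 1.15 is older
```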
00:29:33.749    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:33.749    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:29:33.749  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:33.749  		--rc genhtml_branch_coverage=1
00:29:33.749  		--rc genhtml_function_coverage=1
00:29:33.749  		--rc genhtml_legend=1
00:29:33.749  		--rc geninfo_all_blocks=1
00:29:33.749  		--rc geninfo_unexecuted_blocks=1
00:29:33.749  		
00:29:33.749  		'
00:29:33.749    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:29:33.749  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:33.749  		--rc genhtml_branch_coverage=1
00:29:33.749  		--rc genhtml_function_coverage=1
00:29:33.749  		--rc genhtml_legend=1
00:29:33.749  		--rc geninfo_all_blocks=1
00:29:33.749  		--rc geninfo_unexecuted_blocks=1
00:29:33.749  		
00:29:33.749  		'
00:29:33.749    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:29:33.749  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:33.749  		--rc genhtml_branch_coverage=1
00:29:33.749  		--rc genhtml_function_coverage=1
00:29:33.749  		--rc genhtml_legend=1
00:29:33.749  		--rc geninfo_all_blocks=1
00:29:33.749  		--rc geninfo_unexecuted_blocks=1
00:29:33.749  		
00:29:33.749  		'
00:29:33.749    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:29:33.749  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:33.749  		--rc genhtml_branch_coverage=1
00:29:33.749  		--rc genhtml_function_coverage=1
00:29:33.749  		--rc genhtml_legend=1
00:29:33.749  		--rc geninfo_all_blocks=1
00:29:33.749  		--rc geninfo_unexecuted_blocks=1
00:29:33.749  		
00:29:33.749  		'
00:29:33.749   04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:29:33.749     04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:29:33.749    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:33.749    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:33.749    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:33.749    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:33.749    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:33.749    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:33.749    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:33.749    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:33.749    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:33.749     04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:33.749    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:29:33.749    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:29:33.749    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:33.749    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:29:33.749    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:29:33.749    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:29:33.749    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:29:33.749     04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob
00:29:33.749     04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:33.749     04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:33.749     04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:33.749      04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:33.749      04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:33.749      04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:33.749      04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:29:33.750      04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
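The PATH echoed above has accumulated many duplicate `/opt/...` entries because each nested source of paths/export.sh prepends the same directories again. An order-preserving dedupe could be sketched like this (hypothetical `dedupe_path` helper; the autotest scripts do not actually do this):

```shell
# Order-preserving PATH dedupe: keep only the first occurrence of each
# entry. Purely illustrative of how the duplicated PATH above could be
# normalized.
dedupe_path() {
  local out='' dir
  local IFS=:
  for dir in $1; do          # unquoted on purpose: split on ':'
    case ":$out:" in
      *":$dir:"*) ;;         # already present, skip the duplicate
      *) out="${out:+$out:}$dir" ;;
    esac
  done
  printf '%s\n' "$out"
}
dedupe_path "/opt/go/1.21.1/bin:/usr/bin:/opt/go/1.21.1/bin:/bin:/usr/bin"
```

For the input shown, the first occurrence of each directory is kept, so the result is `/opt/go/1.21.1/bin:/usr/bin:/bin`.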
00:29:33.750    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
00:29:33.750    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:29:33.750    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:29:33.750    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:29:33.750    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:29:33.750    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:29:33.750    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:29:33.750    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:29:33.750    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:29:33.750    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:29:33.750    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0
00:29:33.750   04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:29:33.750   04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:29:33.750   04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
00:29:33.750   04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:29:33.750   04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:29:33.750   04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs
00:29:33.750   04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no
00:29:33.750   04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns
00:29:33.750   04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:33.750   04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:33.750    04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:33.750   04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:29:33.750   04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:29:33.750   04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable
00:29:33.750   04:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=()
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=()
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=()
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=()
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=()
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=()
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=()
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:29:35.737  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:29:35.737  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]]
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:29:35.737  Found net devices under 0000:0a:00.0: cvl_0_0
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]]
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:29:35.737  Found net devices under 0000:0a:00.1: cvl_0_1
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:29:35.737   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:29:35.738   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:29:35.995   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:29:35.995   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:29:35.995   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:29:35.995   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:29:35.995   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:29:35.995   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:29:35.995   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:29:35.995   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:29:35.995  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:35.995  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms
00:29:35.995  
00:29:35.995  --- 10.0.0.2 ping statistics ---
00:29:35.995  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:35.995  rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms
00:29:35.995   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:35.995  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:35.995  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms
00:29:35.995  
00:29:35.995  --- 10.0.0.1 ping statistics ---
00:29:35.995  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:35.995  rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms
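The netns plumbing traced above (common.sh lines @265–@291) can be read as a standalone recipe: one port of the dual-port NIC is moved into a private namespace so the NVMe/TCP target and the initiator exchange traffic over real hardware on a single host. The sketch below is a hedged summary, not the SPDK script itself; the interface names `cvl_0_0`/`cvl_0_1` and the 10.0.0.0/24 addressing are taken from this log, and it must run as root on a machine that actually has those net devices.

```shell
#!/usr/bin/env bash
# Sketch of the target/initiator split performed by nvmf_tcp_init:
# hand one NIC port to a private namespace (target side), keep the
# other in the root namespace (initiator side).
set -euo pipefail

TARGET_IF=cvl_0_0          # port moved into the target namespace
INITIATOR_IF=cvl_0_1       # port left in the root namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# open the NVMe/TCP port on the initiator side, then verify
# reachability in both directions before starting the target
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

With this topology in place, the target app is simply launched under `ip netns exec "$NS"`, which is exactly what the `NVMF_TARGET_NS_CMD` prefix in the trace does.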
00:29:35.995   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:35.995   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0
00:29:35.995   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:29:35.995   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:35.995   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:29:35.995   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:29:35.995   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:35.995   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:29:35.995   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:29:35.995   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:29:35.995   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:35.995   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:35.995   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:29:35.995   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=386563
00:29:35.995   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF
00:29:35.995   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 386563
00:29:35.995   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 386563 ']'
00:29:35.995   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:35.995   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:35.995   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:35.995  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:35.995   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:35.995   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:29:35.995  [2024-12-09 04:20:04.446321] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:29:35.995  [2024-12-09 04:20:04.447335] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:29:35.995  [2024-12-09 04:20:04.447386] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:35.995  [2024-12-09 04:20:04.517616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:29:36.254  [2024-12-09 04:20:04.578127] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:36.254  [2024-12-09 04:20:04.578174] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:36.254  [2024-12-09 04:20:04.578203] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:36.254  [2024-12-09 04:20:04.578218] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:36.254  [2024-12-09 04:20:04.578228] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:36.254  [2024-12-09 04:20:04.579806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:36.254  [2024-12-09 04:20:04.579871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:29:36.254  [2024-12-09 04:20:04.579940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:36.254  [2024-12-09 04:20:04.579936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:29:36.254  [2024-12-09 04:20:04.668020] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:29:36.254  [2024-12-09 04:20:04.668227] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:29:36.254  [2024-12-09 04:20:04.668560] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:29:36.254  [2024-12-09 04:20:04.669114] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:29:36.254  [2024-12-09 04:20:04.669357] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:29:36.254   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:36.254   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0
00:29:36.254   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:29:36.254   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable
00:29:36.254   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:29:36.254   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:36.254   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:29:36.254   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:36.254   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:29:36.254  [2024-12-09 04:20:04.716621] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:36.254   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:36.254   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:36.254   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:36.254   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:29:36.254  Malloc0
00:29:36.254   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:36.254   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:29:36.254   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:36.254   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:29:36.254   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:36.254   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:36.254   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:36.254   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:29:36.254   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:36.254   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:36.254   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:36.254   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:29:36.254  [2024-12-09 04:20:04.780816] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:36.254   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:36.254   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:29:36.254  test case1: single bdev can't be used in multiple subsystems
00:29:36.255   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:29:36.255   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:36.255   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:29:36.255   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:36.255   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:29:36.255   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:36.255   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:29:36.255   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:36.255   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:29:36.255   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:29:36.255   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:36.255   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:29:36.255  [2024-12-09 04:20:04.804533] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:29:36.255  [2024-12-09 04:20:04.804576] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:29:36.255  [2024-12-09 04:20:04.804592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:36.255  request:
00:29:36.255  {
00:29:36.255  "nqn": "nqn.2016-06.io.spdk:cnode2",
00:29:36.255  "namespace": {
00:29:36.255  "bdev_name": "Malloc0",
00:29:36.255  "no_auto_visible": false,
00:29:36.255  "hide_metadata": false
00:29:36.255  },
00:29:36.255  "method": "nvmf_subsystem_add_ns",
00:29:36.255  "req_id": 1
00:29:36.255  }
00:29:36.255  Got JSON-RPC error response
00:29:36.255  response:
00:29:36.255  {
00:29:36.255  "code": -32602,
00:29:36.255  "message": "Invalid parameters"
00:29:36.255  }
00:29:36.255   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:29:36.255   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:29:36.255   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:29:36.255   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
00:29:36.255   Adding namespace failed - expected result.
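Test case1 above exercises the exclusive-write claim: once `Malloc0` is attached to `cnode1`, a second `nvmf_subsystem_add_ns` against `cnode2` must fail with JSON-RPC `-32602`, and the test treats that failure as the pass condition. A hedged sketch of the same sequence using SPDK's `rpc.py` (the `$SPDK_DIR` path is an assumption, and a running `nvmf_tgt` is required):

```shell
# Sketch: reproduce the expected "bdev already claimed" failure via rpc.py.
# $SPDK_DIR is an assumed path to an SPDK checkout; adjust to your setup.
RPC="$SPDK_DIR/scripts/rpc.py"

$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # first claim succeeds

$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
if ! $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    # JSON-RPC returns code -32602 "Invalid parameters": Malloc0 is already
    # claimed (type exclusive_write) by the NVMe-oF target for cnode1.
    echo ' Adding namespace failed - expected result.'
fi
```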
00:29:36.255   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:29:36.255  test case2: host connect to nvmf target in multiple paths
00:29:36.255   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:29:36.255   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:36.255   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:29:36.255  [2024-12-09 04:20:04.812645] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:29:36.255   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:36.255   04:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:29:36.513   04:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
00:29:36.770   04:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:29:36.770   04:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0
00:29:36.770   04:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:29:36.770   04:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:29:36.770   04:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2
00:29:38.665   04:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:29:38.665    04:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:29:38.665    04:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:29:38.665   04:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:29:38.665   04:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:29:38.665   04:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0
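Test case2 connects the host to the same subsystem twice, once per listener (ports 4420 and 4421), so a single namespace ends up backed by two controllers; the later `nvme disconnect` reporting "disconnected 2 controller(s)" confirms both paths were live. A hedged sketch of that flow with nvme-cli (host NQN value copied from this log; kernel `nvme-tcp` must be loaded and the target reachable):

```shell
# Sketch: attach two paths to one subsystem, wait for the namespace,
# then tear both controllers down with a single disconnect.
SUBNQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

nvme connect -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"
nvme connect -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4421 --hostnqn="$HOSTNQN"

# poll until a block device carrying the subsystem's serial appears,
# mirroring the waitforserial loop in the trace above
until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done

nvme list-subsys               # expect two controllers under $SUBNQN
nvme disconnect -n "$SUBNQN"   # detaches both controllers at once
```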
00:29:38.665   04:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:29:38.922  [global]
00:29:38.922  thread=1
00:29:38.922  invalidate=1
00:29:38.922  rw=write
00:29:38.922  time_based=1
00:29:38.922  runtime=1
00:29:38.922  ioengine=libaio
00:29:38.922  direct=1
00:29:38.922  bs=4096
00:29:38.922  iodepth=1
00:29:38.922  norandommap=0
00:29:38.922  numjobs=1
00:29:38.922  
00:29:38.922  verify_dump=1
00:29:38.922  verify_backlog=512
00:29:38.922  verify_state_save=0
00:29:38.922  do_verify=1
00:29:38.922  verify=crc32c-intel
00:29:38.922  [job0]
00:29:38.922  filename=/dev/nvme0n1
00:29:38.922  Could not set queue depth (nvme0n1)
00:29:38.922  job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:29:38.922  fio-3.35
00:29:38.922  Starting 1 thread
00:29:40.295  
00:29:40.295  job0: (groupid=0, jobs=1): err= 0: pid=386940: Mon Dec  9 04:20:08 2024
00:29:40.295    read: IOPS=21, BW=84.8KiB/s (86.8kB/s)(88.0KiB/1038msec)
00:29:40.295      slat (nsec): min=7638, max=16969, avg=15052.73, stdev=1705.73
00:29:40.295      clat (usec): min=40913, max=41043, avg=40978.37, stdev=32.47
00:29:40.295       lat (usec): min=40929, max=41058, avg=40993.42, stdev=33.00
00:29:40.295      clat percentiles (usec):
00:29:40.295       |  1.00th=[41157],  5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:29:40.295       | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:29:40.295       | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:29:40.295       | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:29:40.295       | 99.99th=[41157]
00:29:40.295    write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets
00:29:40.295      slat (usec): min=7, max=28712, avg=67.68, stdev=1268.44
00:29:40.295      clat (usec): min=143, max=339, avg=193.58, stdev=41.92
00:29:40.295       lat (usec): min=152, max=28947, avg=261.26, stdev=1270.98
00:29:40.295      clat percentiles (usec):
00:29:40.295       |  1.00th=[  149],  5.00th=[  151], 10.00th=[  151], 20.00th=[  153],
00:29:40.295       | 30.00th=[  157], 40.00th=[  159], 50.00th=[  169], 60.00th=[  225],
00:29:40.295       | 70.00th=[  235], 80.00th=[  241], 90.00th=[  249], 95.00th=[  253],
00:29:40.295       | 99.00th=[  258], 99.50th=[  265], 99.90th=[  338], 99.95th=[  338],
00:29:40.295       | 99.99th=[  338]
00:29:40.295     bw (  KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1
00:29:40.295     iops        : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:29:40.295    lat (usec)   : 250=87.08%, 500=8.80%
00:29:40.295    lat (msec)   : 50=4.12%
00:29:40.295    cpu          : usr=0.58%, sys=0.58%, ctx=537, majf=0, minf=1
00:29:40.295    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:29:40.296       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:40.296       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:40.296       issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:40.296       latency   : target=0, window=0, percentile=100.00%, depth=1
00:29:40.296  
00:29:40.296  Run status group 0 (all jobs):
00:29:40.296     READ: bw=84.8KiB/s (86.8kB/s), 84.8KiB/s-84.8KiB/s (86.8kB/s-86.8kB/s), io=88.0KiB (90.1kB), run=1038-1038msec
00:29:40.296    WRITE: bw=1973KiB/s (2020kB/s), 1973KiB/s-1973KiB/s (2020kB/s-2020kB/s), io=2048KiB (2097kB), run=1038-1038msec
00:29:40.296  
00:29:40.296  Disk stats (read/write):
00:29:40.296    nvme0n1: ios=44/512, merge=0/0, ticks=1724/86, in_queue=1810, util=98.70%
00:29:40.296   04:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:29:40.296  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:29:40.296   04:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:29:40.296   04:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0
00:29:40.296   04:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:29:40.296   04:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:29:40.296   04:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:29:40.296   04:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:29:40.296   04:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0
00:29:40.296   04:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:29:40.296   04:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini
00:29:40.296   04:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:40.296   04:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync
00:29:40.296   04:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:40.296   04:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e
00:29:40.296   04:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:40.296   04:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:40.296  rmmod nvme_tcp
00:29:40.296  rmmod nvme_fabrics
00:29:40.296  rmmod nvme_keyring
00:29:40.296   04:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:40.296   04:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e
00:29:40.296   04:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0
00:29:40.296   04:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 386563 ']'
00:29:40.296   04:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 386563
00:29:40.296   04:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 386563 ']'
00:29:40.296   04:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 386563
00:29:40.296    04:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname
00:29:40.296   04:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:40.296    04:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 386563
00:29:40.296   04:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:29:40.296   04:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:29:40.296   04:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 386563'
00:29:40.296  killing process with pid 386563
00:29:40.296   04:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 386563
00:29:40.296   04:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 386563
00:29:40.555   04:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:40.555   04:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:29:40.555   04:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:29:40.555   04:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr
00:29:40.555   04:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save
00:29:40.555   04:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:29:40.555   04:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore
00:29:40.555   04:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:40.555   04:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns
00:29:40.555   04:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:40.555   04:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:40.555    04:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:43.144   04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:43.144  
00:29:43.144  real	0m9.254s
00:29:43.144  user	0m17.192s
00:29:43.144  sys	0m3.359s
00:29:43.144   04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:43.144   04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:29:43.144  ************************************
00:29:43.144  END TEST nvmf_nmic
00:29:43.144  ************************************
00:29:43.144   04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode
00:29:43.144   04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:29:43.144   04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:43.144   04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:29:43.144  ************************************
00:29:43.144  START TEST nvmf_fio_target
00:29:43.144  ************************************
00:29:43.144   04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode
00:29:43.144  * Looking for test storage...
00:29:43.144  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:29:43.144    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:29:43.144     04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version
00:29:43.144     04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:29:43.144    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:29:43.144    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:43.144    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:43.144    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:43.144    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-:
00:29:43.144    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1
00:29:43.144    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-:
00:29:43.144    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2
00:29:43.144    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<'
00:29:43.144    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2
00:29:43.144    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1
00:29:43.144    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:43.144    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in
00:29:43.144    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1
00:29:43.144    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:43.144    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:43.144     04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1
00:29:43.144     04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1
00:29:43.144     04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:43.144     04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1
00:29:43.144    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1
00:29:43.144     04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2
00:29:43.144     04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2
00:29:43.144     04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:43.144     04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2
00:29:43.144    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2
00:29:43.144    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:29:43.144    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:43.144    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0
00:29:43.144    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:43.144    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:29:43.144  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:43.144  		--rc genhtml_branch_coverage=1
00:29:43.144  		--rc genhtml_function_coverage=1
00:29:43.144  		--rc genhtml_legend=1
00:29:43.144  		--rc geninfo_all_blocks=1
00:29:43.144  		--rc geninfo_unexecuted_blocks=1
00:29:43.144  		
00:29:43.144  		'
00:29:43.144    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:29:43.144  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:43.144  		--rc genhtml_branch_coverage=1
00:29:43.144  		--rc genhtml_function_coverage=1
00:29:43.144  		--rc genhtml_legend=1
00:29:43.144  		--rc geninfo_all_blocks=1
00:29:43.144  		--rc geninfo_unexecuted_blocks=1
00:29:43.144  		
00:29:43.144  		'
00:29:43.144    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:29:43.144  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:43.144  		--rc genhtml_branch_coverage=1
00:29:43.144  		--rc genhtml_function_coverage=1
00:29:43.144  		--rc genhtml_legend=1
00:29:43.144  		--rc geninfo_all_blocks=1
00:29:43.144  		--rc geninfo_unexecuted_blocks=1
00:29:43.144  		
00:29:43.144  		'
00:29:43.144    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:29:43.144  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:43.144  		--rc genhtml_branch_coverage=1
00:29:43.144  		--rc genhtml_function_coverage=1
00:29:43.144  		--rc genhtml_legend=1
00:29:43.144  		--rc geninfo_all_blocks=1
00:29:43.144  		--rc geninfo_unexecuted_blocks=1
00:29:43.144  		
00:29:43.144  		'
00:29:43.144   04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:29:43.144     04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s
00:29:43.144    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:43.144    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:43.144    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:43.145    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:43.145    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:43.145    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:43.145    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:43.145    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:43.145    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:43.145     04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:43.145    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:29:43.145    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:29:43.145    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:43.145    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:29:43.145    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:29:43.145    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:29:43.145    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:29:43.145     04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob
00:29:43.145     04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:43.145     04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:43.145     04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:43.145      04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:43.145      04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:43.145      04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:43.145      04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH
00:29:43.145      04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:43.145    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0
00:29:43.145    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:29:43.145    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:29:43.145    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:29:43.145    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:29:43.145    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:29:43.145    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:29:43.145    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:29:43.145    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:29:43.145    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:29:43.145    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0
00:29:43.145   04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:29:43.145   04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:29:43.145   04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:29:43.145   04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit
00:29:43.145   04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:29:43.145   04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:29:43.145   04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs
00:29:43.145   04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no
00:29:43.145   04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns
00:29:43.145   04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:43.145   04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:43.145    04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:43.145   04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:29:43.145   04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:29:43.145   04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable
00:29:43.145   04:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=()
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=()
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=()
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=()
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=()
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=()
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=()
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:29:45.046  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:29:45.046  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:45.046   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:45.047   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:29:45.047   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:45.047   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:29:45.047   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:45.047   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:45.047   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:29:45.047  Found net devices under 0000:0a:00.0: cvl_0_0
00:29:45.047   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:45.047   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:45.047   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:45.047   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:29:45.047   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:45.047   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:29:45.047   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:45.047   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:45.047   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:29:45.047  Found net devices under 0000:0a:00.1: cvl_0_1
00:29:45.047   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:45.047   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:29:45.047   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes
00:29:45.047   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:29:45.047   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:29:45.047   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:29:45.047   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:29:45.047   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:29:45.047   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:29:45.047   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:29:45.047   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:29:45.047   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:29:45.047   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:29:45.047   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:29:45.047   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:29:45.047   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:29:45.047   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:29:45.047   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:29:45.047   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:29:45.047   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:29:45.047   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:29:45.047   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:29:45.047   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:29:45.047   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:29:45.047   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:29:45.305   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:29:45.305   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:29:45.305   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:29:45.305   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:29:45.305  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:45.305  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms
00:29:45.305  
00:29:45.305  --- 10.0.0.2 ping statistics ---
00:29:45.305  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:45.305  rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms
00:29:45.305   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:45.305  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:45.305  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms
00:29:45.305  
00:29:45.305  --- 10.0.0.1 ping statistics ---
00:29:45.305  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:45.305  rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms
00:29:45.305   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:45.305   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0
00:29:45.305   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:29:45.305   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:45.305   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:29:45.305   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:29:45.305   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:45.305   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:29:45.305   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:29:45.305   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF
00:29:45.306   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:45.306   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:45.306   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:29:45.306   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=389141
00:29:45.306   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF
00:29:45.306   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 389141
00:29:45.306   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 389141 ']'
00:29:45.306   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:45.306   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:45.306   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:45.306  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:45.306   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:45.306   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:29:45.306  [2024-12-09 04:20:13.731667] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:29:45.306  [2024-12-09 04:20:13.732707] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:29:45.306  [2024-12-09 04:20:13.732779] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:45.306  [2024-12-09 04:20:13.805191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:29:45.306  [2024-12-09 04:20:13.859496] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:45.306  [2024-12-09 04:20:13.859554] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:45.306  [2024-12-09 04:20:13.859578] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:45.306  [2024-12-09 04:20:13.859589] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:45.306  [2024-12-09 04:20:13.859599] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:45.306  [2024-12-09 04:20:13.861148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:45.306  [2024-12-09 04:20:13.861210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:29:45.306  [2024-12-09 04:20:13.861291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:29:45.306  [2024-12-09 04:20:13.861310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:45.563  [2024-12-09 04:20:13.946606] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:29:45.563  [2024-12-09 04:20:13.946826] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:29:45.563  [2024-12-09 04:20:13.947114] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:29:45.563  [2024-12-09 04:20:13.947800] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:29:45.563  [2024-12-09 04:20:13.947998] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:29:45.563   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:45.563   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0
00:29:45.563   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:29:45.563   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable
00:29:45.563   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:29:45.563   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:45.563   04:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:29:45.819  [2024-12-09 04:20:14.302044] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:45.819    04:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:29:46.385   04:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 '
00:29:46.385    04:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:29:46.385   04:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1
00:29:46.385    04:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:29:46.949   04:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 '
00:29:46.949    04:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:29:47.206   04:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3
00:29:47.206   04:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
00:29:47.464    04:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:29:47.721   04:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 '
00:29:47.721    04:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:29:47.978   04:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 '
00:29:47.978    04:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:29:48.235   04:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6
00:29:48.235   04:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
00:29:48.493   04:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:29:48.751   04:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:29:48.751   04:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:49.008   04:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:29:49.008   04:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:29:49.265   04:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:49.522  [2024-12-09 04:20:18.086204] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:49.779   04:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
00:29:50.036   04:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
00:29:50.294   04:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:29:50.552   04:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4
00:29:50.552   04:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0
00:29:50.552   04:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:29:50.552   04:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]]
00:29:50.552   04:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4
00:29:50.552   04:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2
00:29:52.447   04:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:29:52.447    04:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:29:52.447    04:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:29:52.447   04:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4
00:29:52.447   04:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:29:52.447   04:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0
00:29:52.447   04:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:29:52.447  [global]
00:29:52.447  thread=1
00:29:52.447  invalidate=1
00:29:52.447  rw=write
00:29:52.447  time_based=1
00:29:52.447  runtime=1
00:29:52.447  ioengine=libaio
00:29:52.447  direct=1
00:29:52.447  bs=4096
00:29:52.447  iodepth=1
00:29:52.447  norandommap=0
00:29:52.447  numjobs=1
00:29:52.447  
00:29:52.447  verify_dump=1
00:29:52.447  verify_backlog=512
00:29:52.447  verify_state_save=0
00:29:52.447  do_verify=1
00:29:52.447  verify=crc32c-intel
00:29:52.447  [job0]
00:29:52.447  filename=/dev/nvme0n1
00:29:52.447  [job1]
00:29:52.447  filename=/dev/nvme0n2
00:29:52.447  [job2]
00:29:52.447  filename=/dev/nvme0n3
00:29:52.447  [job3]
00:29:52.447  filename=/dev/nvme0n4
00:29:52.447  Could not set queue depth (nvme0n1)
00:29:52.447  Could not set queue depth (nvme0n2)
00:29:52.447  Could not set queue depth (nvme0n3)
00:29:52.447  Could not set queue depth (nvme0n4)
00:29:52.704  job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:29:52.704  job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:29:52.704  job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:29:52.704  job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:29:52.704  fio-3.35
00:29:52.704  Starting 4 threads
00:29:54.076  
00:29:54.076  job0: (groupid=0, jobs=1): err= 0: pid=390088: Mon Dec  9 04:20:22 2024
00:29:54.076    read: IOPS=22, BW=89.4KiB/s (91.6kB/s)(92.0KiB/1029msec)
00:29:54.076      slat (nsec): min=6491, max=34648, avg=23036.04, stdev=10339.73
00:29:54.076      clat (usec): min=299, max=41991, avg=39381.89, stdev=8528.36
00:29:54.076       lat (usec): min=306, max=42007, avg=39404.92, stdev=8531.94
00:29:54.076      clat percentiles (usec):
00:29:54.076       |  1.00th=[  302],  5.00th=[40633], 10.00th=[41157], 20.00th=[41157],
00:29:54.076       | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:29:54.077       | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206],
00:29:54.077       | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:29:54.077       | 99.99th=[42206]
00:29:54.077    write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets
00:29:54.077      slat (nsec): min=7852, max=34608, avg=10445.86, stdev=3071.60
00:29:54.077      clat (usec): min=167, max=259, avg=224.50, stdev=25.91
00:29:54.077       lat (usec): min=179, max=273, avg=234.95, stdev=26.05
00:29:54.077      clat percentiles (usec):
00:29:54.077       |  1.00th=[  174],  5.00th=[  178], 10.00th=[  182], 20.00th=[  192],
00:29:54.077       | 30.00th=[  210], 40.00th=[  233], 50.00th=[  239], 60.00th=[  241],
00:29:54.077       | 70.00th=[  243], 80.00th=[  245], 90.00th=[  249], 95.00th=[  251],
00:29:54.077       | 99.00th=[  258], 99.50th=[  258], 99.90th=[  260], 99.95th=[  260],
00:29:54.077       | 99.99th=[  260]
00:29:54.077     bw (  KiB/s): min= 4096, max= 4096, per=23.09%, avg=4096.00, stdev= 0.00, samples=1
00:29:54.077     iops        : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:29:54.077    lat (usec)   : 250=90.65%, 500=5.23%
00:29:54.077    lat (msec)   : 50=4.11%
00:29:54.077    cpu          : usr=0.49%, sys=0.58%, ctx=536, majf=0, minf=1
00:29:54.077    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:29:54.077       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:54.077       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:54.077       issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:54.077       latency   : target=0, window=0, percentile=100.00%, depth=1
00:29:54.077  job1: (groupid=0, jobs=1): err= 0: pid=390090: Mon Dec  9 04:20:22 2024
00:29:54.077    read: IOPS=1720, BW=6881KiB/s (7046kB/s)(6888KiB/1001msec)
00:29:54.077      slat (nsec): min=4460, max=54680, avg=15680.59, stdev=7047.19
00:29:54.077      clat (usec): min=183, max=1108, avg=300.23, stdev=106.26
00:29:54.077       lat (usec): min=193, max=1126, avg=315.91, stdev=108.75
00:29:54.077      clat percentiles (usec):
00:29:54.077       |  1.00th=[  198],  5.00th=[  204], 10.00th=[  210], 20.00th=[  227],
00:29:54.077       | 30.00th=[  237], 40.00th=[  241], 50.00th=[  249], 60.00th=[  265],
00:29:54.077       | 70.00th=[  326], 80.00th=[  388], 90.00th=[  474], 95.00th=[  515],
00:29:54.077       | 99.00th=[  611], 99.50th=[  644], 99.90th=[  824], 99.95th=[ 1106],
00:29:54.077       | 99.99th=[ 1106]
00:29:54.077    write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets
00:29:54.077      slat (nsec): min=5707, max=64279, avg=16578.44, stdev=7567.74
00:29:54.077      clat (usec): min=137, max=990, avg=197.55, stdev=44.87
00:29:54.077       lat (usec): min=144, max=998, avg=214.13, stdev=44.14
00:29:54.077      clat percentiles (usec):
00:29:54.077       |  1.00th=[  149],  5.00th=[  151], 10.00th=[  153], 20.00th=[  163],
00:29:54.077       | 30.00th=[  176], 40.00th=[  182], 50.00th=[  188], 60.00th=[  192],
00:29:54.077       | 70.00th=[  200], 80.00th=[  237], 90.00th=[  265], 95.00th=[  281],
00:29:54.077       | 99.00th=[  318], 99.50th=[  338], 99.90th=[  404], 99.95th=[  408],
00:29:54.077       | 99.99th=[  988]
00:29:54.077     bw (  KiB/s): min= 8192, max= 8192, per=46.18%, avg=8192.00, stdev= 0.00, samples=1
00:29:54.077     iops        : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1
00:29:54.077    lat (usec)   : 250=69.60%, 500=27.53%, 750=2.73%, 1000=0.11%
00:29:54.077    lat (msec)   : 2=0.03%
00:29:54.077    cpu          : usr=3.80%, sys=8.30%, ctx=3772, majf=0, minf=1
00:29:54.077    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:29:54.077       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:54.077       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:54.077       issued rwts: total=1722,2048,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:54.077       latency   : target=0, window=0, percentile=100.00%, depth=1
00:29:54.077  job2: (groupid=0, jobs=1): err= 0: pid=390091: Mon Dec  9 04:20:22 2024
00:29:54.077    read: IOPS=22, BW=88.5KiB/s (90.7kB/s)(92.0KiB/1039msec)
00:29:54.077      slat (nsec): min=6190, max=38527, avg=25136.96, stdev=10589.04
00:29:54.077      clat (usec): min=347, max=41315, avg=39211.48, stdev=8472.55
00:29:54.077       lat (usec): min=366, max=41321, avg=39236.62, stdev=8473.76
00:29:54.077      clat percentiles (usec):
00:29:54.077       |  1.00th=[  347],  5.00th=[40633], 10.00th=[41157], 20.00th=[41157],
00:29:54.077       | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:29:54.077       | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:29:54.077       | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:29:54.077       | 99.99th=[41157]
00:29:54.077    write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets
00:29:54.077      slat (nsec): min=7047, max=84245, avg=11501.87, stdev=5183.63
00:29:54.077      clat (usec): min=156, max=400, avg=250.70, stdev=33.98
00:29:54.077       lat (usec): min=190, max=410, avg=262.21, stdev=33.20
00:29:54.077      clat percentiles (usec):
00:29:54.077       |  1.00th=[  180],  5.00th=[  192], 10.00th=[  200], 20.00th=[  223],
00:29:54.077       | 30.00th=[  243], 40.00th=[  249], 50.00th=[  253], 60.00th=[  260],
00:29:54.077       | 70.00th=[  265], 80.00th=[  273], 90.00th=[  285], 95.00th=[  302],
00:29:54.077       | 99.00th=[  347], 99.50th=[  371], 99.90th=[  400], 99.95th=[  400],
00:29:54.077       | 99.99th=[  400]
00:29:54.077     bw (  KiB/s): min= 4096, max= 4096, per=23.09%, avg=4096.00, stdev= 0.00, samples=1
00:29:54.077     iops        : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:29:54.077    lat (usec)   : 250=41.12%, 500=54.77%
00:29:54.077    lat (msec)   : 50=4.11%
00:29:54.077    cpu          : usr=0.48%, sys=0.67%, ctx=535, majf=0, minf=1
00:29:54.077    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:29:54.077       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:54.077       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:54.077       issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:54.077       latency   : target=0, window=0, percentile=100.00%, depth=1
00:29:54.077  job3: (groupid=0, jobs=1): err= 0: pid=390092: Mon Dec  9 04:20:22 2024
00:29:54.077    read: IOPS=1512, BW=6050KiB/s (6195kB/s)(6056KiB/1001msec)
00:29:54.077      slat (nsec): min=5457, max=60946, avg=16445.15, stdev=8388.49
00:29:54.077      clat (usec): min=201, max=41116, avg=406.14, stdev=2106.20
00:29:54.077       lat (usec): min=207, max=41123, avg=422.59, stdev=2106.31
00:29:54.077      clat percentiles (usec):
00:29:54.077       |  1.00th=[  235],  5.00th=[  243], 10.00th=[  247], 20.00th=[  251],
00:29:54.077       | 30.00th=[  260], 40.00th=[  265], 50.00th=[  273], 60.00th=[  281],
00:29:54.077       | 70.00th=[  289], 80.00th=[  297], 90.00th=[  343], 95.00th=[  461],
00:29:54.077       | 99.00th=[  537], 99.50th=[  635], 99.90th=[41157], 99.95th=[41157],
00:29:54.077       | 99.99th=[41157]
00:29:54.077    write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets
00:29:54.077      slat (nsec): min=6601, max=69049, avg=15860.67, stdev=7936.70
00:29:54.077      clat (usec): min=154, max=912, avg=209.85, stdev=53.46
00:29:54.077       lat (usec): min=166, max=927, avg=225.71, stdev=56.70
00:29:54.077      clat percentiles (usec):
00:29:54.077       |  1.00th=[  163],  5.00th=[  169], 10.00th=[  174], 20.00th=[  178],
00:29:54.077       | 30.00th=[  182], 40.00th=[  186], 50.00th=[  192], 60.00th=[  198],
00:29:54.077       | 70.00th=[  217], 80.00th=[  235], 90.00th=[  260], 95.00th=[  318],
00:29:54.077       | 99.00th=[  404], 99.50th=[  441], 99.90th=[  709], 99.95th=[  914],
00:29:54.077       | 99.99th=[  914]
00:29:54.077     bw (  KiB/s): min= 6008, max= 6008, per=33.87%, avg=6008.00, stdev= 0.00, samples=1
00:29:54.077     iops        : min= 1502, max= 1502, avg=1502.00, stdev= 0.00, samples=1
00:29:54.077    lat (usec)   : 250=53.25%, 500=45.34%, 750=1.15%, 1000=0.03%
00:29:54.077    lat (msec)   : 4=0.03%, 10=0.07%, 50=0.13%
00:29:54.077    cpu          : usr=2.40%, sys=5.30%, ctx=3051, majf=0, minf=1
00:29:54.077    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:29:54.077       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:54.077       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:54.077       issued rwts: total=1514,1536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:54.077       latency   : target=0, window=0, percentile=100.00%, depth=1
00:29:54.077  
00:29:54.077  Run status group 0 (all jobs):
00:29:54.077     READ: bw=12.3MiB/s (12.9MB/s), 88.5KiB/s-6881KiB/s (90.7kB/s-7046kB/s), io=12.8MiB (13.4MB), run=1001-1039msec
00:29:54.077    WRITE: bw=17.3MiB/s (18.2MB/s), 1971KiB/s-8184KiB/s (2018kB/s-8380kB/s), io=18.0MiB (18.9MB), run=1001-1039msec
00:29:54.077  
00:29:54.077  Disk stats (read/write):
00:29:54.077    nvme0n1: ios=70/512, merge=0/0, ticks=975/111, in_queue=1086, util=97.90%
00:29:54.077    nvme0n2: ios=1501/1536, merge=0/0, ticks=1033/303, in_queue=1336, util=98.37%
00:29:54.077    nvme0n3: ios=18/512, merge=0/0, ticks=698/120, in_queue=818, util=88.92%
00:29:54.077    nvme0n4: ios=1097/1536, merge=0/0, ticks=1033/318, in_queue=1351, util=98.21%
00:29:54.077   04:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v
00:29:54.077  [global]
00:29:54.077  thread=1
00:29:54.077  invalidate=1
00:29:54.077  rw=randwrite
00:29:54.077  time_based=1
00:29:54.077  runtime=1
00:29:54.077  ioengine=libaio
00:29:54.077  direct=1
00:29:54.077  bs=4096
00:29:54.077  iodepth=1
00:29:54.077  norandommap=0
00:29:54.077  numjobs=1
00:29:54.077  
00:29:54.077  verify_dump=1
00:29:54.077  verify_backlog=512
00:29:54.077  verify_state_save=0
00:29:54.077  do_verify=1
00:29:54.077  verify=crc32c-intel
00:29:54.077  [job0]
00:29:54.077  filename=/dev/nvme0n1
00:29:54.077  [job1]
00:29:54.077  filename=/dev/nvme0n2
00:29:54.077  [job2]
00:29:54.077  filename=/dev/nvme0n3
00:29:54.077  [job3]
00:29:54.077  filename=/dev/nvme0n4
00:29:54.077  Could not set queue depth (nvme0n1)
00:29:54.078  Could not set queue depth (nvme0n2)
00:29:54.078  Could not set queue depth (nvme0n3)
00:29:54.078  Could not set queue depth (nvme0n4)
00:29:54.078  job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:29:54.078  job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:29:54.078  job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:29:54.078  job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:29:54.078  fio-3.35
00:29:54.078  Starting 4 threads
00:29:55.451  
00:29:55.451  job0: (groupid=0, jobs=1): err= 0: pid=390435: Mon Dec  9 04:20:23 2024
00:29:55.451    read: IOPS=21, BW=84.5KiB/s (86.6kB/s)(88.0KiB/1041msec)
00:29:55.451      slat (nsec): min=7945, max=46405, avg=22946.00, stdev=11119.32
00:29:55.451      clat (usec): min=40697, max=42005, avg=41729.82, stdev=455.89
00:29:55.451       lat (usec): min=40705, max=42020, avg=41752.77, stdev=456.70
00:29:55.451      clat percentiles (usec):
00:29:55.451       |  1.00th=[40633],  5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:29:55.451       | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206],
00:29:55.451       | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:29:55.451       | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:29:55.451       | 99.99th=[42206]
00:29:55.451    write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets
00:29:55.451      slat (nsec): min=7114, max=31830, avg=8672.88, stdev=2183.16
00:29:55.451      clat (usec): min=184, max=276, avg=227.42, stdev=14.95
00:29:55.451       lat (usec): min=192, max=286, avg=236.09, stdev=15.04
00:29:55.451      clat percentiles (usec):
00:29:55.451       |  1.00th=[  190],  5.00th=[  196], 10.00th=[  210], 20.00th=[  219],
00:29:55.451       | 30.00th=[  221], 40.00th=[  225], 50.00th=[  229], 60.00th=[  231],
00:29:55.451       | 70.00th=[  235], 80.00th=[  241], 90.00th=[  245], 95.00th=[  253],
00:29:55.451       | 99.00th=[  260], 99.50th=[  265], 99.90th=[  277], 99.95th=[  277],
00:29:55.451       | 99.99th=[  277]
00:29:55.451     bw (  KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1
00:29:55.451     iops        : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:29:55.451    lat (usec)   : 250=90.45%, 500=5.43%
00:29:55.451    lat (msec)   : 50=4.12%
00:29:55.451    cpu          : usr=0.48%, sys=0.48%, ctx=535, majf=0, minf=2
00:29:55.451    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:29:55.451       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:55.451       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:55.451       issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:55.451       latency   : target=0, window=0, percentile=100.00%, depth=1
00:29:55.451  job1: (groupid=0, jobs=1): err= 0: pid=390436: Mon Dec  9 04:20:23 2024
00:29:55.451    read: IOPS=21, BW=85.9KiB/s (88.0kB/s)(88.0KiB/1024msec)
00:29:55.451      slat (nsec): min=7023, max=34992, avg=21983.64, stdev=9985.87
00:29:55.451      clat (usec): min=40892, max=41083, avg=40975.06, stdev=49.69
00:29:55.451       lat (usec): min=40927, max=41092, avg=40997.05, stdev=45.35
00:29:55.451      clat percentiles (usec):
00:29:55.451       |  1.00th=[40633],  5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:29:55.451       | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:29:55.451       | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:29:55.451       | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:29:55.451       | 99.99th=[41157]
00:29:55.451    write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets
00:29:55.451      slat (nsec): min=6193, max=25675, avg=7462.60, stdev=1643.51
00:29:55.451      clat (usec): min=157, max=353, avg=228.71, stdev=15.69
00:29:55.451       lat (usec): min=164, max=378, avg=236.17, stdev=15.95
00:29:55.451      clat percentiles (usec):
00:29:55.451       |  1.00th=[  190],  5.00th=[  198], 10.00th=[  215], 20.00th=[  221],
00:29:55.451       | 30.00th=[  225], 40.00th=[  227], 50.00th=[  229], 60.00th=[  231],
00:29:55.451       | 70.00th=[  235], 80.00th=[  239], 90.00th=[  245], 95.00th=[  251],
00:29:55.451       | 99.00th=[  262], 99.50th=[  269], 99.90th=[  355], 99.95th=[  355],
00:29:55.451       | 99.99th=[  355]
00:29:55.451     bw (  KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1
00:29:55.451     iops        : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:29:55.451    lat (usec)   : 250=90.64%, 500=5.24%
00:29:55.451    lat (msec)   : 50=4.12%
00:29:55.451    cpu          : usr=0.10%, sys=0.49%, ctx=535, majf=0, minf=1
00:29:55.451    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:29:55.451       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:55.451       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:55.451       issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:55.451       latency   : target=0, window=0, percentile=100.00%, depth=1
00:29:55.451  job2: (groupid=0, jobs=1): err= 0: pid=390437: Mon Dec  9 04:20:23 2024
00:29:55.451    read: IOPS=21, BW=87.4KiB/s (89.5kB/s)(88.0KiB/1007msec)
00:29:55.451      slat (nsec): min=6580, max=39119, avg=24682.95, stdev=10673.30
00:29:55.451      clat (usec): min=40914, max=41056, avg=40971.30, stdev=35.41
00:29:55.451       lat (usec): min=40948, max=41062, avg=40995.99, stdev=29.48
00:29:55.451      clat percentiles (usec):
00:29:55.451       |  1.00th=[41157],  5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:29:55.451       | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:29:55.451       | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:29:55.451       | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:29:55.451       | 99.99th=[41157]
00:29:55.451    write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets
00:29:55.451      slat (nsec): min=6219, max=27122, avg=7456.31, stdev=2123.37
00:29:55.451      clat (usec): min=159, max=416, avg=195.87, stdev=33.02
00:29:55.451       lat (usec): min=166, max=424, avg=203.32, stdev=33.48
00:29:55.451      clat percentiles (usec):
00:29:55.451       |  1.00th=[  163],  5.00th=[  167], 10.00th=[  169], 20.00th=[  174],
00:29:55.451       | 30.00th=[  176], 40.00th=[  178], 50.00th=[  182], 60.00th=[  188],
00:29:55.451       | 70.00th=[  196], 80.00th=[  233], 90.00th=[  241], 95.00th=[  251],
00:29:55.451       | 99.00th=[  269], 99.50th=[  375], 99.90th=[  416], 99.95th=[  416],
00:29:55.451       | 99.99th=[  416]
00:29:55.451     bw (  KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1
00:29:55.451     iops        : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:29:55.451    lat (usec)   : 250=90.64%, 500=5.24%
00:29:55.451    lat (msec)   : 50=4.12%
00:29:55.451    cpu          : usr=0.30%, sys=0.30%, ctx=535, majf=0, minf=1
00:29:55.451    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:29:55.451       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:55.451       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:55.451       issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:55.451       latency   : target=0, window=0, percentile=100.00%, depth=1
00:29:55.451  job3: (groupid=0, jobs=1): err= 0: pid=390440: Mon Dec  9 04:20:23 2024
00:29:55.452    read: IOPS=31, BW=126KiB/s (129kB/s)(128KiB/1019msec)
00:29:55.452      slat (nsec): min=7321, max=37277, avg=23709.50, stdev=9176.37
00:29:55.452      clat (usec): min=308, max=41334, avg=28198.79, stdev=19052.86
00:29:55.452       lat (usec): min=344, max=41341, avg=28222.50, stdev=19045.31
00:29:55.452      clat percentiles (usec):
00:29:55.452       |  1.00th=[  310],  5.00th=[  310], 10.00th=[  330], 20.00th=[  343],
00:29:55.452       | 30.00th=[  898], 40.00th=[40633], 50.00th=[40633], 60.00th=[41157],
00:29:55.452       | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:29:55.452       | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:29:55.452       | 99.99th=[41157]
00:29:55.452    write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets
00:29:55.452      slat (nsec): min=6168, max=31971, avg=9299.35, stdev=2613.16
00:29:55.452      clat (usec): min=155, max=445, avg=212.29, stdev=31.13
00:29:55.452       lat (usec): min=164, max=452, avg=221.59, stdev=30.92
00:29:55.452      clat percentiles (usec):
00:29:55.452       |  1.00th=[  169],  5.00th=[  178], 10.00th=[  182], 20.00th=[  192],
00:29:55.452       | 30.00th=[  196], 40.00th=[  202], 50.00th=[  206], 60.00th=[  212],
00:29:55.452       | 70.00th=[  223], 80.00th=[  235], 90.00th=[  243], 95.00th=[  251],
00:29:55.452       | 99.00th=[  343], 99.50th=[  408], 99.90th=[  445], 99.95th=[  445],
00:29:55.452       | 99.99th=[  445]
00:29:55.452     bw (  KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1
00:29:55.452     iops        : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:29:55.452    lat (usec)   : 250=89.52%, 500=6.25%, 1000=0.18%
00:29:55.452    lat (msec)   : 50=4.04%
00:29:55.452    cpu          : usr=0.29%, sys=0.69%, ctx=545, majf=0, minf=1
00:29:55.452    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:29:55.452       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:55.452       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:55.452       issued rwts: total=32,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:55.452       latency   : target=0, window=0, percentile=100.00%, depth=1
00:29:55.452  
00:29:55.452  Run status group 0 (all jobs):
00:29:55.452     READ: bw=377KiB/s (386kB/s), 84.5KiB/s-126KiB/s (86.6kB/s-129kB/s), io=392KiB (401kB), run=1007-1041msec
00:29:55.452    WRITE: bw=7869KiB/s (8058kB/s), 1967KiB/s-2034KiB/s (2015kB/s-2083kB/s), io=8192KiB (8389kB), run=1007-1041msec
00:29:55.452  
00:29:55.452  Disk stats (read/write):
00:29:55.452    nvme0n1: ios=65/512, merge=0/0, ticks=1475/113, in_queue=1588, util=98.20%
00:29:55.452    nvme0n2: ios=42/512, merge=0/0, ticks=1682/116, in_queue=1798, util=98.17%
00:29:55.452    nvme0n3: ios=73/512, merge=0/0, ticks=1478/96, in_queue=1574, util=98.43%
00:29:55.452    nvme0n4: ios=49/512, merge=0/0, ticks=1641/106, in_queue=1747, util=99.58%
00:29:55.452   04:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v
00:29:55.452  [global]
00:29:55.452  thread=1
00:29:55.452  invalidate=1
00:29:55.452  rw=write
00:29:55.452  time_based=1
00:29:55.452  runtime=1
00:29:55.452  ioengine=libaio
00:29:55.452  direct=1
00:29:55.452  bs=4096
00:29:55.452  iodepth=128
00:29:55.452  norandommap=0
00:29:55.452  numjobs=1
00:29:55.452  
00:29:55.452  verify_dump=1
00:29:55.452  verify_backlog=512
00:29:55.452  verify_state_save=0
00:29:55.452  do_verify=1
00:29:55.452  verify=crc32c-intel
00:29:55.452  [job0]
00:29:55.452  filename=/dev/nvme0n1
00:29:55.452  [job1]
00:29:55.452  filename=/dev/nvme0n2
00:29:55.452  [job2]
00:29:55.452  filename=/dev/nvme0n3
00:29:55.452  [job3]
00:29:55.452  filename=/dev/nvme0n4
00:29:55.452  Could not set queue depth (nvme0n1)
00:29:55.452  Could not set queue depth (nvme0n2)
00:29:55.452  Could not set queue depth (nvme0n3)
00:29:55.452  Could not set queue depth (nvme0n4)
00:29:55.711  job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:29:55.711  job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:29:55.711  job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:29:55.711  job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:29:55.711  fio-3.35
00:29:55.711  Starting 4 threads
00:29:57.084  
00:29:57.084  job0: (groupid=0, jobs=1): err= 0: pid=390669: Mon Dec  9 04:20:25 2024
00:29:57.084    read: IOPS=3809, BW=14.9MiB/s (15.6MB/s)(15.0MiB/1007msec)
00:29:57.084      slat (usec): min=2, max=35728, avg=145.58, stdev=1288.24
00:29:57.084      clat (msec): min=2, max=120, avg=18.43, stdev=18.03
00:29:57.084       lat (msec): min=6, max=120, avg=18.57, stdev=18.19
00:29:57.084      clat percentiles (msec):
00:29:57.084       |  1.00th=[    8],  5.00th=[   10], 10.00th=[   11], 20.00th=[   12],
00:29:57.084       | 30.00th=[   12], 40.00th=[   13], 50.00th=[   13], 60.00th=[   14],
00:29:57.084       | 70.00th=[   15], 80.00th=[   16], 90.00th=[   33], 95.00th=[   68],
00:29:57.084       | 99.00th=[   96], 99.50th=[   96], 99.90th=[  105], 99.95th=[  114],
00:29:57.084       | 99.99th=[  121]
00:29:57.084    write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets
00:29:57.084      slat (usec): min=3, max=13695, avg=99.38, stdev=616.83
00:29:57.084      clat (usec): min=5509, max=50557, avg=13830.90, stdev=5340.08
00:29:57.084       lat (usec): min=5526, max=50575, avg=13930.27, stdev=5395.92
00:29:57.084      clat percentiles (usec):
00:29:57.084       |  1.00th=[ 7898],  5.00th=[10421], 10.00th=[10552], 20.00th=[10945],
00:29:57.084       | 30.00th=[11863], 40.00th=[12518], 50.00th=[12780], 60.00th=[13173],
00:29:57.084       | 70.00th=[13566], 80.00th=[13960], 90.00th=[15795], 95.00th=[27132],
00:29:57.084       | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681],
00:29:57.084       | 99.99th=[50594]
00:29:57.085     bw (  KiB/s): min=12280, max=20488, per=25.18%, avg=16384.00, stdev=5803.93, samples=2
00:29:57.085     iops        : min= 3070, max= 5122, avg=4096.00, stdev=1450.98, samples=2
00:29:57.085    lat (msec)   : 4=0.01%, 10=4.90%, 20=86.45%, 50=5.23%, 100=3.35%
00:29:57.085    lat (msec)   : 250=0.05%
00:29:57.085    cpu          : usr=3.48%, sys=5.07%, ctx=342, majf=0, minf=1
00:29:57.085    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:29:57.085       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:57.085       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:29:57.085       issued rwts: total=3836,4096,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:57.085       latency   : target=0, window=0, percentile=100.00%, depth=128
00:29:57.085  job1: (groupid=0, jobs=1): err= 0: pid=390670: Mon Dec  9 04:20:25 2024
00:29:57.085    read: IOPS=4189, BW=16.4MiB/s (17.2MB/s)(16.4MiB/1005msec)
00:29:57.085      slat (nsec): min=1924, max=25618k, avg=108988.82, stdev=891904.06
00:29:57.085      clat (usec): min=3360, max=73130, avg=13352.69, stdev=8031.27
00:29:57.085       lat (usec): min=6162, max=73139, avg=13461.68, stdev=8098.53
00:29:57.085      clat percentiles (usec):
00:29:57.085       |  1.00th=[ 6587],  5.00th=[ 8160], 10.00th=[ 8848], 20.00th=[ 9241],
00:29:57.085       | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[11469], 60.00th=[11731],
00:29:57.085       | 70.00th=[13304], 80.00th=[15795], 90.00th=[19530], 95.00th=[23462],
00:29:57.085       | 99.00th=[54264], 99.50th=[72877], 99.90th=[72877], 99.95th=[72877],
00:29:57.085       | 99.99th=[72877]
00:29:57.085    write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets
00:29:57.085      slat (usec): min=2, max=14851, avg=113.78, stdev=618.48
00:29:57.085      clat (usec): min=722, max=88234, avg=15483.83, stdev=9662.30
00:29:57.085       lat (usec): min=730, max=88251, avg=15597.60, stdev=9728.20
00:29:57.085      clat percentiles (usec):
00:29:57.085       |  1.00th=[ 7242],  5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[ 9896],
00:29:57.085       | 30.00th=[10028], 40.00th=[10945], 50.00th=[11600], 60.00th=[12256],
00:29:57.085       | 70.00th=[12911], 80.00th=[20055], 90.00th=[30540], 95.00th=[35390],
00:29:57.085       | 99.00th=[60556], 99.50th=[61080], 99.90th=[88605], 99.95th=[88605],
00:29:57.085       | 99.99th=[88605]
00:29:57.085     bw (  KiB/s): min=18104, max=18648, per=28.24%, avg=18376.00, stdev=384.67, samples=2
00:29:57.085     iops        : min= 4526, max= 4662, avg=4594.00, stdev=96.17, samples=2
00:29:57.085    lat (usec)   : 750=0.02%
00:29:57.085    lat (msec)   : 4=0.08%, 10=31.74%, 20=52.42%, 50=14.30%, 100=1.44%
00:29:57.085    cpu          : usr=2.49%, sys=3.19%, ctx=520, majf=0, minf=1
00:29:57.085    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
00:29:57.085       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:57.085       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:29:57.085       issued rwts: total=4210,4608,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:57.085       latency   : target=0, window=0, percentile=100.00%, depth=128
00:29:57.085  job2: (groupid=0, jobs=1): err= 0: pid=390671: Mon Dec  9 04:20:25 2024
00:29:57.085    read: IOPS=3715, BW=14.5MiB/s (15.2MB/s)(14.6MiB/1005msec)
00:29:57.085      slat (usec): min=2, max=22095, avg=118.36, stdev=991.55
00:29:57.085      clat (usec): min=3521, max=55529, avg=15909.19, stdev=6217.14
00:29:57.085       lat (usec): min=6078, max=55537, avg=16027.54, stdev=6286.43
00:29:57.085      clat percentiles (usec):
00:29:57.085       |  1.00th=[ 6325],  5.00th=[ 8979], 10.00th=[10552], 20.00th=[12125],
00:29:57.085       | 30.00th=[12911], 40.00th=[13173], 50.00th=[13960], 60.00th=[15795],
00:29:57.085       | 70.00th=[17695], 80.00th=[19006], 90.00th=[23725], 95.00th=[27657],
00:29:57.085       | 99.00th=[41157], 99.50th=[51119], 99.90th=[55313], 99.95th=[55313],
00:29:57.085       | 99.99th=[55313]
00:29:57.085    write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets
00:29:57.085      slat (usec): min=3, max=33404, avg=120.86, stdev=953.09
00:29:57.085      clat (usec): min=564, max=55533, avg=16603.85, stdev=8381.65
00:29:57.085       lat (usec): min=1984, max=66254, avg=16724.71, stdev=8457.72
00:29:57.085      clat percentiles (usec):
00:29:57.085       |  1.00th=[ 6063],  5.00th=[ 7898], 10.00th=[ 9896], 20.00th=[11863],
00:29:57.085       | 30.00th=[12780], 40.00th=[13042], 50.00th=[13435], 60.00th=[13960],
00:29:57.085       | 70.00th=[14877], 80.00th=[19006], 90.00th=[33817], 95.00th=[35390],
00:29:57.085       | 99.00th=[36963], 99.50th=[37487], 99.90th=[39584], 99.95th=[43254],
00:29:57.085       | 99.99th=[55313]
00:29:57.085     bw (  KiB/s): min=13448, max=19320, per=25.18%, avg=16384.00, stdev=4152.13, samples=2
00:29:57.085     iops        : min= 3362, max= 4830, avg=4096.00, stdev=1038.03, samples=2
00:29:57.085    lat (usec)   : 750=0.01%
00:29:57.085    lat (msec)   : 4=0.18%, 10=9.67%, 20=71.94%, 50=17.80%, 100=0.40%
00:29:57.085    cpu          : usr=2.29%, sys=4.78%, ctx=353, majf=0, minf=1
00:29:57.085    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:29:57.085       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:57.085       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:29:57.085       issued rwts: total=3734,4096,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:57.085       latency   : target=0, window=0, percentile=100.00%, depth=128
00:29:57.085  job3: (groupid=0, jobs=1): err= 0: pid=390672: Mon Dec  9 04:20:25 2024
00:29:57.085    read: IOPS=3098, BW=12.1MiB/s (12.7MB/s)(12.2MiB/1005msec)
00:29:57.085      slat (usec): min=2, max=24103, avg=128.23, stdev=983.72
00:29:57.085      clat (usec): min=516, max=74012, avg=16561.47, stdev=8428.65
00:29:57.085       lat (usec): min=4242, max=74057, avg=16689.70, stdev=8488.08
00:29:57.085      clat percentiles (usec):
00:29:57.085       |  1.00th=[ 4817],  5.00th=[ 8094], 10.00th=[ 9241], 20.00th=[11338],
00:29:57.085       | 30.00th=[12649], 40.00th=[13304], 50.00th=[14877], 60.00th=[15533],
00:29:57.085       | 70.00th=[17957], 80.00th=[18744], 90.00th=[24511], 95.00th=[32375],
00:29:57.085       | 99.00th=[52691], 99.50th=[52691], 99.90th=[64226], 99.95th=[64226],
00:29:57.085       | 99.99th=[73925]
00:29:57.085    write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets
00:29:57.085      slat (usec): min=3, max=35128, avg=156.78, stdev=1285.36
00:29:57.085      clat (usec): min=1541, max=75009, avg=21142.43, stdev=12507.34
00:29:57.085       lat (usec): min=3366, max=75032, avg=21299.21, stdev=12583.67
00:29:57.085      clat percentiles (usec):
00:29:57.085       |  1.00th=[ 8848],  5.00th=[11076], 10.00th=[11731], 20.00th=[13042],
00:29:57.085       | 30.00th=[13304], 40.00th=[14484], 50.00th=[15139], 60.00th=[18220],
00:29:57.085       | 70.00th=[20055], 80.00th=[27657], 90.00th=[41681], 95.00th=[54264],
00:29:57.085       | 99.00th=[55837], 99.50th=[57934], 99.90th=[74974], 99.95th=[74974],
00:29:57.085       | 99.99th=[74974]
00:29:57.085     bw (  KiB/s): min=11600, max=16384, per=21.50%, avg=13992.00, stdev=3382.80, samples=2
00:29:57.085     iops        : min= 2900, max= 4096, avg=3498.00, stdev=845.70, samples=2
00:29:57.085    lat (usec)   : 750=0.01%
00:29:57.085    lat (msec)   : 2=0.01%, 4=0.10%, 10=7.42%, 20=67.15%, 50=21.84%
00:29:57.085    lat (msec)   : 100=3.45%
00:29:57.085    cpu          : usr=2.19%, sys=5.28%, ctx=280, majf=0, minf=1
00:29:57.085    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1%
00:29:57.085       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:57.085       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:29:57.085       issued rwts: total=3114,3584,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:57.085       latency   : target=0, window=0, percentile=100.00%, depth=128
00:29:57.085  
00:29:57.085  Run status group 0 (all jobs):
00:29:57.085     READ: bw=57.8MiB/s (60.6MB/s), 12.1MiB/s-16.4MiB/s (12.7MB/s-17.2MB/s), io=58.2MiB (61.0MB), run=1005-1007msec
00:29:57.085    WRITE: bw=63.6MiB/s (66.6MB/s), 13.9MiB/s-17.9MiB/s (14.6MB/s-18.8MB/s), io=64.0MiB (67.1MB), run=1005-1007msec
00:29:57.085  
00:29:57.085  Disk stats (read/write):
00:29:57.085    nvme0n1: ios=3382/3584, merge=0/0, ticks=18615/15523, in_queue=34138, util=86.57%
00:29:57.085    nvme0n2: ios=3646/4096, merge=0/0, ticks=26571/34544, in_queue=61115, util=86.67%
00:29:57.085    nvme0n3: ios=3120/3415, merge=0/0, ticks=37102/45444, in_queue=82546, util=98.64%
00:29:57.085    nvme0n4: ios=2606/3032, merge=0/0, ticks=28576/35167, in_queue=63743, util=98.63%
00:29:57.085   04:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v
00:29:57.085  [global]
00:29:57.085  thread=1
00:29:57.085  invalidate=1
00:29:57.085  rw=randwrite
00:29:57.085  time_based=1
00:29:57.085  runtime=1
00:29:57.085  ioengine=libaio
00:29:57.085  direct=1
00:29:57.085  bs=4096
00:29:57.085  iodepth=128
00:29:57.085  norandommap=0
00:29:57.085  numjobs=1
00:29:57.085  
00:29:57.085  verify_dump=1
00:29:57.085  verify_backlog=512
00:29:57.085  verify_state_save=0
00:29:57.085  do_verify=1
00:29:57.085  verify=crc32c-intel
00:29:57.085  [job0]
00:29:57.085  filename=/dev/nvme0n1
00:29:57.085  [job1]
00:29:57.085  filename=/dev/nvme0n2
00:29:57.085  [job2]
00:29:57.085  filename=/dev/nvme0n3
00:29:57.085  [job3]
00:29:57.085  filename=/dev/nvme0n4
00:29:57.085  Could not set queue depth (nvme0n1)
00:29:57.085  Could not set queue depth (nvme0n2)
00:29:57.085  Could not set queue depth (nvme0n3)
00:29:57.085  Could not set queue depth (nvme0n4)
00:29:57.085  job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:29:57.085  job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:29:57.085  job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:29:57.085  job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:29:57.085  fio-3.35
00:29:57.085  Starting 4 threads
00:29:58.460  
00:29:58.460  job0: (groupid=0, jobs=1): err= 0: pid=390896: Mon Dec  9 04:20:26 2024
00:29:58.460    read: IOPS=3152, BW=12.3MiB/s (12.9MB/s)(12.4MiB/1009msec)
00:29:58.460      slat (usec): min=3, max=17170, avg=140.85, stdev=1058.74
00:29:58.460      clat (usec): min=7098, max=44778, avg=16742.43, stdev=6713.04
00:29:58.460       lat (usec): min=7255, max=44786, avg=16883.28, stdev=6793.41
00:29:58.460      clat percentiles (usec):
00:29:58.460       |  1.00th=[ 8160],  5.00th=[ 9634], 10.00th=[ 9765], 20.00th=[11731],
00:29:58.460       | 30.00th=[13042], 40.00th=[13829], 50.00th=[15008], 60.00th=[15664],
00:29:58.460       | 70.00th=[18482], 80.00th=[20055], 90.00th=[25560], 95.00th=[32113],
00:29:58.460       | 99.00th=[40633], 99.50th=[43779], 99.90th=[44827], 99.95th=[44827],
00:29:58.460       | 99.99th=[44827]
00:29:58.460    write: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec); 0 zone resets
00:29:58.460      slat (usec): min=4, max=30901, avg=146.63, stdev=962.07
00:29:58.460      clat (usec): min=4642, max=53258, avg=20832.64, stdev=10146.69
00:29:58.460       lat (usec): min=4649, max=53310, avg=20979.27, stdev=10226.47
00:29:58.460      clat percentiles (usec):
00:29:58.460       |  1.00th=[ 7177],  5.00th=[10421], 10.00th=[10945], 20.00th=[12387],
00:29:58.460       | 30.00th=[12780], 40.00th=[13566], 50.00th=[14615], 60.00th=[21627],
00:29:58.460       | 70.00th=[32113], 80.00th=[33817], 90.00th=[35390], 95.00th=[36439],
00:29:58.460       | 99.00th=[36963], 99.50th=[40633], 99.90th=[43779], 99.95th=[44827],
00:29:58.460       | 99.99th=[53216]
00:29:58.460     bw (  KiB/s): min=12632, max=15888, per=20.59%, avg=14260.00, stdev=2302.34, samples=2
00:29:58.460     iops        : min= 3158, max= 3972, avg=3565.00, stdev=575.58, samples=2
00:29:58.460    lat (msec)   : 10=6.78%, 20=61.18%, 50=32.02%, 100=0.01%
00:29:58.460    cpu          : usr=3.67%, sys=5.26%, ctx=333, majf=0, minf=1
00:29:58.460    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1%
00:29:58.460       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:58.460       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:29:58.460       issued rwts: total=3181,3584,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:58.460       latency   : target=0, window=0, percentile=100.00%, depth=128
00:29:58.460  job1: (groupid=0, jobs=1): err= 0: pid=390897: Mon Dec  9 04:20:26 2024
00:29:58.460    read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec)
00:29:58.460      slat (usec): min=2, max=9685, avg=85.93, stdev=618.75
00:29:58.460      clat (usec): min=3196, max=23295, avg=11416.87, stdev=2485.17
00:29:58.460       lat (usec): min=3199, max=23304, avg=11502.80, stdev=2521.19
00:29:58.460      clat percentiles (usec):
00:29:58.460       |  1.00th=[ 7046],  5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[ 9896],
00:29:58.460       | 30.00th=[10159], 40.00th=[10421], 50.00th=[10683], 60.00th=[11207],
00:29:58.460       | 70.00th=[11600], 80.00th=[12649], 90.00th=[14484], 95.00th=[16909],
00:29:58.460       | 99.00th=[20579], 99.50th=[21627], 99.90th=[23200], 99.95th=[23200],
00:29:58.460       | 99.99th=[23200]
00:29:58.460    write: IOPS=5838, BW=22.8MiB/s (23.9MB/s)(22.9MiB/1002msec); 0 zone resets
00:29:58.460      slat (usec): min=3, max=9473, avg=80.84, stdev=607.42
00:29:58.460      clat (usec): min=569, max=21025, avg=10730.22, stdev=1502.19
00:29:58.460       lat (usec): min=687, max=21063, avg=10811.05, stdev=1585.08
00:29:58.460      clat percentiles (usec):
00:29:58.460       |  1.00th=[ 5604],  5.00th=[ 8094], 10.00th=[ 9896], 20.00th=[10290],
00:29:58.460       | 30.00th=[10421], 40.00th=[10552], 50.00th=[10683], 60.00th=[10814],
00:29:58.460       | 70.00th=[10945], 80.00th=[11207], 90.00th=[11863], 95.00th=[13829],
00:29:58.460       | 99.00th=[15795], 99.50th=[15795], 99.90th=[17957], 99.95th=[20055],
00:29:58.460       | 99.99th=[21103]
00:29:58.460     bw (  KiB/s): min=22160, max=23624, per=33.05%, avg=22892.00, stdev=1035.20, samples=2
00:29:58.460     iops        : min= 5540, max= 5906, avg=5723.00, stdev=258.80, samples=2
00:29:58.460    lat (usec)   : 750=0.02%
00:29:58.460    lat (msec)   : 4=0.24%, 10=17.41%, 20=81.54%, 50=0.79%
00:29:58.460    cpu          : usr=5.39%, sys=6.79%, ctx=263, majf=0, minf=1
00:29:58.460    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5%
00:29:58.460       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:58.460       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:29:58.460       issued rwts: total=5632,5850,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:58.460       latency   : target=0, window=0, percentile=100.00%, depth=128
00:29:58.460  job2: (groupid=0, jobs=1): err= 0: pid=390898: Mon Dec  9 04:20:26 2024
00:29:58.460    read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec)
00:29:58.460      slat (usec): min=2, max=7314, avg=104.48, stdev=562.25
00:29:58.460      clat (usec): min=8109, max=26261, avg=13312.75, stdev=1920.87
00:29:58.460       lat (usec): min=8565, max=26268, avg=13417.23, stdev=1960.95
00:29:58.461      clat percentiles (usec):
00:29:58.461       |  1.00th=[ 9634],  5.00th=[10421], 10.00th=[11076], 20.00th=[12125],
00:29:58.461       | 30.00th=[12649], 40.00th=[12780], 50.00th=[13042], 60.00th=[13435],
00:29:58.461       | 70.00th=[13960], 80.00th=[14484], 90.00th=[15270], 95.00th=[16450],
00:29:58.461       | 99.00th=[19792], 99.50th=[20841], 99.90th=[22938], 99.95th=[26346],
00:29:58.461       | 99.99th=[26346]
00:29:58.461    write: IOPS=5103, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets
00:29:58.461      slat (usec): min=3, max=4113, avg=95.06, stdev=421.61
00:29:58.461      clat (usec): min=411, max=20197, avg=12785.23, stdev=1664.36
00:29:58.461       lat (usec): min=4430, max=20203, avg=12880.29, stdev=1680.73
00:29:58.461      clat percentiles (usec):
00:29:58.461       |  1.00th=[ 7898],  5.00th=[10421], 10.00th=[11600], 20.00th=[11994],
00:29:58.461       | 30.00th=[12256], 40.00th=[12518], 50.00th=[12649], 60.00th=[12911],
00:29:58.461       | 70.00th=[13042], 80.00th=[13698], 90.00th=[14746], 95.00th=[15533],
00:29:58.461       | 99.00th=[18220], 99.50th=[19006], 99.90th=[19530], 99.95th=[19792],
00:29:58.461       | 99.99th=[20317]
00:29:58.461     bw (  KiB/s): min=19584, max=20344, per=28.82%, avg=19964.00, stdev=537.40, samples=2
00:29:58.461     iops        : min= 4896, max= 5086, avg=4991.00, stdev=134.35, samples=2
00:29:58.461    lat (usec)   : 500=0.01%
00:29:58.461    lat (msec)   : 10=3.21%, 20=96.32%, 50=0.46%
00:29:58.461    cpu          : usr=3.39%, sys=7.49%, ctx=580, majf=0, minf=1
00:29:58.461    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4%
00:29:58.461       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:58.461       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:29:58.461       issued rwts: total=4608,5119,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:58.461       latency   : target=0, window=0, percentile=100.00%, depth=128
00:29:58.461  job3: (groupid=0, jobs=1): err= 0: pid=390899: Mon Dec  9 04:20:26 2024
00:29:58.461    read: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec)
00:29:58.461      slat (usec): min=2, max=24463, avg=176.86, stdev=1251.22
00:29:58.461      clat (usec): min=5146, max=74593, avg=24635.03, stdev=14436.08
00:29:58.461       lat (usec): min=5151, max=74620, avg=24811.90, stdev=14507.98
00:29:58.461      clat percentiles (usec):
00:29:58.461       |  1.00th=[ 5145],  5.00th=[11076], 10.00th=[15008], 20.00th=[17171],
00:29:58.461       | 30.00th=[17695], 40.00th=[17957], 50.00th=[18482], 60.00th=[19006],
00:29:58.461       | 70.00th=[23200], 80.00th=[31327], 90.00th=[46924], 95.00th=[64750],
00:29:58.461       | 99.00th=[72877], 99.50th=[72877], 99.90th=[74974], 99.95th=[74974],
00:29:58.461       | 99.99th=[74974]
00:29:58.461    write: IOPS=2892, BW=11.3MiB/s (11.8MB/s)(11.4MiB/1009msec); 0 zone resets
00:29:58.461      slat (usec): min=3, max=24361, avg=181.82, stdev=1289.62
00:29:58.461      clat (usec): min=641, max=72904, avg=22246.64, stdev=13408.84
00:29:58.461       lat (usec): min=708, max=72911, avg=22428.46, stdev=13494.04
00:29:58.461      clat percentiles (usec):
00:29:58.461       |  1.00th=[ 7373],  5.00th=[10814], 10.00th=[13566], 20.00th=[14877],
00:29:58.461       | 30.00th=[16581], 40.00th=[17433], 50.00th=[17957], 60.00th=[19006],
00:29:58.461       | 70.00th=[19792], 80.00th=[23200], 90.00th=[39584], 95.00th=[56361],
00:29:58.461       | 99.00th=[72877], 99.50th=[72877], 99.90th=[72877], 99.95th=[72877],
00:29:58.461       | 99.99th=[72877]
00:29:58.461     bw (  KiB/s): min=10040, max=12288, per=16.12%, avg=11164.00, stdev=1589.58, samples=2
00:29:58.461     iops        : min= 2510, max= 3072, avg=2791.00, stdev=397.39, samples=2
00:29:58.461    lat (usec)   : 750=0.05%
00:29:58.461    lat (msec)   : 10=2.87%, 20=66.03%, 50=23.76%, 100=7.28%
00:29:58.461    cpu          : usr=2.18%, sys=3.27%, ctx=163, majf=0, minf=1
00:29:58.461    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9%
00:29:58.461       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:58.461       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:29:58.461       issued rwts: total=2560,2919,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:58.461       latency   : target=0, window=0, percentile=100.00%, depth=128
00:29:58.461  
00:29:58.461  Run status group 0 (all jobs):
00:29:58.461     READ: bw=61.9MiB/s (64.9MB/s), 9.91MiB/s-22.0MiB/s (10.4MB/s-23.0MB/s), io=62.4MiB (65.5MB), run=1002-1009msec
00:29:58.461    WRITE: bw=67.6MiB/s (70.9MB/s), 11.3MiB/s-22.8MiB/s (11.8MB/s-23.9MB/s), io=68.2MiB (71.6MB), run=1002-1009msec
00:29:58.461  
00:29:58.461  Disk stats (read/write):
00:29:58.461    nvme0n1: ios=2901/3072, merge=0/0, ticks=47303/57217, in_queue=104520, util=97.19%
00:29:58.461    nvme0n2: ios=4649/5120, merge=0/0, ticks=32894/33917, in_queue=66811, util=96.95%
00:29:58.461    nvme0n3: ios=4143/4135, merge=0/0, ticks=18306/16534, in_queue=34840, util=96.15%
00:29:58.461    nvme0n4: ios=2048/2406, merge=0/0, ticks=25460/23144, in_queue=48604, util=89.62%
00:29:58.461   04:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync
00:29:58.461   04:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=391034
00:29:58.461   04:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10
00:29:58.461   04:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3
00:29:58.461  [global]
00:29:58.461  thread=1
00:29:58.461  invalidate=1
00:29:58.461  rw=read
00:29:58.461  time_based=1
00:29:58.461  runtime=10
00:29:58.461  ioengine=libaio
00:29:58.461  direct=1
00:29:58.461  bs=4096
00:29:58.461  iodepth=1
00:29:58.461  norandommap=1
00:29:58.461  numjobs=1
00:29:58.461  
00:29:58.461  [job0]
00:29:58.461  filename=/dev/nvme0n1
00:29:58.461  [job1]
00:29:58.461  filename=/dev/nvme0n2
00:29:58.461  [job2]
00:29:58.461  filename=/dev/nvme0n3
00:29:58.461  [job3]
00:29:58.461  filename=/dev/nvme0n4
00:29:58.461  Could not set queue depth (nvme0n1)
00:29:58.461  Could not set queue depth (nvme0n2)
00:29:58.461  Could not set queue depth (nvme0n3)
00:29:58.461  Could not set queue depth (nvme0n4)
00:29:58.461  job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:29:58.461  job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:29:58.461  job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:29:58.461  job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:29:58.461  fio-3.35
00:29:58.461  Starting 4 threads
00:30:01.744   04:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0
00:30:01.744   04:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0
00:30:01.744  fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=3805184, buflen=4096
00:30:01.744  fio: pid=391155, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:30:02.002   04:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:30:02.002   04:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:30:02.002  fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=44015616, buflen=4096
00:30:02.002  fio: pid=391149, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:30:02.260   04:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:30:02.260   04:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:30:02.260  fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=45961216, buflen=4096
00:30:02.260  fio: pid=391128, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:30:02.518   04:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:30:02.518   04:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2
00:30:02.518  fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=38424576, buflen=4096
00:30:02.518  fio: pid=391133, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:30:02.518  
00:30:02.518  job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=391128: Mon Dec  9 04:20:31 2024
00:30:02.518    read: IOPS=3110, BW=12.1MiB/s (12.7MB/s)(43.8MiB/3608msec)
00:30:02.518      slat (usec): min=4, max=11703, avg=12.11, stdev=123.43
00:30:02.518      clat (usec): min=189, max=42011, avg=304.34, stdev=1239.52
00:30:02.518       lat (usec): min=195, max=42025, avg=316.44, stdev=1246.00
00:30:02.518      clat percentiles (usec):
00:30:02.518       |  1.00th=[  200],  5.00th=[  208], 10.00th=[  212], 20.00th=[  219],
00:30:02.518       | 30.00th=[  225], 40.00th=[  231], 50.00th=[  241], 60.00th=[  258],
00:30:02.518       | 70.00th=[  281], 80.00th=[  306], 90.00th=[  338], 95.00th=[  392],
00:30:02.518       | 99.00th=[  553], 99.50th=[  578], 99.90th=[12518], 99.95th=[41157],
00:30:02.518       | 99.99th=[42206]
00:30:02.518     bw (  KiB/s): min= 6304, max=16632, per=38.16%, avg=12649.14, stdev=3337.29, samples=7
00:30:02.518     iops        : min= 1576, max= 4158, avg=3162.29, stdev=834.32, samples=7
00:30:02.518    lat (usec)   : 250=56.59%, 500=41.42%, 750=1.84%, 1000=0.01%
00:30:02.518    lat (msec)   : 2=0.02%, 20=0.02%, 50=0.09%
00:30:02.518    cpu          : usr=2.16%, sys=4.44%, ctx=11227, majf=0, minf=2
00:30:02.518    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:30:02.518       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:02.518       complete  : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:02.518       issued rwts: total=11222,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:30:02.518       latency   : target=0, window=0, percentile=100.00%, depth=1
00:30:02.518  job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=391133: Mon Dec  9 04:20:31 2024
00:30:02.518    read: IOPS=2408, BW=9634KiB/s (9865kB/s)(36.6MiB/3895msec)
00:30:02.518      slat (usec): min=3, max=13862, avg=12.25, stdev=143.11
00:30:02.518      clat (usec): min=181, max=42120, avg=397.54, stdev=2211.88
00:30:02.518       lat (usec): min=185, max=55982, avg=409.80, stdev=2244.36
00:30:02.518      clat percentiles (usec):
00:30:02.518       |  1.00th=[  208],  5.00th=[  210], 10.00th=[  215], 20.00th=[  221],
00:30:02.518       | 30.00th=[  233], 40.00th=[  249], 50.00th=[  265], 60.00th=[  281],
00:30:02.518       | 70.00th=[  293], 80.00th=[  314], 90.00th=[  355], 95.00th=[  404],
00:30:02.518       | 99.00th=[  502], 99.50th=[  570], 99.90th=[41681], 99.95th=[42206],
00:30:02.518       | 99.99th=[42206]
00:30:02.518     bw (  KiB/s): min=  113, max=14216, per=32.31%, avg=10709.86, stdev=4954.57, samples=7
00:30:02.518     iops        : min=   28, max= 3554, avg=2677.43, stdev=1238.73, samples=7
00:30:02.518    lat (usec)   : 250=40.45%, 500=58.49%, 750=0.67%, 1000=0.01%
00:30:02.518    lat (msec)   : 2=0.04%, 10=0.01%, 20=0.02%, 50=0.29%
00:30:02.518    cpu          : usr=1.87%, sys=3.80%, ctx=9384, majf=0, minf=1
00:30:02.518    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:30:02.518       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:02.518       complete  : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:02.518       issued rwts: total=9382,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:30:02.518       latency   : target=0, window=0, percentile=100.00%, depth=1
00:30:02.518  job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=391149: Mon Dec  9 04:20:31 2024
00:30:02.518    read: IOPS=3286, BW=12.8MiB/s (13.5MB/s)(42.0MiB/3270msec)
00:30:02.518      slat (nsec): min=5286, max=55162, avg=10496.80, stdev=5288.92
00:30:02.518      clat (usec): min=192, max=40566, avg=289.02, stdev=578.30
00:30:02.518       lat (usec): min=202, max=40572, avg=299.51, stdev=578.40
00:30:02.518      clat percentiles (usec):
00:30:02.518       |  1.00th=[  231],  5.00th=[  237], 10.00th=[  241], 20.00th=[  247],
00:30:02.518       | 30.00th=[  253], 40.00th=[  260], 50.00th=[  269], 60.00th=[  281],
00:30:02.518       | 70.00th=[  289], 80.00th=[  302], 90.00th=[  322], 95.00th=[  343],
00:30:02.518       | 99.00th=[  453], 99.50th=[  498], 99.90th=[  914], 99.95th=[ 5276],
00:30:02.518       | 99.99th=[40633]
00:30:02.518     bw (  KiB/s): min=11112, max=15024, per=39.86%, avg=13212.00, stdev=1303.37, samples=6
00:30:02.518     iops        : min= 2778, max= 3756, avg=3303.00, stdev=325.84, samples=6
00:30:02.518    lat (usec)   : 250=24.67%, 500=74.90%, 750=0.32%, 1000=0.04%
00:30:02.518    lat (msec)   : 4=0.01%, 10=0.04%, 20=0.01%, 50=0.02%
00:30:02.518    cpu          : usr=2.05%, sys=5.51%, ctx=10748, majf=0, minf=1
00:30:02.518    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:30:02.518       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:02.518       complete  : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:02.518       issued rwts: total=10747,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:30:02.518       latency   : target=0, window=0, percentile=100.00%, depth=1
00:30:02.518  job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=391155: Mon Dec  9 04:20:31 2024
00:30:02.518    read: IOPS=311, BW=1245KiB/s (1275kB/s)(3716KiB/2985msec)
00:30:02.518      slat (nsec): min=6597, max=43906, avg=7760.45, stdev=2060.58
00:30:02.518      clat (usec): min=211, max=42002, avg=3174.53, stdev=10483.37
00:30:02.518       lat (usec): min=219, max=42013, avg=3182.28, stdev=10484.37
00:30:02.518      clat percentiles (usec):
00:30:02.518       |  1.00th=[  217],  5.00th=[  223], 10.00th=[  227], 20.00th=[  235],
00:30:02.518       | 30.00th=[  243], 40.00th=[  265], 50.00th=[  285], 60.00th=[  293],
00:30:02.518       | 70.00th=[  306], 80.00th=[  318], 90.00th=[  396], 95.00th=[41157],
00:30:02.518       | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206],
00:30:02.519       | 99.99th=[42206]
00:30:02.519     bw (  KiB/s): min=  104, max= 2648, per=4.43%, avg=1467.20, stdev=1163.09, samples=5
00:30:02.519     iops        : min=   26, max=  662, avg=366.80, stdev=290.77, samples=5
00:30:02.519    lat (usec)   : 250=34.19%, 500=58.60%
00:30:02.519    lat (msec)   : 50=7.10%
00:30:02.519    cpu          : usr=0.13%, sys=0.37%, ctx=932, majf=0, minf=1
00:30:02.519    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:30:02.519       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:02.519       complete  : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:02.519       issued rwts: total=930,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:30:02.519       latency   : target=0, window=0, percentile=100.00%, depth=1
00:30:02.519  
00:30:02.519  Run status group 0 (all jobs):
00:30:02.519     READ: bw=32.4MiB/s (33.9MB/s), 1245KiB/s-12.8MiB/s (1275kB/s-13.5MB/s), io=126MiB (132MB), run=2985-3895msec
00:30:02.519  
00:30:02.519  Disk stats (read/write):
00:30:02.519    nvme0n1: ios=11244/0, merge=0/0, ticks=3455/0, in_queue=3455, util=99.52%
00:30:02.519    nvme0n2: ios=9380/0, merge=0/0, ticks=3559/0, in_queue=3559, util=96.38%
00:30:02.519    nvme0n3: ios=10194/0, merge=0/0, ticks=2864/0, in_queue=2864, util=96.76%
00:30:02.519    nvme0n4: ios=952/0, merge=0/0, ticks=2998/0, in_queue=2998, util=100.00%
00:30:02.776   04:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:30:02.777   04:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3
00:30:03.035   04:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:30:03.035   04:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4
00:30:03.293   04:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:30:03.293   04:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5
00:30:03.552   04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:30:03.552   04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6
00:30:03.810   04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0
00:30:03.810   04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 391034
00:30:03.810   04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4
00:30:03.810   04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:30:04.068  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:30:04.068   04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:30:04.068   04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0
00:30:04.068   04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:30:04.068   04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:30:04.069   04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:30:04.069   04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:30:04.069   04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0
00:30:04.069   04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']'
00:30:04.069   04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected'
00:30:04.069  nvmf hotplug test: fio failed as expected
00:30:04.069   04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:04.327   04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state
00:30:04.327   04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state
00:30:04.327   04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state
00:30:04.327   04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT
00:30:04.327   04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini
00:30:04.327   04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:04.327   04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync
00:30:04.327   04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:04.327   04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e
00:30:04.327   04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:04.327   04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:04.327  rmmod nvme_tcp
00:30:04.327  rmmod nvme_fabrics
00:30:04.327  rmmod nvme_keyring
00:30:04.327   04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:04.327   04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e
00:30:04.327   04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0
00:30:04.327   04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 389141 ']'
00:30:04.327   04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 389141
00:30:04.327   04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 389141 ']'
00:30:04.327   04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 389141
00:30:04.327    04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname
00:30:04.327   04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:04.327    04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 389141
00:30:04.327   04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:30:04.327   04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:30:04.327   04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 389141'
00:30:04.327  killing process with pid 389141
00:30:04.327   04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 389141
00:30:04.327   04:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 389141
00:30:04.587   04:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:30:04.587   04:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:30:04.587   04:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:30:04.587   04:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr
00:30:04.587   04:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save
00:30:04.587   04:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:30:04.587   04:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore
00:30:04.587   04:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:30:04.587   04:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:30:04.587   04:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:04.587   04:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:04.587    04:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:07.133   04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:30:07.133  
00:30:07.133  real	0m23.911s
00:30:07.133  user	1m7.625s
00:30:07.133  sys	0m10.602s
00:30:07.133   04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:07.133   04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:30:07.133  ************************************
00:30:07.133  END TEST nvmf_fio_target
00:30:07.133  ************************************
00:30:07.133   04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode
00:30:07.133   04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:30:07.133   04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:30:07.133   04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:30:07.133  ************************************
00:30:07.133  START TEST nvmf_bdevio
00:30:07.133  ************************************
00:30:07.133   04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode
00:30:07.133  * Looking for test storage...
00:30:07.133  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:30:07.133     04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version
00:30:07.133     04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-:
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-:
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<'
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 ))
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:30:07.133     04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1
00:30:07.133     04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1
00:30:07.133     04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:30:07.133     04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1
00:30:07.133     04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2
00:30:07.133     04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2
00:30:07.133     04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:30:07.133     04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:30:07.133  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:07.133  		--rc genhtml_branch_coverage=1
00:30:07.133  		--rc genhtml_function_coverage=1
00:30:07.133  		--rc genhtml_legend=1
00:30:07.133  		--rc geninfo_all_blocks=1
00:30:07.133  		--rc geninfo_unexecuted_blocks=1
00:30:07.133  		
00:30:07.133  		'
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:30:07.133  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:07.133  		--rc genhtml_branch_coverage=1
00:30:07.133  		--rc genhtml_function_coverage=1
00:30:07.133  		--rc genhtml_legend=1
00:30:07.133  		--rc geninfo_all_blocks=1
00:30:07.133  		--rc geninfo_unexecuted_blocks=1
00:30:07.133  		
00:30:07.133  		'
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:30:07.133  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:07.133  		--rc genhtml_branch_coverage=1
00:30:07.133  		--rc genhtml_function_coverage=1
00:30:07.133  		--rc genhtml_legend=1
00:30:07.133  		--rc geninfo_all_blocks=1
00:30:07.133  		--rc geninfo_unexecuted_blocks=1
00:30:07.133  		
00:30:07.133  		'
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:30:07.133  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:07.133  		--rc genhtml_branch_coverage=1
00:30:07.133  		--rc genhtml_function_coverage=1
00:30:07.133  		--rc genhtml_legend=1
00:30:07.133  		--rc geninfo_all_blocks=1
00:30:07.133  		--rc geninfo_unexecuted_blocks=1
00:30:07.133  		
00:30:07.133  		'
00:30:07.133   04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:30:07.133     04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:30:07.133     04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:30:07.133    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:30:07.133     04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob
00:30:07.133     04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:30:07.133     04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:07.134     04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:07.134      04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:07.134      04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:07.134      04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:07.134      04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH
00:30:07.134      04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:07.134    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0
00:30:07.134    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:30:07.134    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:30:07.134    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:30:07.134    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:30:07.134    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:30:07.134    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:30:07.134    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:30:07.134    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:30:07.134    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:30:07.134    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0
00:30:07.134   04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:30:07.134   04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:30:07.134   04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit
00:30:07.134   04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:30:07.134   04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:30:07.134   04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs
00:30:07.134   04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no
00:30:07.134   04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns
00:30:07.134   04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:07.134   04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:07.134    04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:07.134   04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:30:07.134   04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:30:07.134   04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable
00:30:07.134   04:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:30:09.039   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:30:09.039   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=()
00:30:09.039   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs
00:30:09.039   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=()
00:30:09.039   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:30:09.039   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=()
00:30:09.039   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers
00:30:09.039   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=()
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=()
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=()
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=()
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:30:09.040  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:30:09.040  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]]
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:30:09.040  Found net devices under 0000:0a:00.0: cvl_0_0
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]]
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:30:09.040  Found net devices under 0000:0a:00.1: cvl_0_1
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:30:09.040  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:30:09.040  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms
00:30:09.040  
00:30:09.040  --- 10.0.0.2 ping statistics ---
00:30:09.040  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:09.040  rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms
00:30:09.040   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:30:09.299  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:30:09.299  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms
00:30:09.299  
00:30:09.299  --- 10.0.0.1 ping statistics ---
00:30:09.299  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:09.299  rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms
00:30:09.299   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:30:09.299   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0
00:30:09.299   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:30:09.299   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:30:09.299   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:30:09.299   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:30:09.299   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:30:09.299   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:30:09.299   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:30:09.299   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:30:09.299   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:09.299   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:09.299   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:30:09.299   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=393877
00:30:09.299   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78
00:30:09.299   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 393877
00:30:09.299   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 393877 ']'
00:30:09.299   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:09.299   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:09.299   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:09.299  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:09.299   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:09.299   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:30:09.299  [2024-12-09 04:20:37.702519] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:30:09.299  [2024-12-09 04:20:37.703683] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:30:09.299  [2024-12-09 04:20:37.703745] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:09.299  [2024-12-09 04:20:37.778089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:30:09.299  [2024-12-09 04:20:37.838733] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:09.299  [2024-12-09 04:20:37.838813] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:09.300  [2024-12-09 04:20:37.838828] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:09.300  [2024-12-09 04:20:37.838840] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:09.300  [2024-12-09 04:20:37.838850] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:09.300  [2024-12-09 04:20:37.840504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:30:09.300  [2024-12-09 04:20:37.840569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:30:09.300  [2024-12-09 04:20:37.840613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:30:09.300  [2024-12-09 04:20:37.840616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:30:09.559  [2024-12-09 04:20:37.938375] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:30:09.559  [2024-12-09 04:20:37.938591] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:30:09.559  [2024-12-09 04:20:37.938908] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:30:09.559  [2024-12-09 04:20:37.939627] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:30:09.559  [2024-12-09 04:20:37.939840] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:30:09.559   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:09.559   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0
00:30:09.559   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:30:09.559   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable
00:30:09.559   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:30:09.559   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:09.559   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:30:09.559   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:09.559   04:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:30:09.559  [2024-12-09 04:20:37.985425] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:09.559   04:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:09.559   04:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:30:09.559   04:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:09.559   04:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:30:09.559  Malloc0
00:30:09.559   04:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:09.559   04:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:09.559   04:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:09.559   04:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:30:09.559   04:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:09.559   04:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:09.559   04:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:09.559   04:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:30:09.559   04:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:09.559   04:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:09.559   04:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:09.559   04:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:30:09.559  [2024-12-09 04:20:38.057605] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:09.559   04:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:09.559   04:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62
00:30:09.559    04:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json
00:30:09.559    04:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=()
00:30:09.559    04:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config
00:30:09.559    04:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:30:09.559    04:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:30:09.559  {
00:30:09.559    "params": {
00:30:09.559      "name": "Nvme$subsystem",
00:30:09.559      "trtype": "$TEST_TRANSPORT",
00:30:09.560      "traddr": "$NVMF_FIRST_TARGET_IP",
00:30:09.560      "adrfam": "ipv4",
00:30:09.560      "trsvcid": "$NVMF_PORT",
00:30:09.560      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:30:09.560      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:30:09.560      "hdgst": ${hdgst:-false},
00:30:09.560      "ddgst": ${ddgst:-false}
00:30:09.560    },
00:30:09.560    "method": "bdev_nvme_attach_controller"
00:30:09.560  }
00:30:09.560  EOF
00:30:09.560  )")
00:30:09.560     04:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat
00:30:09.560    04:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq .
00:30:09.560     04:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=,
00:30:09.560     04:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:30:09.560    "params": {
00:30:09.560      "name": "Nvme1",
00:30:09.560      "trtype": "tcp",
00:30:09.560      "traddr": "10.0.0.2",
00:30:09.560      "adrfam": "ipv4",
00:30:09.560      "trsvcid": "4420",
00:30:09.560      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:30:09.560      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:30:09.560      "hdgst": false,
00:30:09.560      "ddgst": false
00:30:09.560    },
00:30:09.560    "method": "bdev_nvme_attach_controller"
00:30:09.560  }'
00:30:09.560  [2024-12-09 04:20:38.109427] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:30:09.560  [2024-12-09 04:20:38.109499] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid393900 ]
00:30:09.818  [2024-12-09 04:20:38.179086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:30:09.818  [2024-12-09 04:20:38.243717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:09.818  [2024-12-09 04:20:38.243767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:30:09.818  [2024-12-09 04:20:38.243771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:30:10.077  I/O targets:
00:30:10.077    Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:30:10.077  
00:30:10.077  
00:30:10.077       CUnit - A unit testing framework for C - Version 2.1-3
00:30:10.077       http://cunit.sourceforge.net/
00:30:10.077  
00:30:10.077  
00:30:10.077  Suite: bdevio tests on: Nvme1n1
00:30:10.077    Test: blockdev write read block ...passed
00:30:10.077    Test: blockdev write zeroes read block ...passed
00:30:10.077    Test: blockdev write zeroes read no split ...passed
00:30:10.077    Test: blockdev write zeroes read split ...passed
00:30:10.334    Test: blockdev write zeroes read split partial ...passed
00:30:10.334    Test: blockdev reset ...[2024-12-09 04:20:38.683186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:30:10.334  [2024-12-09 04:20:38.683307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14518c0 (9): Bad file descriptor
00:30:10.334  [2024-12-09 04:20:38.687524] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful.
00:30:10.334  passed
00:30:10.334    Test: blockdev write read 8 blocks ...passed
00:30:10.334    Test: blockdev write read size > 128k ...passed
00:30:10.334    Test: blockdev write read invalid size ...passed
00:30:10.334    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:30:10.334    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:30:10.334    Test: blockdev write read max offset ...passed
00:30:10.334    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:30:10.334    Test: blockdev writev readv 8 blocks ...passed
00:30:10.592    Test: blockdev writev readv 30 x 1block ...passed
00:30:10.592    Test: blockdev writev readv block ...passed
00:30:10.592    Test: blockdev writev readv size > 128k ...passed
00:30:10.592    Test: blockdev writev readv size > 128k in two iovs ...passed
00:30:10.592    Test: blockdev comparev and writev ...[2024-12-09 04:20:38.979806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:30:10.592  [2024-12-09 04:20:38.979842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:10.592  [2024-12-09 04:20:38.979885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:30:10.592  [2024-12-09 04:20:38.979904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:10.592  [2024-12-09 04:20:38.980303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:30:10.592  [2024-12-09 04:20:38.980329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:30:10.592  [2024-12-09 04:20:38.980351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:30:10.592  [2024-12-09 04:20:38.980368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:30:10.592  [2024-12-09 04:20:38.980770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:30:10.592  [2024-12-09 04:20:38.980795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:30:10.592  [2024-12-09 04:20:38.980817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:30:10.592  [2024-12-09 04:20:38.980833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:30:10.592  [2024-12-09 04:20:38.981223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:30:10.592  [2024-12-09 04:20:38.981247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:30:10.592  [2024-12-09 04:20:38.981281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:30:10.592  [2024-12-09 04:20:38.981300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:30:10.592  passed
00:30:10.592    Test: blockdev nvme passthru rw ...passed
00:30:10.592    Test: blockdev nvme passthru vendor specific ...[2024-12-09 04:20:39.064547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:30:10.592  [2024-12-09 04:20:39.064576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:30:10.592  [2024-12-09 04:20:39.064725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:30:10.592  [2024-12-09 04:20:39.064749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:30:10.592  [2024-12-09 04:20:39.064887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:30:10.592  [2024-12-09 04:20:39.064910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:30:10.592  [2024-12-09 04:20:39.065055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:30:10.592  [2024-12-09 04:20:39.065077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:30:10.592  passed
00:30:10.592    Test: blockdev nvme admin passthru ...passed
00:30:10.592    Test: blockdev copy ...passed
00:30:10.592  
00:30:10.592  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:30:10.592                suites      1      1    n/a      0        0
00:30:10.592                 tests     23     23     23      0        0
00:30:10.592               asserts    152    152    152      0      n/a
00:30:10.592  
00:30:10.592  Elapsed time =    1.178 seconds
00:30:10.850   04:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:10.850   04:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:10.850   04:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:30:10.850   04:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:10.850   04:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:30:10.850   04:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini
00:30:10.850   04:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:10.850   04:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync
00:30:10.850   04:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:10.850   04:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e
00:30:10.850   04:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:10.850   04:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:10.850  rmmod nvme_tcp
00:30:10.850  rmmod nvme_fabrics
00:30:10.850  rmmod nvme_keyring
00:30:10.850   04:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:10.850   04:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e
00:30:10.850   04:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0
00:30:10.850   04:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 393877 ']'
00:30:10.850   04:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 393877
00:30:10.850   04:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 393877 ']'
00:30:10.850   04:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 393877
00:30:10.850    04:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname
00:30:10.850   04:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:10.850    04:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 393877
00:30:10.850   04:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3
00:30:10.850   04:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']'
00:30:10.851   04:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 393877'
00:30:10.851  killing process with pid 393877
00:30:10.851   04:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 393877
00:30:10.851   04:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 393877
00:30:11.109   04:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:30:11.109   04:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:30:11.109   04:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:30:11.109   04:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr
00:30:11.109   04:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save
00:30:11.109   04:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:30:11.109   04:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore
00:30:11.109   04:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:30:11.109   04:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns
00:30:11.109   04:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:11.109   04:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:11.109    04:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:13.647   04:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:30:13.647  
00:30:13.647  real	0m6.532s
00:30:13.647  user	0m8.983s
00:30:13.647  sys	0m2.538s
00:30:13.647   04:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:13.647   04:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:30:13.647  ************************************
00:30:13.647  END TEST nvmf_bdevio
00:30:13.647  ************************************
00:30:13.647   04:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:30:13.647  
00:30:13.647  real	3m55.437s
00:30:13.647  user	8m52.209s
00:30:13.647  sys	1m26.035s
00:30:13.647   04:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:13.647   04:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:30:13.647  ************************************
00:30:13.647  END TEST nvmf_target_core_interrupt_mode
00:30:13.647  ************************************
00:30:13.647   04:20:41 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode
00:30:13.647   04:20:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:30:13.647   04:20:41 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:30:13.647   04:20:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:30:13.647  ************************************
00:30:13.647  START TEST nvmf_interrupt
00:30:13.647  ************************************
00:30:13.647   04:20:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode
00:30:13.647  * Looking for test storage...
00:30:13.647  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:30:13.647    04:20:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:30:13.647     04:20:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version
00:30:13.647     04:20:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:30:13.647    04:20:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:30:13.647    04:20:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:30:13.647    04:20:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l
00:30:13.647    04:20:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l
00:30:13.647    04:20:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-:
00:30:13.647    04:20:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1
00:30:13.647    04:20:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-:
00:30:13.647    04:20:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2
00:30:13.647    04:20:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<'
00:30:13.647    04:20:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2
00:30:13.647    04:20:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1
00:30:13.647    04:20:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:30:13.647    04:20:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in
00:30:13.647    04:20:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1
00:30:13.647    04:20:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 ))
00:30:13.647    04:20:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:30:13.647     04:20:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1
00:30:13.647     04:20:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1
00:30:13.647     04:20:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:30:13.647     04:20:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1
00:30:13.647    04:20:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1
00:30:13.647     04:20:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2
00:30:13.647     04:20:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2
00:30:13.647     04:20:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:30:13.647     04:20:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2
00:30:13.647    04:20:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2
00:30:13.647    04:20:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:30:13.647    04:20:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:30:13.647    04:20:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0
00:30:13.647    04:20:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:30:13.647    04:20:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:30:13.647  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:13.647  		--rc genhtml_branch_coverage=1
00:30:13.647  		--rc genhtml_function_coverage=1
00:30:13.647  		--rc genhtml_legend=1
00:30:13.647  		--rc geninfo_all_blocks=1
00:30:13.647  		--rc geninfo_unexecuted_blocks=1
00:30:13.647  		
00:30:13.647  		'
00:30:13.647    04:20:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:30:13.647  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:13.647  		--rc genhtml_branch_coverage=1
00:30:13.647  		--rc genhtml_function_coverage=1
00:30:13.647  		--rc genhtml_legend=1
00:30:13.647  		--rc geninfo_all_blocks=1
00:30:13.647  		--rc geninfo_unexecuted_blocks=1
00:30:13.647  		
00:30:13.647  		'
00:30:13.647    04:20:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:30:13.647  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:13.647  		--rc genhtml_branch_coverage=1
00:30:13.647  		--rc genhtml_function_coverage=1
00:30:13.647  		--rc genhtml_legend=1
00:30:13.647  		--rc geninfo_all_blocks=1
00:30:13.647  		--rc geninfo_unexecuted_blocks=1
00:30:13.647  		
00:30:13.647  		'
00:30:13.647    04:20:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:30:13.647  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:13.647  		--rc genhtml_branch_coverage=1
00:30:13.647  		--rc genhtml_function_coverage=1
00:30:13.647  		--rc genhtml_legend=1
00:30:13.647  		--rc geninfo_all_blocks=1
00:30:13.647  		--rc geninfo_unexecuted_blocks=1
00:30:13.647  		
00:30:13.647  		'
00:30:13.647   04:20:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:30:13.647     04:20:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s
00:30:13.647    04:20:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:30:13.647    04:20:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:30:13.647    04:20:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:30:13.647    04:20:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:30:13.647    04:20:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:30:13.648    04:20:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:30:13.648    04:20:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:30:13.648    04:20:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:30:13.648    04:20:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:30:13.648     04:20:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:30:13.648    04:20:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:30:13.648    04:20:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:30:13.648    04:20:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:30:13.648    04:20:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:30:13.648    04:20:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:30:13.648    04:20:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:30:13.648    04:20:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:30:13.648     04:20:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob
00:30:13.648     04:20:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:30:13.648     04:20:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:13.648     04:20:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:13.648      04:20:41 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:13.648      04:20:41 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:13.648      04:20:41 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:13.648      04:20:41 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH
00:30:13.648      04:20:41 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:13.648    04:20:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0
00:30:13.648    04:20:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:30:13.648    04:20:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:30:13.648    04:20:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:30:13.648    04:20:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:30:13.648    04:20:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:30:13.648    04:20:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:30:13.648    04:20:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:30:13.648    04:20:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:30:13.648    04:20:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:30:13.648    04:20:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0
00:30:13.648   04:20:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh
00:30:13.648   04:20:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1
00:30:13.648   04:20:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit
00:30:13.648   04:20:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:30:13.648   04:20:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:30:13.648   04:20:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs
00:30:13.648   04:20:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no
00:30:13.648   04:20:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns
00:30:13.648   04:20:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:13.648   04:20:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:30:13.648    04:20:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:13.648   04:20:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:30:13.648   04:20:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:30:13.648   04:20:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable
00:30:13.648   04:20:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:30:15.553   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:30:15.553   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=()
00:30:15.553   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs
00:30:15.553   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=()
00:30:15.553   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:30:15.553   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=()
00:30:15.553   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers
00:30:15.553   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=()
00:30:15.553   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs
00:30:15.553   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=()
00:30:15.553   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810
00:30:15.553   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=()
00:30:15.553   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722
00:30:15.553   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=()
00:30:15.553   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx
00:30:15.553   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:30:15.553   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:30:15.553   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:30:15.553   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:30:15.553   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:30:15.553   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:30:15.553   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:30:15.554  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:30:15.554  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]]
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:30:15.554  Found net devices under 0000:0a:00.0: cvl_0_0
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]]
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:30:15.554  Found net devices under 0000:0a:00.1: cvl_0_1
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:30:15.554   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:30:15.813   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:30:15.813   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:30:15.813   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:30:15.813   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:30:15.813   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:30:15.813   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:30:15.813   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:30:15.813  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:30:15.813  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms
00:30:15.813  
00:30:15.813  --- 10.0.0.2 ping statistics ---
00:30:15.813  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:15.813  rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms
00:30:15.813   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:30:15.813  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:30:15.813  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms
00:30:15.813  
00:30:15.813  --- 10.0.0.1 ping statistics ---
00:30:15.813  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:15.813  rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms
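The interface bring-up traced above (nvmf/common.sh lines @267-@291) reduces to a simple pattern: flush both ports, move one into a private network namespace, address both ends of the link, open the NVMe/TCP port in the firewall, and verify reachability with ping in both directions. A condensed sketch of those steps, using the interface names and addresses from the log (requires root and the same two-port NIC; shown for orientation, not as a runnable fragment):

```shell
# Flush any stale addresses, then isolate the target port in its own netns.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Address both ends: initiator stays in the root namespace.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Allow the NVMe/TCP listener port, then verify both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```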
00:30:15.813   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:30:15.813   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0
00:30:15.813   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:30:15.813   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:30:15.813   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:30:15.813   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:30:15.813   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:30:15.813   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:30:15.813   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:30:15.813   04:20:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3
00:30:15.813   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:15.814   04:20:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:15.814   04:20:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:30:15.814   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=396085
00:30:15.814   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3
00:30:15.814   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 396085
00:30:15.814   04:20:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 396085 ']'
00:30:15.814   04:20:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:15.814   04:20:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:15.814   04:20:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:15.814  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:15.814   04:20:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:15.814   04:20:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:30:15.814  [2024-12-09 04:20:44.267904] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:30:15.814  [2024-12-09 04:20:44.268968] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:30:15.814  [2024-12-09 04:20:44.269029] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:15.814  [2024-12-09 04:20:44.340360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:30:16.075  [2024-12-09 04:20:44.397242] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:16.075  [2024-12-09 04:20:44.397323] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:16.075  [2024-12-09 04:20:44.397338] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:16.075  [2024-12-09 04:20:44.397350] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:16.075  [2024-12-09 04:20:44.397360] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:16.075  [2024-12-09 04:20:44.398870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:16.075  [2024-12-09 04:20:44.398875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:30:16.075  [2024-12-09 04:20:44.485678] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:30:16.075  [2024-12-09 04:20:44.485684] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:30:16.075  [2024-12-09 04:20:44.485929] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio
00:30:16.075    04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]]
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000
00:30:16.075  5000+0 records in
00:30:16.075  5000+0 records out
00:30:16.075  10240000 bytes (10 MB, 9.8 MiB) copied, 0.0134535 s, 761 MB/s
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:30:16.075  AIO0
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:30:16.075  [2024-12-09 04:20:44.575503] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:30:16.075  [2024-12-09 04:20:44.603703] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
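The RPC calls traced between interrupt.sh@16 and @21 configure the target end to end: back a bdev with the 10 MB AIO file, create the TCP transport, create a subsystem, attach the namespace, and add a listener. Collected in one place as a sketch (NQN, transport options, and addresses are taken from the log; the `rpc.py` path is an assumption and the AIO file path is abbreviated):

```shell
rpc=./scripts/rpc.py

$rpc bdev_aio_create /path/to/aiofile AIO0 2048
$rpc nvmf_create_transport -t tcp -o -u 8192 -q 256
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

Note that because the target runs inside the `cvl_0_0_ns_spdk` namespace here, these calls are issued against the app started via `ip netns exec`, but the RPC socket (`/var/tmp/spdk.sock`) is reachable from the root namespace.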
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1}
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 396085 0
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 396085 0 idle
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=396085
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:30:16.075   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:30:16.075    04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 396085 -w 256
00:30:16.075    04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:30:16.336   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 396085 root      20   0  128.2g  47232  34560 S   0.0   0.1   0:00.27 reactor_0'
00:30:16.336    04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 396085 root 20 0 128.2g 47232 34560 S 0.0 0.1 0:00.27 reactor_0
00:30:16.336    04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:30:16.336    04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:30:16.336   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:30:16.336   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:30:16.336   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:30:16.336   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:30:16.336   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:30:16.336   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
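The `reactor_is_busy_or_idle` check traced above samples one `top -bHn 1` line for the reactor thread, strips leading whitespace, takes column 9 (%CPU), truncates it to an integer, and compares against a threshold. A self-contained sketch of that parsing step (the sample line below is illustrative, not taken from a live process):

```shell
# One thread line as printed by `top -bHn 1 -w 256`, filtered by `grep reactor_0`.
top_reactor=' 396085 root      20   0  128.2g  47232  34560 S   0.0   0.1   0:00.27 reactor_0'

# Strip leading whitespace and pick the %CPU column, as interrupt/common.sh does.
cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
cpu_rate=${cpu_rate%.*}   # drop the fractional part: "0.0" -> "0"

idle_threshold=30
if (( cpu_rate > idle_threshold )); then
  echo busy
else
  echo idle                # prints "idle" for the 0.0% sample above
fi
```

The same pipeline with `busy_threshold` and the opposite comparison yields the `reactor_is_busy` check used after the perf workload starts.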
00:30:16.336   04:20:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1}
00:30:16.336   04:20:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 396085 1
00:30:16.336   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 396085 1 idle
00:30:16.336   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=396085
00:30:16.336   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:30:16.336   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:30:16.336   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:30:16.336   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:30:16.336   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:30:16.336   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:30:16.336   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:30:16.336   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:30:16.336   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:30:16.336    04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 396085 -w 256
00:30:16.336    04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:30:16.595   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 396113 root      20   0  128.2g  47232  34560 S   0.0   0.1   0:00.00 reactor_1'
00:30:16.595    04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 396113 root 20 0 128.2g 47232 34560 S 0.0 0.1 0:00.00 reactor_1
00:30:16.595    04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:30:16.595    04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:30:16.595   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:30:16.595   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:30:16.595   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:30:16.595   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:30:16.595   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:30:16.595   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:30:16.595   04:20:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:30:16.595   04:20:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=396155
00:30:16.595   04:20:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:30:16.595   04:20:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1}
00:30:16.595   04:20:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30
00:30:16.595   04:20:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 396085 0
00:30:16.595   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 396085 0 busy
00:30:16.595   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=396085
00:30:16.595   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:30:16.595   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy
00:30:16.596   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30
00:30:16.596   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:30:16.596   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]]
00:30:16.596   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:30:16.596   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:30:16.596   04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:30:16.596    04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 396085 -w 256
00:30:16.596    04:20:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:30:16.596   04:20:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 396085 root      20   0  128.2g  48000  34560 R  99.9   0.1   0:00.47 reactor_0'
00:30:16.596    04:20:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 396085 root 20 0 128.2g 48000 34560 R 99.9 0.1 0:00.47 reactor_0
00:30:16.596    04:20:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:30:16.596    04:20:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:30:16.596   04:20:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9
00:30:16.596   04:20:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99
00:30:16.596   04:20:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:30:16.596   04:20:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:30:16.596   04:20:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:30:16.596   04:20:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:30:16.596   04:20:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1}
00:30:16.596   04:20:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30
00:30:16.596   04:20:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 396085 1
00:30:16.596   04:20:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 396085 1 busy
00:30:16.596   04:20:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=396085
00:30:16.596   04:20:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:30:16.596   04:20:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy
00:30:16.596   04:20:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30
00:30:16.596   04:20:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:30:16.596   04:20:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]]
00:30:16.596   04:20:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:30:16.596   04:20:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:30:16.596   04:20:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:30:16.596    04:20:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 396085 -w 256
00:30:16.596    04:20:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:30:16.853   04:20:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 396113 root      20   0  128.2g  48000  34560 R  99.9   0.1   0:00.26 reactor_1'
00:30:16.854    04:20:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 396113 root 20 0 128.2g 48000 34560 R 99.9 0.1 0:00.26 reactor_1
00:30:16.854    04:20:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:30:16.854    04:20:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:30:16.854   04:20:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9
00:30:16.854   04:20:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99
00:30:16.854   04:20:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:30:16.854   04:20:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:30:16.854   04:20:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:30:16.854   04:20:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:30:16.854   04:20:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 396155
00:30:26.822  Initializing NVMe Controllers
00:30:26.822  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:26.822  Controller IO queue size 256, less than required.
00:30:26.822  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:26.822  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:30:26.822  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:30:26.822  Initialization complete. Launching workers.
00:30:26.822  ========================================================
00:30:26.822                                                                                                               Latency(us)
00:30:26.822  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:30:26.822  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:   13754.89      53.73   18623.69    4476.27   22804.97
00:30:26.822  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:   13575.29      53.03   18870.57    4180.37   27103.99
00:30:26.822  ========================================================
00:30:26.822  Total                                                                    :   27330.17     106.76   18746.32    4180.37   27103.99
00:30:26.822  
00:30:26.822   04:20:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:30:26.822   04:20:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 396085 0
00:30:26.822   04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 396085 0 idle
00:30:26.822   04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=396085
00:30:26.822   04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:30:26.822   04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:30:26.822   04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:30:26.822   04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:30:26.822   04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:30:26.822   04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:30:26.822   04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:30:26.822   04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:30:26.822   04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:30:26.822    04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 396085 -w 256
00:30:26.822    04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:30:26.822   04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 396085 root      20   0  128.2g  48000  34560 S   0.0   0.1   0:20.21 reactor_0'
00:30:26.822    04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 396085 root 20 0 128.2g 48000 34560 S 0.0 0.1 0:20.21 reactor_0
00:30:26.822    04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:30:26.822    04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:30:26.822   04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:30:26.822   04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:30:26.822   04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:30:26.822   04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:30:26.822   04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:30:26.822   04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:30:26.822   04:20:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:30:26.822   04:20:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 396085 1
00:30:26.822   04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 396085 1 idle
00:30:26.822   04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=396085
00:30:26.822   04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:30:26.822   04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:30:26.822   04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:30:26.822   04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:30:26.822   04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:30:26.822   04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:30:26.822   04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:30:26.822   04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:30:26.822   04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:30:26.822    04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 396085 -w 256
00:30:26.822    04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:30:26.822   04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 396113 root      20   0  128.2g  48000  34560 S   0.0   0.1   0:09.98 reactor_1'
00:30:27.081    04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 396113 root 20 0 128.2g 48000 34560 S 0.0 0.1 0:09.98 reactor_1
00:30:27.081    04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:30:27.081    04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:30:27.081   04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:30:27.082   04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:30:27.082   04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:30:27.082   04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:30:27.082   04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:30:27.082   04:20:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
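The `reactor_is_busy_or_idle` trace repeated above reduces to a small amount of parsing: sample the target's threads once with `top -bH`, isolate the `reactor_N` line, and compare its %CPU column against fixed thresholds. A minimal sketch of the same logic follows (function and threshold names are taken from the log; the `awk` field index assumes top's default batch-mode column layout, where %CPU is field 9, and the busy-path comparison is not exercised in this trace, so its exact operator is an assumption):

```shell
#!/usr/bin/env bash
# Extract the %CPU column from one `top -bH` line, as common.sh@27 does.
parse_cpu_rate() {
    sed -e 's/^\s*//g' <<<"$1" | awk '{print $9}'
}

# Sketch of interrupt/common.sh's reactor_is_busy_or_idle: sample the
# target's threads once, find the reactor thread, and compare its CPU
# rate against the busy/idle thresholds seen in the trace (65 / 30).
reactor_is_busy_or_idle() {
    local pid=$1 idx=$2 state=$3
    local busy_threshold=65 idle_threshold=30
    local top_reactor cpu_rate
    top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}")
    cpu_rate=$(parse_cpu_rate "$top_reactor")
    cpu_rate=${cpu_rate%.*}        # drop the fractional part (0.0 -> 0)
    cpu_rate=${cpu_rate:-0}
    if [[ $state == busy ]]; then
        (( cpu_rate >= busy_threshold ))   # busy path; assumed operator
    else
        (( cpu_rate <= idle_threshold ))   # idle path, as traced above
    fi
}
```

The log shows the idle path only: `cpu_rate=0`, `(( cpu_rate > idle_threshold ))` fails, and the function returns 0, meaning the reactor is idle.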
00:30:27.082   04:20:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:30:27.082   04:20:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME
00:30:27.082   04:20:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0
00:30:27.082   04:20:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:30:27.082   04:20:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:30:27.082   04:20:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2
00:30:29.612   04:20:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:30:29.613    04:20:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:30:29.613    04:20:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0
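`waitforserial`, traced above, is a simple poll loop: sleep 2 s, list block devices with `lsblk -l -o NAME,SERIAL`, and count rows carrying the expected serial, giving up after 16 attempts. A sketch under the same structure (the device count is factored into a pure helper here for readability; the real script pipes `lsblk` into `grep -c` directly):

```shell
#!/usr/bin/env bash
# Count lines (e.g. from `lsblk -l -o NAME,SERIAL`) containing a serial.
count_serial() {
    grep -c "$1" || true   # grep -c exits 1 on zero matches; still prints 0
}

# Sketch of autotest_common.sh's waitforserial: up to 16 attempts,
# 2 s apart, until the expected number of devices shows up.
waitforserial() {
    local serial=$1 nvme_device_counter=${2:-1}
    local i=0 nvme_devices
    while (( i++ <= 15 )); do
        sleep 2
        nvme_devices=$(lsblk -l -o NAME,SERIAL | count_serial "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
    done
    return 1
}
```

In the trace the first poll already finds one matching device (`nvme_devices=1`), so the loop exits on its first iteration.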
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1}
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 396085 0
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 396085 0 idle
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=396085
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:30:29.613    04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 396085 -w 256
00:30:29.613    04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 396085 root      20   0  128.2g  60288  34560 S   0.0   0.1   0:20.31 reactor_0'
00:30:29.613    04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 396085 root 20 0 128.2g 60288 34560 S 0.0 0.1 0:20.31 reactor_0
00:30:29.613    04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:30:29.613    04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1}
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 396085 1
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 396085 1 idle
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=396085
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:30:29.613    04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 396085 -w 256
00:30:29.613    04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 396113 root      20   0  128.2g  60288  34560 S   0.0   0.1   0:10.01 reactor_1'
00:30:29.613    04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 396113 root 20 0 128.2g 60288 34560 S 0.0 0.1 0:10.01 reactor_1
00:30:29.613    04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:30:29.613    04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:30:29.613   04:20:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:30:29.613  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:30:29.613   04:20:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:30:29.613   04:20:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0
00:30:29.613   04:20:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:30:29.613   04:20:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:30:29.613   04:20:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:30:29.613   04:20:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:30:29.613   04:20:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0
00:30:29.613   04:20:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT
00:30:29.613   04:20:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini
00:30:29.613   04:20:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:29.613   04:20:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync
00:30:29.613   04:20:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:29.613   04:20:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e
00:30:29.613   04:20:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:29.613   04:20:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:29.613  rmmod nvme_tcp
00:30:29.613  rmmod nvme_fabrics
00:30:29.613  rmmod nvme_keyring
00:30:29.613   04:20:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:29.613   04:20:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e
00:30:29.613   04:20:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0
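The cleanup path above wraps `modprobe -v -r` in a `set +e` … `set -e` window and retries for up to 20 iterations, since module removal can fail transiently while the initiator still holds references. The same pattern as a generic helper (a sketch; the real script inlines the loop and the retry count, and the sleep interval here is assumed):

```shell
#!/usr/bin/env bash
# Retry a command up to 20 times, tolerating transient failures,
# mirroring the set +e / for i in {1..20} / set -e window in the trace.
retry() {
    local i
    for i in {1..20}; do
        if "$@"; then
            return 0
        fi
        sleep 1   # interval assumed; not visible in the trace
    done
    return 1
}

# e.g.: retry modprobe -v -r nvme-tcp && retry modprobe -v -r nvme-fabrics
```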
00:30:29.613   04:20:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 396085 ']'
00:30:29.613   04:20:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 396085
00:30:29.613   04:20:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 396085 ']'
00:30:29.613   04:20:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 396085
00:30:29.613    04:20:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname
00:30:29.613   04:20:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:29.613    04:20:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 396085
00:30:29.613   04:20:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:30:29.613   04:20:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:30:29.613   04:20:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 396085'
00:30:29.613  killing process with pid 396085
00:30:29.613   04:20:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 396085
00:30:29.613   04:20:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 396085
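`killprocess`, traced above, checks that the pid argument is set, probes it with `kill -0`, refuses to signal a `sudo` wrapper directly, then kills and reaps the target. A condensed sketch (the real function also special-cases the sudo-wrapped child, a branch this trace does not exercise, so it is simply a bail-out here):

```shell
#!/usr/bin/env bash
# Sketch of autotest_common.sh's killprocess as traced above.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0   # nothing to do if already gone
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        # The real script handles sudo wrappers specially; this sketch bails.
        [ "$process_name" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true   # reap; ignore the SIGTERM exit status
}
```

Note that `wait` only reaps children of the calling shell, which holds for the nvmf_tgt process started by the same test script.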
00:30:29.872   04:20:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:30:29.872   04:20:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:30:29.872   04:20:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:30:29.872   04:20:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr
00:30:29.872   04:20:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save
00:30:29.872   04:20:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:30:29.872   04:20:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore
00:30:29.872   04:20:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:30:29.872   04:20:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns
00:30:29.872   04:20:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:29.872   04:20:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:30:29.872    04:20:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:32.409   04:21:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:30:32.409  
00:30:32.409  real	0m18.675s
00:30:32.409  user	0m37.631s
00:30:32.409  sys	0m6.220s
00:30:32.409   04:21:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:32.409   04:21:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:30:32.409  ************************************
00:30:32.409  END TEST nvmf_interrupt
00:30:32.409  ************************************
00:30:32.409  
00:30:32.409  real	25m8.547s
00:30:32.409  user	58m43.166s
00:30:32.409  sys	6m36.543s
00:30:32.409   04:21:00 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:32.409   04:21:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:30:32.409  ************************************
00:30:32.409  END TEST nvmf_tcp
00:30:32.409  ************************************
00:30:32.409   04:21:00  -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]]
00:30:32.409   04:21:00  -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:30:32.409   04:21:00  -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:30:32.409   04:21:00  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:30:32.409   04:21:00  -- common/autotest_common.sh@10 -- # set +x
00:30:32.409  ************************************
00:30:32.409  START TEST spdkcli_nvmf_tcp
00:30:32.409  ************************************
00:30:32.409   04:21:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:30:32.410  * Looking for test storage...
00:30:32.410  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:30:32.410     04:21:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version
00:30:32.410     04:21:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:30:32.410     04:21:00 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1
00:30:32.410     04:21:00 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1
00:30:32.410     04:21:00 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:30:32.410     04:21:00 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:30:32.410     04:21:00 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2
00:30:32.410     04:21:00 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2
00:30:32.410     04:21:00 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:30:32.410     04:21:00 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0
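The `lt 1.15 2` call traced above drives scripts/common.sh's `cmp_versions`: split both version strings on `.`, `-`, or `:` (the `IFS=.-:` lines), then compare numerically component by component, padding the shorter version with zeros. A sketch of the same algorithm (the real script also sanitizes non-numeric components via its `decimal` helper, which is omitted here, so this sketch assumes purely numeric components):

```shell
#!/usr/bin/env bash
# Sketch of scripts/common.sh's cmp_versions / lt as traced above.
cmp_versions() {
    local IFS=.-:                 # split on the same separators as the trace
    local -a ver1 ver2
    read -ra ver1 <<<"$1"
    local op=$2
    read -ra ver2 <<<"$3"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # pad missing parts with 0
        if (( a > b )); then [[ $op == '>' || $op == '>=' ]]; return; fi
        if (( a < b )); then [[ $op == '<' || $op == '<=' ]]; return; fi
    done
    [[ $op == '>=' || $op == '<=' || $op == '==' ]]   # all parts equal
}
lt() { cmp_versions "$1" '<' "$2"; }
```

With the lcov version from the trace this yields `lt 1.15 2` → true (1 < 2 on the first component), which is why the script selects the lcov 1.x option set.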
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:30:32.410  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:32.410  		--rc genhtml_branch_coverage=1
00:30:32.410  		--rc genhtml_function_coverage=1
00:30:32.410  		--rc genhtml_legend=1
00:30:32.410  		--rc geninfo_all_blocks=1
00:30:32.410  		--rc geninfo_unexecuted_blocks=1
00:30:32.410  		
00:30:32.410  		'
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:30:32.410  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:32.410  		--rc genhtml_branch_coverage=1
00:30:32.410  		--rc genhtml_function_coverage=1
00:30:32.410  		--rc genhtml_legend=1
00:30:32.410  		--rc geninfo_all_blocks=1
00:30:32.410  		--rc geninfo_unexecuted_blocks=1
00:30:32.410  		
00:30:32.410  		'
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:30:32.410  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:32.410  		--rc genhtml_branch_coverage=1
00:30:32.410  		--rc genhtml_function_coverage=1
00:30:32.410  		--rc genhtml_legend=1
00:30:32.410  		--rc geninfo_all_blocks=1
00:30:32.410  		--rc geninfo_unexecuted_blocks=1
00:30:32.410  		
00:30:32.410  		'
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:30:32.410  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:32.410  		--rc genhtml_branch_coverage=1
00:30:32.410  		--rc genhtml_function_coverage=1
00:30:32.410  		--rc genhtml_legend=1
00:30:32.410  		--rc geninfo_all_blocks=1
00:30:32.410  		--rc geninfo_unexecuted_blocks=1
00:30:32.410  		
00:30:32.410  		'
00:30:32.410   04:21:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:30:32.410   04:21:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:30:32.410     04:21:00 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:30:32.410     04:21:00 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:30:32.410     04:21:00 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob
00:30:32.410     04:21:00 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:30:32.410     04:21:00 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:32.410     04:21:00 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:32.410      04:21:00 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:32.410      04:21:00 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:32.410      04:21:00 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:32.410      04:21:00 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH
00:30:32.410      04:21:00 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:30:32.410  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:30:32.410    04:21:00 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0
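The `[: : integer expression expected` error at nvmf/common.sh line 33 above is the classic bash pitfall of handing an empty variable to a numeric test: the trace shows `'[' '' -eq 1 ']'`. The variable involved is not named in the trace; judging by this job's commit message it may be SPDK_TEST_NVME_INTERRUPT, but that is an assumption, as is the `--interrupt-mode` flag name below. Defaulting the expansion avoids the error either way:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: default an unset/empty test flag to 0 before the
# numeric comparison, instead of running `[ "$FLAG" -eq 1 ]` on an empty
# string. Variable and flag names here are assumptions, not upstream code.
build_interrupt_args() {
    local flag=${SPDK_TEST_NVME_INTERRUPT:-0}   # assumed variable name
    if [ "$flag" -eq 1 ]; then
        echo "--interrupt-mode"                 # hypothetical flag
    fi
}
```

`${var:-0}` covers both the unset and the empty-string cases, which is exactly the state that produced the logged error.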
00:30:32.410   04:21:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test
00:30:32.410   04:21:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf
00:30:32.410   04:21:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT
00:30:32.410   04:21:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt
00:30:32.410   04:21:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:32.410   04:21:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:30:32.410   04:21:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt
00:30:32.410   04:21:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=398224
00:30:32.410   04:21:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0
00:30:32.410   04:21:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 398224
00:30:32.410   04:21:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 398224 ']'
00:30:32.410   04:21:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:32.411   04:21:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:32.411   04:21:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:32.411  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:32.411   04:21:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:32.411   04:21:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:30:32.411  [2024-12-09 04:21:00.783118] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:30:32.411  [2024-12-09 04:21:00.783194] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid398224 ]
00:30:32.411  [2024-12-09 04:21:00.853456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:30:32.411  [2024-12-09 04:21:00.914230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:32.411  [2024-12-09 04:21:00.914234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:30:32.670   04:21:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:32.670   04:21:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0
00:30:32.670   04:21:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt
00:30:32.670   04:21:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:30:32.670   04:21:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:30:32.670   04:21:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1
00:30:32.670   04:21:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]]
00:30:32.670   04:21:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config
00:30:32.670   04:21:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:32.670   04:21:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:30:32.670   04:21:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True
00:30:32.670  '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True
00:30:32.670  '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True
00:30:32.670  '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True
00:30:32.670  '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True
00:30:32.670  '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True
00:30:32.670  '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True
00:30:32.670  '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW  max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:30:32.670  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True
00:30:32.670  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True
00:30:32.670  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create  tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:30:32.670  '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:30:32.670  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True
00:30:32.670  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create  tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:30:32.670  '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:30:32.670  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True
00:30:32.670  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create  tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:30:32.670  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create  tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True
00:30:32.670  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create  nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:30:32.670  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create  nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:30:32.670  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\''
00:30:32.670  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True
00:30:32.670  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True
00:30:32.670  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True
00:30:32.670  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:30:32.670  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True
00:30:32.670  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True
00:30:32.670  '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\''
00:30:32.670  '
00:30:35.202  [2024-12-09 04:21:03.749645] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:36.579  [2024-12-09 04:21:05.022061] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 ***
00:30:39.105  [2024-12-09 04:21:07.369223] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 ***
00:30:40.998  [2024-12-09 04:21:09.379566] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 ***
00:30:42.369  Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True]
00:30:42.369  Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True]
00:30:42.369  Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True]
00:30:42.369  Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True]
00:30:42.369  Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True]
00:30:42.369  Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True]
00:30:42.369  Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True]
00:30:42.369  Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW  max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True]
00:30:42.369  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True]
00:30:42.369  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True]
00:30:42.369  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create  tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:30:42.369  Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:30:42.369  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True]
00:30:42.369  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create  tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:30:42.369  Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:30:42.369  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True]
00:30:42.369  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create  tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:30:42.369  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create  tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True]
00:30:42.369  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create  nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True]
00:30:42.369  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create  nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:30:42.369  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False]
00:30:42.369  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True]
00:30:42.369  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True]
00:30:42.369  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True]
00:30:42.369  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:30:42.369  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True]
00:30:42.369  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True]
00:30:42.369  Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False]
00:30:42.627   04:21:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config
00:30:42.627   04:21:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:30:42.627   04:21:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:30:42.627   04:21:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match
00:30:42.627   04:21:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:42.627   04:21:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:30:42.627   04:21:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match
00:30:42.627   04:21:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf
00:30:43.194   04:21:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match
00:30:43.194   04:21:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test
00:30:43.194   04:21:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match
00:30:43.194   04:21:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:30:43.194   04:21:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:30:43.194   04:21:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config
00:30:43.194   04:21:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:43.194   04:21:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:30:43.194   04:21:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\''
00:30:43.194  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\''
00:30:43.194  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:30:43.194  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\''
00:30:43.194  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\''
00:30:43.194  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\''
00:30:43.194  '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\''
00:30:43.194  '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:30:43.194  '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\''
00:30:43.194  '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\''
00:30:43.194  '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\''
00:30:43.194  '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\''
00:30:43.194  '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\''
00:30:43.194  '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\''
00:30:43.194  '
00:30:48.457  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False]
00:30:48.457  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False]
00:30:48.457  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False]
00:30:48.457  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False]
00:30:48.457  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False]
00:30:48.457  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False]
00:30:48.457  Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False]
00:30:48.457  Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False]
00:30:48.457  Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False]
00:30:48.457  Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False]
00:30:48.457  Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False]
00:30:48.457  Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False]
00:30:48.457  Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False]
00:30:48.457  Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False]
00:30:48.457   04:21:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config
00:30:48.457   04:21:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:30:48.457   04:21:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:30:48.715   04:21:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 398224
00:30:48.715   04:21:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 398224 ']'
00:30:48.715   04:21:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 398224
00:30:48.715    04:21:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname
00:30:48.715   04:21:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:48.715    04:21:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 398224
00:30:48.715   04:21:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:30:48.715   04:21:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:30:48.715   04:21:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 398224'
00:30:48.715  killing process with pid 398224
00:30:48.715   04:21:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 398224
00:30:48.715   04:21:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 398224
00:30:48.975   04:21:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup
00:30:48.975   04:21:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']'
00:30:48.975   04:21:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 398224 ']'
00:30:48.975   04:21:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 398224
00:30:48.975   04:21:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 398224 ']'
00:30:48.975   04:21:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 398224
00:30:48.975  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (398224) - No such process
00:30:48.975   04:21:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 398224 is not found'
00:30:48.975  Process with pid 398224 is not found
00:30:48.975   04:21:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']'
00:30:48.975   04:21:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']'
00:30:48.975   04:21:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio
00:30:48.975  
00:30:48.975  real	0m16.770s
00:30:48.975  user	0m35.667s
00:30:48.975  sys	0m0.884s
00:30:48.975   04:21:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:48.975   04:21:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:30:48.975  ************************************
00:30:48.975  END TEST spdkcli_nvmf_tcp
00:30:48.975  ************************************
00:30:48.975   04:21:17  -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp
00:30:48.975   04:21:17  -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:30:48.975   04:21:17  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:30:48.975   04:21:17  -- common/autotest_common.sh@10 -- # set +x
00:30:48.975  ************************************
00:30:48.975  START TEST nvmf_identify_passthru
00:30:48.975  ************************************
00:30:48.975   04:21:17 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp
00:30:48.975  * Looking for test storage...
00:30:48.975  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:30:48.975    04:21:17 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:30:48.975     04:21:17 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version
00:30:48.975     04:21:17 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:30:48.975    04:21:17 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:30:48.975    04:21:17 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:30:48.975    04:21:17 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l
00:30:48.975    04:21:17 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l
00:30:48.975    04:21:17 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-:
00:30:48.975    04:21:17 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1
00:30:48.975    04:21:17 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-:
00:30:48.975    04:21:17 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2
00:30:48.975    04:21:17 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<'
00:30:48.975    04:21:17 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2
00:30:48.975    04:21:17 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1
00:30:48.975    04:21:17 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:30:48.975    04:21:17 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in
00:30:48.975    04:21:17 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1
00:30:48.975    04:21:17 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 ))
00:30:48.975    04:21:17 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:30:48.975     04:21:17 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1
00:30:48.975     04:21:17 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1
00:30:48.975     04:21:17 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:30:48.975     04:21:17 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1
00:30:48.975    04:21:17 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1
00:30:48.975     04:21:17 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2
00:30:48.975     04:21:17 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2
00:30:48.975     04:21:17 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:30:48.975     04:21:17 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2
00:30:48.975    04:21:17 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2
00:30:48.975    04:21:17 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:30:48.975    04:21:17 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:30:48.975    04:21:17 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0
00:30:48.975    04:21:17 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:30:48.975    04:21:17 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:30:48.975  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:48.975  		--rc genhtml_branch_coverage=1
00:30:48.975  		--rc genhtml_function_coverage=1
00:30:48.975  		--rc genhtml_legend=1
00:30:48.975  		--rc geninfo_all_blocks=1
00:30:48.975  		--rc geninfo_unexecuted_blocks=1
00:30:48.975  		
00:30:48.975  		'
00:30:48.975    04:21:17 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:30:48.975  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:48.975  		--rc genhtml_branch_coverage=1
00:30:48.975  		--rc genhtml_function_coverage=1
00:30:48.975  		--rc genhtml_legend=1
00:30:48.975  		--rc geninfo_all_blocks=1
00:30:48.975  		--rc geninfo_unexecuted_blocks=1
00:30:48.975  		
00:30:48.975  		'
00:30:48.975    04:21:17 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:30:48.975  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:48.975  		--rc genhtml_branch_coverage=1
00:30:48.975  		--rc genhtml_function_coverage=1
00:30:48.975  		--rc genhtml_legend=1
00:30:48.975  		--rc geninfo_all_blocks=1
00:30:48.975  		--rc geninfo_unexecuted_blocks=1
00:30:48.975  		
00:30:48.975  		'
00:30:48.975    04:21:17 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:30:48.975  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:48.975  		--rc genhtml_branch_coverage=1
00:30:48.975  		--rc genhtml_function_coverage=1
00:30:48.975  		--rc genhtml_legend=1
00:30:48.975  		--rc geninfo_all_blocks=1
00:30:48.975  		--rc geninfo_unexecuted_blocks=1
00:30:48.975  		
00:30:48.975  		'
00:30:48.975   04:21:17 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:30:48.975     04:21:17 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s
00:30:48.975    04:21:17 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:30:48.975    04:21:17 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:30:48.975    04:21:17 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:30:48.975    04:21:17 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:30:48.975    04:21:17 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:30:48.975    04:21:17 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:30:48.975    04:21:17 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:30:48.975    04:21:17 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:30:48.975    04:21:17 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:30:48.975     04:21:17 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:30:48.975    04:21:17 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:30:48.975    04:21:17 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:30:48.975    04:21:17 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:30:48.975    04:21:17 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:30:48.975    04:21:17 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:30:48.975    04:21:17 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:30:48.975    04:21:17 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:30:48.975     04:21:17 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob
00:30:48.975     04:21:17 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:30:48.975     04:21:17 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:48.975     04:21:17 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:48.975      04:21:17 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:48.975      04:21:17 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:48.975      04:21:17 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:48.975      04:21:17 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH
00:30:48.975      04:21:17 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:48.975    04:21:17 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0
00:30:48.975    04:21:17 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:30:48.975    04:21:17 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:30:48.975    04:21:17 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:30:48.975    04:21:17 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:30:48.975    04:21:17 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:30:48.975    04:21:17 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:30:48.975  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:30:48.975    04:21:17 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:30:48.975    04:21:17 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:30:48.975    04:21:17 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0
00:30:48.975   04:21:17 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:30:48.976    04:21:17 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob
00:30:48.976    04:21:17 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:30:48.976    04:21:17 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:48.976    04:21:17 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:48.976     04:21:17 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:48.976     04:21:17 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:48.976     04:21:17 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:48.976     04:21:17 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH
00:30:48.976     04:21:17 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:48.976   04:21:17 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit
00:30:48.976   04:21:17 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:30:48.976   04:21:17 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:30:48.976   04:21:17 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs
00:30:48.976   04:21:17 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no
00:30:48.976   04:21:17 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns
00:30:48.976   04:21:17 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:48.976   04:21:17 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:30:48.976    04:21:17 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:48.976   04:21:17 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:30:48.976   04:21:17 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:30:48.976   04:21:17 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable
00:30:48.976   04:21:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=()
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=()
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=()
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=()
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=()
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=()
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=()
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:30:51.509  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:30:51.509  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]]
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:30:51.509  Found net devices under 0000:0a:00.0: cvl_0_0
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]]
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:30:51.509  Found net devices under 0000:0a:00.1: cvl_0_1
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
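The `pci_net_devs=("${pci_net_devs[@]##*/}")` step traced above turns full sysfs glob paths into bare interface names via bash parameter expansion. A standalone sketch of that expansion (the path below is a representative example, not copied from this run):

```shell
#!/usr/bin/env bash
# Simulate what common.sh gets from the glob /sys/bus/pci/devices/$pci/net/*
pci_net_devs=("/sys/bus/pci/devices/0000:0a:00.0/net/cvl_0_0")

# '##*/' strips the longest prefix ending in '/', i.e. the directory part,
# leaving just the net device name for each array element.
pci_net_devs=("${pci_net_devs[@]##*/}")

echo "${pci_net_devs[0]}"
```

This is the same trick as `basename`, but applied to every array element in one expansion without spawning a process per path.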
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:30:51.509  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:30:51.509  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.333 ms
00:30:51.509  
00:30:51.509  --- 10.0.0.2 ping statistics ---
00:30:51.509  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:51.509  rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:30:51.509  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:30:51.509  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms
00:30:51.509  
00:30:51.509  --- 10.0.0.1 ping statistics ---
00:30:51.509  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:51.509  rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms
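The `nvmf_tcp_init` sequence above pairs the two detected interfaces: the first (`cvl_0_0`) is moved into a fresh network namespace as the target side at 10.0.0.2, while the second (`cvl_0_1`) stays in the host namespace as the initiator at 10.0.0.1, and the pings verify both directions. A simplified reconstruction of just the role-assignment logic visible in the trace (the real common.sh also handles the single-interface and RDMA cases):

```shell
#!/usr/bin/env bash
# Detected interfaces, in discovery order (from the "Found net devices" lines)
net_devs=(cvl_0_0 cvl_0_1)

NVMF_FIRST_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2

# With more than one interface, the first becomes the target and the
# second the initiator, mirroring common.sh lines 256-259 in the trace.
if (( ${#net_devs[@]} > 1 )); then
    NVMF_TARGET_INTERFACE=${net_devs[0]}
    NVMF_INITIATOR_INTERFACE=${net_devs[1]}
fi

# The namespace name is derived from the target interface (line 265).
NVMF_TARGET_NAMESPACE="${NVMF_TARGET_INTERFACE}_ns_spdk"

echo "target=$NVMF_TARGET_INTERFACE initiator=$NVMF_INITIATOR_INTERFACE ns=$NVMF_TARGET_NAMESPACE"
```

The actual `ip netns add` / `ip link set ... netns` / `ip addr add` calls require root and real devices, so they are left out of this sketch.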
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:30:51.509   04:21:19 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:30:51.509   04:21:19 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify
00:30:51.509   04:21:19 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:51.509   04:21:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:30:51.509    04:21:19 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf
00:30:51.509    04:21:19 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=()
00:30:51.509    04:21:19 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs
00:30:51.509    04:21:19 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs))
00:30:51.509     04:21:19 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs
00:30:51.509     04:21:19 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=()
00:30:51.509     04:21:19 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs
00:30:51.509     04:21:19 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:30:51.509      04:21:19 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:30:51.509      04:21:19 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:30:51.509     04:21:19 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:30:51.509     04:21:19 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0
00:30:51.509    04:21:19 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:88:00.0
00:30:51.509   04:21:19 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0
00:30:51.509   04:21:19 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']'
00:30:51.509    04:21:19 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0
00:30:51.509    04:21:19 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:'
00:30:51.509    04:21:19 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}'
00:30:55.699   04:21:24 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN
00:30:55.699    04:21:24 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0
00:30:55.699    04:21:24 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:'
00:30:55.699    04:21:24 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}'
00:30:59.898   04:21:28 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL
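The `nvme_model_number=INTEL` result above comes from piping `spdk_nvme_identify` output through `grep 'Model Number:' | awk '{print $3}'`: awk's `$3` is the third whitespace-separated field, so a multi-word model string is truncated to its first word. A self-contained reproduction (the identify text below is a made-up stand-in, and the `SSDPE2KX020T8` continuation is hypothetical since the log only shows "INTEL"):

```shell
#!/usr/bin/env bash
# Stand-in for the spdk_nvme_identify controller page output.
identify_output='Serial Number:                         PHLJ916004901P0FGN
Model Number:                          INTEL SSDPE2KX020T8'

# Same pipeline as identify_passthru.sh lines 23-24 in the trace.
serial=$(grep 'Serial Number:' <<<"$identify_output" | awk '{print $3}')
model=$(grep 'Model Number:' <<<"$identify_output" | awk '{print $3}')

# $3 grabs only the first token of the value, which is why the trace
# records nvme_model_number=INTEL rather than the full model string.
echo "serial=$serial model=$model"
```

The later `'[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']'` comparison only needs the passthru and local values to be extracted the same way, so the truncation is harmless for this test.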
00:30:59.898   04:21:28 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify
00:30:59.898   04:21:28 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable
00:30:59.898   04:21:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:30:59.898   04:21:28 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt
00:30:59.898   04:21:28 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:59.898   04:21:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:30:59.898   04:21:28 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=403429
00:30:59.898   04:21:28 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:30:59.898   04:21:28 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:30:59.898   04:21:28 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 403429
00:30:59.898   04:21:28 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 403429 ']'
00:30:59.898   04:21:28 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:59.898   04:21:28 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:59.898   04:21:28 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:59.898  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:59.898   04:21:28 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:59.899   04:21:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:30:59.899  [2024-12-09 04:21:28.363088] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:30:59.899  [2024-12-09 04:21:28.363191] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:59.899  [2024-12-09 04:21:28.436976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:31:00.157  [2024-12-09 04:21:28.496191] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:00.157  [2024-12-09 04:21:28.496243] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:00.157  [2024-12-09 04:21:28.496278] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:00.157  [2024-12-09 04:21:28.496298] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:00.157  [2024-12-09 04:21:28.496327] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:00.157  [2024-12-09 04:21:28.497786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:00.157  [2024-12-09 04:21:28.497842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:31:00.157  [2024-12-09 04:21:28.497909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:31:00.157  [2024-12-09 04:21:28.497913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:00.157   04:21:28 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:00.157   04:21:28 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0
00:31:00.157   04:21:28 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr
00:31:00.157   04:21:28 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:00.157   04:21:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:31:00.157  INFO: Log level set to 20
00:31:00.157  INFO: Requests:
00:31:00.157  {
00:31:00.157    "jsonrpc": "2.0",
00:31:00.157    "method": "nvmf_set_config",
00:31:00.157    "id": 1,
00:31:00.157    "params": {
00:31:00.157      "admin_cmd_passthru": {
00:31:00.157        "identify_ctrlr": true
00:31:00.157      }
00:31:00.157    }
00:31:00.157  }
00:31:00.157  
00:31:00.157  INFO: response:
00:31:00.157  {
00:31:00.157    "jsonrpc": "2.0",
00:31:00.157    "id": 1,
00:31:00.157    "result": true
00:31:00.157  }
00:31:00.157  
00:31:00.157   04:21:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:00.157   04:21:28 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init
00:31:00.157   04:21:28 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:00.157   04:21:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:31:00.157  INFO: Setting log level to 20
00:31:00.157  INFO: Setting log level to 20
00:31:00.157  INFO: Log level set to 20
00:31:00.157  INFO: Log level set to 20
00:31:00.157  INFO: Requests:
00:31:00.157  {
00:31:00.157    "jsonrpc": "2.0",
00:31:00.157    "method": "framework_start_init",
00:31:00.157    "id": 1
00:31:00.157  }
00:31:00.157  
00:31:00.157  INFO: Requests:
00:31:00.157  {
00:31:00.157    "jsonrpc": "2.0",
00:31:00.157    "method": "framework_start_init",
00:31:00.157    "id": 1
00:31:00.157  }
00:31:00.157  
00:31:00.157  [2024-12-09 04:21:28.706967] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled
00:31:00.157  INFO: response:
00:31:00.157  {
00:31:00.157    "jsonrpc": "2.0",
00:31:00.157    "id": 1,
00:31:00.157    "result": true
00:31:00.157  }
00:31:00.157  
00:31:00.157  INFO: response:
00:31:00.157  {
00:31:00.157    "jsonrpc": "2.0",
00:31:00.157    "id": 1,
00:31:00.157    "result": true
00:31:00.157  }
00:31:00.157  
00:31:00.157   04:21:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:00.157   04:21:28 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:31:00.158   04:21:28 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:00.158   04:21:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:31:00.158  INFO: Setting log level to 40
00:31:00.158  INFO: Setting log level to 40
00:31:00.158  INFO: Setting log level to 40
00:31:00.158  [2024-12-09 04:21:28.717085] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:00.158   04:21:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:00.158   04:21:28 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt
00:31:00.158   04:21:28 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:00.158   04:21:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:31:00.416   04:21:28 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0
00:31:00.416   04:21:28 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:00.416   04:21:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:31:03.700  Nvme0n1
00:31:03.700   04:21:31 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:03.700   04:21:31 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
00:31:03.700   04:21:31 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:03.700   04:21:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:31:03.700   04:21:31 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:03.700   04:21:31 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:31:03.700   04:21:31 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:03.700   04:21:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:31:03.700   04:21:31 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:03.700   04:21:31 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:03.700   04:21:31 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:03.700   04:21:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:31:03.700  [2024-12-09 04:21:31.612957] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:03.700   04:21:31 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:03.700   04:21:31 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems
00:31:03.700   04:21:31 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:03.700   04:21:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:31:03.700  [
00:31:03.700  {
00:31:03.700  "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:31:03.700  "subtype": "Discovery",
00:31:03.700  "listen_addresses": [],
00:31:03.700  "allow_any_host": true,
00:31:03.700  "hosts": []
00:31:03.700  },
00:31:03.700  {
00:31:03.700  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:31:03.700  "subtype": "NVMe",
00:31:03.700  "listen_addresses": [
00:31:03.700  {
00:31:03.700  "trtype": "TCP",
00:31:03.700  "adrfam": "IPv4",
00:31:03.700  "traddr": "10.0.0.2",
00:31:03.700  "trsvcid": "4420"
00:31:03.700  }
00:31:03.700  ],
00:31:03.700  "allow_any_host": true,
00:31:03.700  "hosts": [],
00:31:03.700  "serial_number": "SPDK00000000000001",
00:31:03.700  "model_number": "SPDK bdev Controller",
00:31:03.700  "max_namespaces": 1,
00:31:03.700  "min_cntlid": 1,
00:31:03.700  "max_cntlid": 65519,
00:31:03.700  "namespaces": [
00:31:03.700  {
00:31:03.700  "nsid": 1,
00:31:03.700  "bdev_name": "Nvme0n1",
00:31:03.700  "name": "Nvme0n1",
00:31:03.700  "nguid": "B7D66A4EC3A448028E2A9F1FD3D2414C",
00:31:03.700  "uuid": "b7d66a4e-c3a4-4802-8e2a-9f1fd3d2414c"
00:31:03.700  }
00:31:03.700  ]
00:31:03.700  }
00:31:03.700  ]
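The `nvmf_get_subsystems` reply above carries everything a client needs to connect: the NVMe subsystem's NQN and its TCP listener. A sketch of pulling those fields out with `jq` (the JSON is inlined and abbreviated here rather than fetched through `rpc.py`, which is how the test itself would obtain it):

```shell
#!/usr/bin/env bash
# Abbreviated copy of the nvmf_get_subsystems response from the trace.
subsystems='[
  {"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery",
   "listen_addresses": [], "allow_any_host": true, "hosts": []},
  {"nqn": "nqn.2016-06.io.spdk:cnode1", "subtype": "NVMe",
   "listen_addresses": [{"trtype": "TCP", "adrfam": "IPv4",
                         "traddr": "10.0.0.2", "trsvcid": "4420"}]}
]'

# Skip the discovery subsystem and format the first listener as addr:port.
listener=$(jq -r '.[] | select(.subtype == "NVMe")
                      | .listen_addresses[0]
                      | "\(.traddr):\(.trsvcid)"' <<<"$subsystems")

echo "$listener"
```

The resulting `10.0.0.2:4420` matches the `-a 10.0.0.2 -s 4420` arguments passed to `nvmf_subsystem_add_listener` earlier in the trace.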
00:31:03.700   04:21:31 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:03.700    04:21:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r '        trtype:tcp         adrfam:IPv4         traddr:10.0.0.2         trsvcid:4420         subnqn:nqn.2016-06.io.spdk:cnode1'
00:31:03.700    04:21:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:'
00:31:03.700    04:21:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}'
00:31:03.700   04:21:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN
00:31:03.700    04:21:31 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r '        trtype:tcp         adrfam:IPv4         traddr:10.0.0.2         trsvcid:4420         subnqn:nqn.2016-06.io.spdk:cnode1'
00:31:03.700    04:21:31 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:'
00:31:03.700    04:21:31 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}'
00:31:03.700   04:21:31 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL
00:31:03.700   04:21:31 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']'
00:31:03.700   04:21:31 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']'
00:31:03.700   04:21:31 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:03.700   04:21:31 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:03.701   04:21:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:31:03.701   04:21:31 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:03.701   04:21:31 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT
00:31:03.701   04:21:31 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini
00:31:03.701   04:21:31 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:03.701   04:21:31 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync
00:31:03.701   04:21:31 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:03.701   04:21:31 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e
00:31:03.701   04:21:31 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:03.701   04:21:31 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:03.701  rmmod nvme_tcp
00:31:03.701  rmmod nvme_fabrics
00:31:03.701  rmmod nvme_keyring
00:31:03.701   04:21:32 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:03.701   04:21:32 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e
00:31:03.701   04:21:32 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0
00:31:03.701   04:21:32 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 403429 ']'
00:31:03.701   04:21:32 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 403429
00:31:03.701   04:21:32 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 403429 ']'
00:31:03.701   04:21:32 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 403429
00:31:03.701    04:21:32 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname
00:31:03.701   04:21:32 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:03.701    04:21:32 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 403429
00:31:03.701   04:21:32 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:31:03.701   04:21:32 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:31:03.701   04:21:32 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 403429'
00:31:03.701  killing process with pid 403429
00:31:03.701   04:21:32 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 403429
00:31:03.701   04:21:32 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 403429
00:31:05.080   04:21:33 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:31:05.080   04:21:33 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:31:05.080   04:21:33 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:31:05.080   04:21:33 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr
00:31:05.080   04:21:33 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save
00:31:05.080   04:21:33 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:31:05.080   04:21:33 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore
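The `iptr` cleanup traced above is why the ACCEPT rule was installed with an `-m comment --comment 'SPDK_NVMF:...'` tag: teardown dumps the ruleset with `iptables-save`, drops every tagged line with `grep -v SPDK_NVMF`, and feeds the rest back to `iptables-restore`, leaving unrelated firewall rules untouched. Demonstrated here on canned text instead of the live firewall (the surrounding rules are invented examples):

```shell
#!/usr/bin/env bash
# Stand-in for `iptables-save` output: one SPDK-tagged rule among others.
saved_rules='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
-A INPUT -j DROP'

# Filter exactly as nvmf/common.sh@791 does; `iptables-restore` would
# then reload only the surviving lines.
kept=$(grep -v SPDK_NVMF <<<"$saved_rules")

echo "$kept"
```

Tagging rules with a fixed comment and filtering on it is a common pattern for making firewall changes reversible without tracking rule numbers.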
00:31:05.080   04:21:33 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:31:05.080   04:21:33 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns
00:31:05.080   04:21:33 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:05.080   04:21:33 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:31:05.080    04:21:33 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:07.614   04:21:35 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:31:07.614  
00:31:07.614  real	0m18.308s
00:31:07.614  user	0m26.409s
00:31:07.614  sys	0m3.184s
00:31:07.614   04:21:35 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:07.614   04:21:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:31:07.614  ************************************
00:31:07.614  END TEST nvmf_identify_passthru
00:31:07.614  ************************************
00:31:07.614   04:21:35  -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh
00:31:07.614   04:21:35  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:31:07.614   04:21:35  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:31:07.614   04:21:35  -- common/autotest_common.sh@10 -- # set +x
00:31:07.614  ************************************
00:31:07.614  START TEST nvmf_dif
00:31:07.614  ************************************
00:31:07.614   04:21:35 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh
00:31:07.614  * Looking for test storage...
00:31:07.614  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:31:07.614    04:21:35 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:31:07.614     04:21:35 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version
00:31:07.614     04:21:35 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:31:07.614    04:21:35 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:31:07.614    04:21:35 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:31:07.614    04:21:35 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l
00:31:07.614    04:21:35 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l
00:31:07.614    04:21:35 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-:
00:31:07.614    04:21:35 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1
00:31:07.614    04:21:35 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-:
00:31:07.614    04:21:35 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2
00:31:07.614    04:21:35 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<'
00:31:07.614    04:21:35 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2
00:31:07.614    04:21:35 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1
00:31:07.614    04:21:35 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:31:07.614    04:21:35 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in
00:31:07.614    04:21:35 nvmf_dif -- scripts/common.sh@345 -- # : 1
00:31:07.614    04:21:35 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 ))
00:31:07.614    04:21:35 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:31:07.614     04:21:35 nvmf_dif -- scripts/common.sh@365 -- # decimal 1
00:31:07.614     04:21:35 nvmf_dif -- scripts/common.sh@353 -- # local d=1
00:31:07.614     04:21:35 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:31:07.614     04:21:35 nvmf_dif -- scripts/common.sh@355 -- # echo 1
00:31:07.614    04:21:35 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1
00:31:07.614     04:21:35 nvmf_dif -- scripts/common.sh@366 -- # decimal 2
00:31:07.614     04:21:35 nvmf_dif -- scripts/common.sh@353 -- # local d=2
00:31:07.614     04:21:35 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:31:07.614     04:21:35 nvmf_dif -- scripts/common.sh@355 -- # echo 2
00:31:07.614    04:21:35 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2
00:31:07.614    04:21:35 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:31:07.614    04:21:35 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:31:07.614    04:21:35 nvmf_dif -- scripts/common.sh@368 -- # return 0
00:31:07.614    04:21:35 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:31:07.614    04:21:35 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:31:07.614  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:07.614  		--rc genhtml_branch_coverage=1
00:31:07.614  		--rc genhtml_function_coverage=1
00:31:07.614  		--rc genhtml_legend=1
00:31:07.614  		--rc geninfo_all_blocks=1
00:31:07.614  		--rc geninfo_unexecuted_blocks=1
00:31:07.614  		
00:31:07.614  		'
00:31:07.614    04:21:35 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:31:07.614  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:07.614  		--rc genhtml_branch_coverage=1
00:31:07.614  		--rc genhtml_function_coverage=1
00:31:07.614  		--rc genhtml_legend=1
00:31:07.614  		--rc geninfo_all_blocks=1
00:31:07.614  		--rc geninfo_unexecuted_blocks=1
00:31:07.614  		
00:31:07.614  		'
00:31:07.614    04:21:35 nvmf_dif -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:31:07.614  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:07.614  		--rc genhtml_branch_coverage=1
00:31:07.614  		--rc genhtml_function_coverage=1
00:31:07.614  		--rc genhtml_legend=1
00:31:07.614  		--rc geninfo_all_blocks=1
00:31:07.614  		--rc geninfo_unexecuted_blocks=1
00:31:07.614  		
00:31:07.614  		'
00:31:07.614    04:21:35 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:31:07.614  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:07.614  		--rc genhtml_branch_coverage=1
00:31:07.614  		--rc genhtml_function_coverage=1
00:31:07.614  		--rc genhtml_legend=1
00:31:07.614  		--rc geninfo_all_blocks=1
00:31:07.614  		--rc geninfo_unexecuted_blocks=1
00:31:07.614  		
00:31:07.614  		'
00:31:07.614   04:21:35 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:31:07.614     04:21:35 nvmf_dif -- nvmf/common.sh@7 -- # uname -s
00:31:07.614    04:21:35 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:31:07.614    04:21:35 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:31:07.614    04:21:35 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:31:07.614    04:21:35 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:31:07.614    04:21:35 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:31:07.614    04:21:35 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:31:07.614    04:21:35 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:31:07.614    04:21:35 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:31:07.614    04:21:35 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:31:07.614     04:21:35 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:31:07.614    04:21:35 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:31:07.614    04:21:35 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:31:07.614    04:21:35 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:31:07.614    04:21:35 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:31:07.614    04:21:35 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:31:07.614    04:21:35 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:31:07.614    04:21:35 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:31:07.614     04:21:35 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob
00:31:07.614     04:21:35 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:31:07.614     04:21:35 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:31:07.614     04:21:35 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:31:07.614      04:21:35 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:07.615      04:21:35 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:07.615      04:21:35 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:07.615      04:21:35 nvmf_dif -- paths/export.sh@5 -- # export PATH
00:31:07.615      04:21:35 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:07.615    04:21:35 nvmf_dif -- nvmf/common.sh@51 -- # : 0
00:31:07.615    04:21:35 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:31:07.615    04:21:35 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:31:07.615    04:21:35 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:31:07.615    04:21:35 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:31:07.615    04:21:35 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:31:07.615    04:21:35 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:31:07.615  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:31:07.615    04:21:35 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:31:07.615    04:21:35 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:31:07.615    04:21:35 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0
00:31:07.615   04:21:35 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16
00:31:07.615   04:21:35 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512
00:31:07.615   04:21:35 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64
00:31:07.615   04:21:35 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1
00:31:07.615   04:21:35 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit
00:31:07.615   04:21:35 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:31:07.615   04:21:35 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:31:07.615   04:21:35 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs
00:31:07.615   04:21:35 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no
00:31:07.615   04:21:35 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns
00:31:07.615   04:21:35 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:07.615   04:21:35 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:31:07.615    04:21:35 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:07.615   04:21:35 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:31:07.615   04:21:35 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:31:07.615   04:21:35 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable
00:31:07.615   04:21:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=()
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=()
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=()
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=()
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@320 -- # e810=()
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@321 -- # x722=()
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@322 -- # mlx=()
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:31:10.146  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:31:10.146  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]]
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:31:10.146  Found net devices under 0000:0a:00.0: cvl_0_0
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]]
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:31:10.146  Found net devices under 0000:0a:00.1: cvl_0_1
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:31:10.146  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:10.146  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms
00:31:10.146  
00:31:10.146  --- 10.0.0.2 ping statistics ---
00:31:10.146  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:10.146  rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:31:10.146  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:10.146  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms
00:31:10.146  
00:31:10.146  --- 10.0.0.1 ping statistics ---
00:31:10.146  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:10.146  rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@450 -- # return 0
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']'
00:31:10.146   04:21:38 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:31:10.713  0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:31:10.713  0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:31:10.713  0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:31:10.713  0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:31:10.713  0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:31:10.713  0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:31:10.713  0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:31:10.713  0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:31:10.713  0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:31:10.713  0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:31:10.713  0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:31:10.713  0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:31:10.713  0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:31:10.713  0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:31:10.713  0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:31:10.713  0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:31:10.713  0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:31:10.971   04:21:39 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:10.971   04:21:39 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:31:10.971   04:21:39 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:31:10.971   04:21:39 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:10.971   04:21:39 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:31:10.971   04:21:39 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:31:10.971   04:21:39 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip'
00:31:10.971   04:21:39 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart
00:31:10.971   04:21:39 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:31:10.971   04:21:39 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:10.971   04:21:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:31:10.971   04:21:39 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=406696
00:31:10.971   04:21:39 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:31:10.971   04:21:39 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 406696
00:31:10.971   04:21:39 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 406696 ']'
00:31:10.971   04:21:39 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:10.971   04:21:39 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:10.971   04:21:39 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:10.971  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:10.971   04:21:39 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:10.971   04:21:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:31:10.971  [2024-12-09 04:21:39.528929] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:31:10.971  [2024-12-09 04:21:39.529008] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:11.229  [2024-12-09 04:21:39.600511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:11.229  [2024-12-09 04:21:39.658177] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:11.229  [2024-12-09 04:21:39.658223] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:11.229  [2024-12-09 04:21:39.658252] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:11.229  [2024-12-09 04:21:39.658263] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:11.229  [2024-12-09 04:21:39.658280] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:11.229  [2024-12-09 04:21:39.658921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:11.229   04:21:39 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:11.229   04:21:39 nvmf_dif -- common/autotest_common.sh@868 -- # return 0
00:31:11.229   04:21:39 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:31:11.229   04:21:39 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:11.229   04:21:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:31:11.229   04:21:39 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:11.229   04:21:39 nvmf_dif -- target/dif.sh@139 -- # create_transport
00:31:11.229   04:21:39 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip
00:31:11.229   04:21:39 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:11.230   04:21:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:31:11.488  [2024-12-09 04:21:39.806694] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:11.488   04:21:39 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:11.488   04:21:39 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1
00:31:11.488   04:21:39 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:31:11.488   04:21:39 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable
00:31:11.488   04:21:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:31:11.488  ************************************
00:31:11.488  START TEST fio_dif_1_default
00:31:11.488  ************************************
00:31:11.488   04:21:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1
00:31:11.488   04:21:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0
00:31:11.488   04:21:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub
00:31:11.488   04:21:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@"
00:31:11.488   04:21:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0
00:31:11.488   04:21:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0
00:31:11.488   04:21:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:31:11.488   04:21:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:11.488   04:21:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:31:11.488  bdev_null0
00:31:11.488   04:21:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:11.488   04:21:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:31:11.488   04:21:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:11.488   04:21:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:31:11.488   04:21:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:11.488   04:21:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:31:11.488   04:21:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:11.488   04:21:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:31:11.488   04:21:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:11.488   04:21:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:31:11.488   04:21:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:11.488   04:21:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:31:11.488  [2024-12-09 04:21:39.862959] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:11.488   04:21:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:11.488   04:21:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62
00:31:11.488    04:21:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0
00:31:11.488    04:21:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0
00:31:11.488    04:21:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=()
00:31:11.488    04:21:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config
00:31:11.488    04:21:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:31:11.488    04:21:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:31:11.488  {
00:31:11.488    "params": {
00:31:11.488      "name": "Nvme$subsystem",
00:31:11.488      "trtype": "$TEST_TRANSPORT",
00:31:11.488      "traddr": "$NVMF_FIRST_TARGET_IP",
00:31:11.488      "adrfam": "ipv4",
00:31:11.488      "trsvcid": "$NVMF_PORT",
00:31:11.488      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:31:11.488      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:31:11.488      "hdgst": ${hdgst:-false},
00:31:11.488      "ddgst": ${ddgst:-false}
00:31:11.488    },
00:31:11.488    "method": "bdev_nvme_attach_controller"
00:31:11.488  }
00:31:11.488  EOF
00:31:11.488  )")
00:31:11.488   04:21:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:31:11.488   04:21:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:31:11.489   04:21:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:31:11.489   04:21:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:31:11.489   04:21:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers
00:31:11.489   04:21:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:31:11.489   04:21:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift
00:31:11.489    04:21:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf
00:31:11.489   04:21:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib=
00:31:11.489   04:21:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:31:11.489    04:21:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file
00:31:11.489    04:21:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat
00:31:11.489     04:21:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat
00:31:11.489    04:21:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:31:11.489    04:21:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan
00:31:11.489    04:21:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:31:11.489    04:21:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 ))
00:31:11.489    04:21:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files ))
00:31:11.489    04:21:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq .
00:31:11.489     04:21:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=,
00:31:11.489     04:21:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:31:11.489    "params": {
00:31:11.489      "name": "Nvme0",
00:31:11.489      "trtype": "tcp",
00:31:11.489      "traddr": "10.0.0.2",
00:31:11.489      "adrfam": "ipv4",
00:31:11.489      "trsvcid": "4420",
00:31:11.489      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:31:11.489      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:31:11.489      "hdgst": false,
00:31:11.489      "ddgst": false
00:31:11.489    },
00:31:11.489    "method": "bdev_nvme_attach_controller"
00:31:11.489  }'
00:31:11.489   04:21:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib=
00:31:11.489   04:21:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:31:11.489   04:21:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:31:11.489    04:21:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:31:11.489    04:21:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:31:11.489    04:21:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:31:11.489   04:21:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib=
00:31:11.489   04:21:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:31:11.489   04:21:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:31:11.489   04:21:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:31:11.747  filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:31:11.747  fio-3.35
00:31:11.747  Starting 1 thread
00:31:23.948  
00:31:23.948  filename0: (groupid=0, jobs=1): err= 0: pid=406924: Mon Dec  9 04:21:50 2024
00:31:23.948    read: IOPS=100, BW=401KiB/s (410kB/s)(4016KiB/10018msec)
00:31:23.948      slat (usec): min=5, max=112, avg= 9.22, stdev= 5.00
00:31:23.948      clat (usec): min=554, max=45171, avg=39879.98, stdev=6656.39
00:31:23.948       lat (usec): min=561, max=45239, avg=39889.20, stdev=6656.19
00:31:23.948      clat percentiles (usec):
00:31:23.948       |  1.00th=[  619],  5.00th=[40633], 10.00th=[41157], 20.00th=[41157],
00:31:23.948       | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:31:23.949       | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:31:23.949       | 99.00th=[41681], 99.50th=[41681], 99.90th=[45351], 99.95th=[45351],
00:31:23.949       | 99.99th=[45351]
00:31:23.949     bw (  KiB/s): min=  384, max=  448, per=99.53%, avg=400.00, stdev=22.02, samples=20
00:31:23.949     iops        : min=   96, max=  112, avg=100.00, stdev= 5.51, samples=20
00:31:23.949    lat (usec)   : 750=2.79%
00:31:23.949    lat (msec)   : 50=97.21%
00:31:23.949    cpu          : usr=90.53%, sys=9.18%, ctx=14, majf=0, minf=220
00:31:23.949    IO depths    : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:31:23.949       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:23.949       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:23.949       issued rwts: total=1004,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:23.949       latency   : target=0, window=0, percentile=100.00%, depth=4
00:31:23.949  
00:31:23.949  Run status group 0 (all jobs):
00:31:23.949     READ: bw=401KiB/s (410kB/s), 401KiB/s-401KiB/s (410kB/s-410kB/s), io=4016KiB (4112kB), run=10018-10018msec
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@"
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:23.949  
00:31:23.949  real	0m11.252s
00:31:23.949  user	0m10.353s
00:31:23.949  sys	0m1.181s
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:31:23.949  ************************************
00:31:23.949  END TEST fio_dif_1_default
00:31:23.949  ************************************
00:31:23.949   04:21:51 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems
00:31:23.949   04:21:51 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:31:23.949   04:21:51 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable
00:31:23.949   04:21:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:31:23.949  ************************************
00:31:23.949  START TEST fio_dif_1_multi_subsystems
00:31:23.949  ************************************
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@"
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:31:23.949  bdev_null0
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:31:23.949  [2024-12-09 04:21:51.153608] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@"
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:31:23.949  bdev_null1
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62
00:31:23.949    04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1
00:31:23.949    04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1
00:31:23.949    04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=()
00:31:23.949    04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config
00:31:23.949    04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:31:23.949   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:31:23.949    04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:31:23.949  {
00:31:23.949    "params": {
00:31:23.949      "name": "Nvme$subsystem",
00:31:23.949      "trtype": "$TEST_TRANSPORT",
00:31:23.949      "traddr": "$NVMF_FIRST_TARGET_IP",
00:31:23.949      "adrfam": "ipv4",
00:31:23.949      "trsvcid": "$NVMF_PORT",
00:31:23.949      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:31:23.949      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:31:23.949      "hdgst": ${hdgst:-false},
00:31:23.950      "ddgst": ${ddgst:-false}
00:31:23.950    },
00:31:23.950    "method": "bdev_nvme_attach_controller"
00:31:23.950  }
00:31:23.950  EOF
00:31:23.950  )")
00:31:23.950   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:31:23.950    04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf
00:31:23.950    04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file
00:31:23.950   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:31:23.950    04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat
00:31:23.950   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:31:23.950   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers
00:31:23.950   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:31:23.950   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift
00:31:23.950   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib=
00:31:23.950   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:31:23.950     04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat
00:31:23.950    04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:31:23.950    04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 ))
00:31:23.950    04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files ))
00:31:23.950    04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan
00:31:23.950    04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat
00:31:23.950    04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:31:23.950    04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:31:23.950    04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:31:23.950  {
00:31:23.950    "params": {
00:31:23.950      "name": "Nvme$subsystem",
00:31:23.950      "trtype": "$TEST_TRANSPORT",
00:31:23.950      "traddr": "$NVMF_FIRST_TARGET_IP",
00:31:23.950      "adrfam": "ipv4",
00:31:23.950      "trsvcid": "$NVMF_PORT",
00:31:23.950      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:31:23.950      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:31:23.950      "hdgst": ${hdgst:-false},
00:31:23.950      "ddgst": ${ddgst:-false}
00:31:23.950    },
00:31:23.950    "method": "bdev_nvme_attach_controller"
00:31:23.950  }
00:31:23.950  EOF
00:31:23.950  )")
00:31:23.950     04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat
00:31:23.950    04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ ))
00:31:23.950    04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files ))
00:31:23.950    04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq .
00:31:23.950     04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=,
00:31:23.950     04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:31:23.950    "params": {
00:31:23.950      "name": "Nvme0",
00:31:23.950      "trtype": "tcp",
00:31:23.950      "traddr": "10.0.0.2",
00:31:23.950      "adrfam": "ipv4",
00:31:23.950      "trsvcid": "4420",
00:31:23.950      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:31:23.950      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:31:23.950      "hdgst": false,
00:31:23.950      "ddgst": false
00:31:23.950    },
00:31:23.950    "method": "bdev_nvme_attach_controller"
00:31:23.950  },{
00:31:23.950    "params": {
00:31:23.950      "name": "Nvme1",
00:31:23.950      "trtype": "tcp",
00:31:23.950      "traddr": "10.0.0.2",
00:31:23.950      "adrfam": "ipv4",
00:31:23.950      "trsvcid": "4420",
00:31:23.950      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:31:23.950      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:31:23.950      "hdgst": false,
00:31:23.950      "ddgst": false
00:31:23.950    },
00:31:23.950    "method": "bdev_nvme_attach_controller"
00:31:23.950  }'
00:31:23.950   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib=
00:31:23.950   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:31:23.950   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:31:23.950    04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:31:23.950    04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:31:23.950    04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:31:23.950   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib=
00:31:23.950   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:31:23.950   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:31:23.950   04:21:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:31:23.950  filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:31:23.950  filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:31:23.950  fio-3.35
00:31:23.950  Starting 2 threads
00:31:33.915  
00:31:33.915  filename0: (groupid=0, jobs=1): err= 0: pid=408334: Mon Dec  9 04:22:02 2024
00:31:33.915    read: IOPS=95, BW=384KiB/s (393kB/s)(3840KiB/10012msec)
00:31:33.915      slat (nsec): min=4555, max=30922, avg=9948.10, stdev=3048.75
00:31:33.915      clat (usec): min=664, max=46323, avg=41682.34, stdev=2691.02
00:31:33.915       lat (usec): min=672, max=46354, avg=41692.29, stdev=2691.05
00:31:33.915      clat percentiles (usec):
00:31:33.915       |  1.00th=[41157],  5.00th=[41157], 10.00th=[41157], 20.00th=[41681],
00:31:33.915       | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206],
00:31:33.915       | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:31:33.915       | 99.00th=[42206], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400],
00:31:33.915       | 99.99th=[46400]
00:31:33.915     bw (  KiB/s): min=  352, max=  416, per=46.17%, avg=382.40, stdev=12.61, samples=20
00:31:33.915     iops        : min=   88, max=  104, avg=95.60, stdev= 3.15, samples=20
00:31:33.915    lat (usec)   : 750=0.42%
00:31:33.915    lat (msec)   : 50=99.58%
00:31:33.915    cpu          : usr=94.40%, sys=5.18%, ctx=26, majf=0, minf=154
00:31:33.915    IO depths    : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:31:33.915       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:33.915       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:33.915       issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:33.915       latency   : target=0, window=0, percentile=100.00%, depth=4
00:31:33.915  filename1: (groupid=0, jobs=1): err= 0: pid=408335: Mon Dec  9 04:22:02 2024
00:31:33.915    read: IOPS=111, BW=444KiB/s (455kB/s)(4448KiB/10018msec)
00:31:33.915      slat (nsec): min=4320, max=28564, avg=9777.12, stdev=2839.53
00:31:33.915      clat (usec): min=583, max=46373, avg=36003.21, stdev=13637.41
00:31:33.915       lat (usec): min=591, max=46387, avg=36012.99, stdev=13637.34
00:31:33.915      clat percentiles (usec):
00:31:33.915       |  1.00th=[  603],  5.00th=[  627], 10.00th=[  693], 20.00th=[41157],
00:31:33.915       | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:31:33.915       | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42730],
00:31:33.915       | 99.00th=[42730], 99.50th=[42730], 99.90th=[46400], 99.95th=[46400],
00:31:33.915       | 99.99th=[46400]
00:31:33.915     bw (  KiB/s): min=  384, max=  608, per=53.55%, avg=443.20, stdev=68.30, samples=20
00:31:33.915     iops        : min=   96, max=  152, avg=110.80, stdev=17.07, samples=20
00:31:33.915    lat (usec)   : 750=10.43%, 1000=2.34%
00:31:33.915    lat (msec)   : 2=0.18%, 50=87.05%
00:31:33.915    cpu          : usr=94.31%, sys=5.39%, ctx=32, majf=0, minf=135
00:31:33.915    IO depths    : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:31:33.915       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:33.915       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:33.915       issued rwts: total=1112,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:33.915       latency   : target=0, window=0, percentile=100.00%, depth=4
00:31:33.915  
00:31:33.915  Run status group 0 (all jobs):
00:31:33.915     READ: bw=827KiB/s (847kB/s), 384KiB/s-444KiB/s (393kB/s-455kB/s), io=8288KiB (8487kB), run=10012-10018msec
00:31:34.173   04:22:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1
00:31:34.173   04:22:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub
00:31:34.173   04:22:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@"
00:31:34.173   04:22:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0
00:31:34.173   04:22:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0
00:31:34.173   04:22:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:31:34.173   04:22:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:34.173   04:22:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:31:34.173   04:22:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:34.173   04:22:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:31:34.174   04:22:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:34.174   04:22:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:31:34.174   04:22:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:34.174   04:22:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@"
00:31:34.174   04:22:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1
00:31:34.174   04:22:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1
00:31:34.174   04:22:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:34.174   04:22:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:34.174   04:22:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:31:34.174   04:22:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:34.174   04:22:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:31:34.174   04:22:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:34.174   04:22:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:31:34.174   04:22:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:34.174  
00:31:34.174  real	0m11.519s
00:31:34.174  user	0m20.367s
00:31:34.174  sys	0m1.366s
00:31:34.174   04:22:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:34.174   04:22:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:31:34.174  ************************************
00:31:34.174  END TEST fio_dif_1_multi_subsystems
00:31:34.174  ************************************
00:31:34.174   04:22:02 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params
00:31:34.174   04:22:02 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:31:34.174   04:22:02 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable
00:31:34.174   04:22:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:31:34.174  ************************************
00:31:34.174  START TEST fio_dif_rand_params
00:31:34.174  ************************************
00:31:34.174   04:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params
00:31:34.174   04:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF
00:31:34.174   04:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files
00:31:34.174   04:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3
00:31:34.174   04:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k
00:31:34.174   04:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3
00:31:34.174   04:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3
00:31:34.174   04:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5
00:31:34.174   04:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0
00:31:34.174   04:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:31:34.174   04:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:31:34.174   04:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:31:34.174   04:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:31:34.174   04:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
00:31:34.174   04:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:34.174   04:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:34.174  bdev_null0
00:31:34.174   04:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:34.174   04:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:31:34.174   04:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:34.174   04:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:34.174   04:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:34.174   04:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:31:34.174   04:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:34.174   04:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:34.174   04:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:34.174   04:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:31:34.174   04:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:34.174   04:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:34.174  [2024-12-09 04:22:02.725825] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:34.174   04:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:34.174   04:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62
00:31:34.174    04:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0
00:31:34.174    04:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0
00:31:34.174    04:22:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=()
00:31:34.174    04:22:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config
00:31:34.174    04:22:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:31:34.174   04:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:31:34.174    04:22:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:31:34.174  {
00:31:34.174    "params": {
00:31:34.174      "name": "Nvme$subsystem",
00:31:34.174      "trtype": "$TEST_TRANSPORT",
00:31:34.174      "traddr": "$NVMF_FIRST_TARGET_IP",
00:31:34.174      "adrfam": "ipv4",
00:31:34.174      "trsvcid": "$NVMF_PORT",
00:31:34.174      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:31:34.174      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:31:34.174      "hdgst": ${hdgst:-false},
00:31:34.174      "ddgst": ${ddgst:-false}
00:31:34.174    },
00:31:34.174    "method": "bdev_nvme_attach_controller"
00:31:34.174  }
00:31:34.174  EOF
00:31:34.174  )")
00:31:34.174   04:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:31:34.174    04:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:31:34.174   04:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:31:34.174    04:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:31:34.174   04:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:31:34.174   04:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers
00:31:34.174    04:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:31:34.174   04:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:31:34.174   04:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift
00:31:34.174   04:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib=
00:31:34.174   04:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:31:34.174     04:22:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:31:34.174    04:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:31:34.174    04:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:31:34.174    04:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:31:34.174    04:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan
00:31:34.174    04:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:31:34.174    04:22:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq .
00:31:34.174     04:22:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=,
00:31:34.174     04:22:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:31:34.174    "params": {
00:31:34.174      "name": "Nvme0",
00:31:34.174      "trtype": "tcp",
00:31:34.174      "traddr": "10.0.0.2",
00:31:34.174      "adrfam": "ipv4",
00:31:34.174      "trsvcid": "4420",
00:31:34.174      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:31:34.174      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:31:34.174      "hdgst": false,
00:31:34.174      "ddgst": false
00:31:34.174    },
00:31:34.174    "method": "bdev_nvme_attach_controller"
00:31:34.174  }'
00:31:34.432   04:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:31:34.432   04:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:31:34.432   04:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:31:34.432    04:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:31:34.432    04:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:31:34.432    04:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:31:34.432   04:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:31:34.432   04:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:31:34.432   04:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:31:34.432   04:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:31:34.432  filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3
00:31:34.432  ...
00:31:34.432  fio-3.35
00:31:34.432  Starting 3 threads
00:31:41.008  
00:31:41.008  filename0: (groupid=0, jobs=1): err= 0: pid=409730: Mon Dec  9 04:22:08 2024
00:31:41.008    read: IOPS=221, BW=27.7MiB/s (29.0MB/s)(139MiB/5004msec)
00:31:41.008      slat (nsec): min=4462, max=31607, avg=14134.74, stdev=2203.46
00:31:41.008      clat (usec): min=6616, max=96083, avg=13516.41, stdev=6918.41
00:31:41.008       lat (usec): min=6629, max=96097, avg=13530.54, stdev=6918.25
00:31:41.008      clat percentiles (usec):
00:31:41.008       |  1.00th=[ 7898],  5.00th=[ 9503], 10.00th=[10290], 20.00th=[11076],
00:31:41.008       | 30.00th=[11600], 40.00th=[12125], 50.00th=[12387], 60.00th=[12780],
00:31:41.008       | 70.00th=[13304], 80.00th=[14091], 90.00th=[15270], 95.00th=[16319],
00:31:41.008       | 99.00th=[52691], 99.50th=[54264], 99.90th=[55313], 99.95th=[95945],
00:31:41.008       | 99.99th=[95945]
00:31:41.008     bw (  KiB/s): min=22016, max=32000, per=32.98%, avg=28339.20, stdev=3398.47, samples=10
00:31:41.008     iops        : min=  172, max=  250, avg=221.40, stdev=26.55, samples=10
00:31:41.008    lat (msec)   : 10=8.21%, 20=89.18%, 50=0.63%, 100=1.98%
00:31:41.008    cpu          : usr=94.24%, sys=5.28%, ctx=6, majf=0, minf=88
00:31:41.008    IO depths    : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:31:41.008       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:41.008       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:41.008       issued rwts: total=1109,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:41.008       latency   : target=0, window=0, percentile=100.00%, depth=3
00:31:41.008  filename0: (groupid=0, jobs=1): err= 0: pid=409731: Mon Dec  9 04:22:08 2024
00:31:41.008    read: IOPS=225, BW=28.1MiB/s (29.5MB/s)(142MiB/5046msec)
00:31:41.008      slat (nsec): min=4378, max=49188, avg=15151.34, stdev=3144.42
00:31:41.008      clat (usec): min=4770, max=53061, avg=13268.00, stdev=4667.99
00:31:41.008       lat (usec): min=4783, max=53075, avg=13283.15, stdev=4668.01
00:31:41.008      clat percentiles (usec):
00:31:41.008       |  1.00th=[ 5014],  5.00th=[ 8717], 10.00th=[ 9503], 20.00th=[11469],
00:31:41.008       | 30.00th=[11994], 40.00th=[12518], 50.00th=[13042], 60.00th=[13566],
00:31:41.008       | 70.00th=[14091], 80.00th=[14746], 90.00th=[15664], 95.00th=[16581],
00:31:41.008       | 99.00th=[47973], 99.50th=[49546], 99.90th=[51643], 99.95th=[53216],
00:31:41.008       | 99.99th=[53216]
00:31:41.008     bw (  KiB/s): min=26368, max=31744, per=33.76%, avg=29004.80, stdev=1736.49, samples=10
00:31:41.008     iops        : min=  206, max=  248, avg=226.60, stdev=13.57, samples=10
00:31:41.008    lat (msec)   : 10=12.32%, 20=86.36%, 50=1.06%, 100=0.26%
00:31:41.008    cpu          : usr=88.70%, sys=8.03%, ctx=280, majf=0, minf=115
00:31:41.008    IO depths    : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:31:41.008       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:41.008       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:41.008       issued rwts: total=1136,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:41.008       latency   : target=0, window=0, percentile=100.00%, depth=3
00:31:41.008  filename0: (groupid=0, jobs=1): err= 0: pid=409732: Mon Dec  9 04:22:08 2024
00:31:41.008    read: IOPS=226, BW=28.3MiB/s (29.7MB/s)(143MiB/5045msec)
00:31:41.008      slat (nsec): min=4425, max=24442, avg=14018.58, stdev=1379.54
00:31:41.008      clat (usec): min=4939, max=57874, avg=13200.02, stdev=6038.80
00:31:41.008       lat (usec): min=4951, max=57887, avg=13214.04, stdev=6038.61
00:31:41.008      clat percentiles (usec):
00:31:41.008       |  1.00th=[ 7439],  5.00th=[ 8848], 10.00th=[10028], 20.00th=[11076],
00:31:41.008       | 30.00th=[11600], 40.00th=[11994], 50.00th=[12518], 60.00th=[12911],
00:31:41.008       | 70.00th=[13304], 80.00th=[13960], 90.00th=[14877], 95.00th=[15664],
00:31:41.008       | 99.00th=[51643], 99.50th=[52691], 99.90th=[56361], 99.95th=[57934],
00:31:41.008       | 99.99th=[57934]
00:31:41.008     bw (  KiB/s): min=18944, max=33792, per=33.94%, avg=29158.40, stdev=3946.52, samples=10
00:31:41.008     iops        : min=  148, max=  264, avg=227.80, stdev=30.83, samples=10
00:31:41.008    lat (msec)   : 10=9.46%, 20=88.27%, 50=0.96%, 100=1.31%
00:31:41.008    cpu          : usr=92.76%, sys=6.76%, ctx=6, majf=0, minf=70
00:31:41.008    IO depths    : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:31:41.008       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:41.008       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:41.008       issued rwts: total=1142,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:41.008       latency   : target=0, window=0, percentile=100.00%, depth=3
00:31:41.008  
00:31:41.008  Run status group 0 (all jobs):
00:31:41.008     READ: bw=83.9MiB/s (88.0MB/s), 27.7MiB/s-28.3MiB/s (29.0MB/s-29.7MB/s), io=423MiB (444MB), run=5004-5046msec
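Before each fio launch, the trace shows autotest_common.sh probing the spdk_bdev fio plugin with `ldd` for an ASan runtime (`libasan`, then `libclang_rt.asan`) so the sanitizer library can be placed ahead of the plugin in `LD_PRELOAD`; here both probes come back empty, so `LD_PRELOAD` ends up containing only the plugin path. A sketch of that probe, under the assumption that the function name is hypothetical and the plugin path is whatever binary you pass in:

```shell
# Sketch of the sanitizer-preload probe seen in the trace: for each known
# ASan runtime name, ask ldd where the plugin resolved it (third field of
# the ldd output line) and print the first hit, if any.
find_asan_preload() {
    local plugin=$1 sanitizer asan_lib=
    local sanitizers=('libasan' 'libclang_rt.asan')
    for sanitizer in "${sanitizers[@]}"; do
        asan_lib=$(ldd "$plugin" 2>/dev/null | grep "$sanitizer" | awk '{print $3}')
        if [[ -n "$asan_lib" ]]; then
            printf '%s\n' "$asan_lib"
            return 0
        fi
    done
    return 1
}

# Usage (paths illustrative): preload the sanitizer (if found) plus the
# plugin itself, matching the LD_PRELOAD line in the trace above:
# LD_PRELOAD="$(find_asan_preload ./spdk_bdev) ./spdk_bdev" \
#     /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
```

Preloading the sanitizer runtime first is required because fio itself is not built with ASan; without it, the instrumented plugin would fail to resolve sanitizer symbols at load time.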
00:31:41.008   04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0
00:31:41.008   04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:31:41.008   04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:31:41.008   04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:31:41.008   04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:31:41.008   04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:31:41.008   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:41.008   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:41.008   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:41.008   04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:31:41.008   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:41.008   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:41.008   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:41.008   04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2
00:31:41.008   04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k
00:31:41.008   04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8
00:31:41.008   04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16
00:31:41.008   04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime=
00:31:41.008   04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2
00:31:41.008   04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2
00:31:41.008   04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:31:41.008   04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:31:41.008   04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:31:41.008   04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:31:41.008   04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
00:31:41.008   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:41.008   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:41.009  bdev_null0
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:41.009  [2024-12-09 04:22:08.845927] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:41.009  bdev_null1
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:41.009  bdev_null2
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
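Each `create_subsystem N` iteration above boils down to four RPCs: create a 64 MiB null bdev with 512-byte blocks, 16 bytes of per-block metadata and DIF type 2; create an NVMe-oF subsystem; attach the bdev as a namespace; and add a TCP listener. A sketch of one iteration as explicit `rpc.py` calls (by default the commands are only echoed; point `RPC` at SPDK's `scripts/rpc.py` to run them against a live nvmf target, address and port taken from this log):

```shell
# Dry-run by default: RPC expands to "echo rpc.py", so each command line
# is printed rather than executed. Set RPC=path/to/scripts/rpc.py to run
# the same sequence against a running SPDK nvmf target.
RPC=${RPC:-echo rpc.py}

create_subsystem() {
    local sub_id=$1 nqn="nqn.2016-06.io.spdk:cnode$1"
    # 64 MiB null bdev, 512 B blocks, 16 B metadata, DIF type 2
    $RPC bdev_null_create "bdev_null$sub_id" 64 512 --md-size 16 --dif-type 2
    $RPC nvmf_create_subsystem "$nqn" --serial-number "53313233-$sub_id" --allow-any-host
    $RPC nvmf_subsystem_add_ns "$nqn" "bdev_null$sub_id"
    $RPC nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
}

create_subsystem 0
```

The null bdev backend discards writes and returns zeroes on reads, which is why these DIF-parameterized fio runs measure the transport and protection-information path rather than any real media.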
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62
00:31:41.009    04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2
00:31:41.009    04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2
00:31:41.009    04:22:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=()
00:31:41.009    04:22:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config
00:31:41.009    04:22:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:31:41.009    04:22:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:31:41.009  {
00:31:41.009    "params": {
00:31:41.009      "name": "Nvme$subsystem",
00:31:41.009      "trtype": "$TEST_TRANSPORT",
00:31:41.009      "traddr": "$NVMF_FIRST_TARGET_IP",
00:31:41.009      "adrfam": "ipv4",
00:31:41.009      "trsvcid": "$NVMF_PORT",
00:31:41.009      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:31:41.009      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:31:41.009      "hdgst": ${hdgst:-false},
00:31:41.009      "ddgst": ${ddgst:-false}
00:31:41.009    },
00:31:41.009    "method": "bdev_nvme_attach_controller"
00:31:41.009  }
00:31:41.009  EOF
00:31:41.009  )")
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:31:41.009    04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib=
00:31:41.009    04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:31:41.009   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:31:41.009    04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:31:41.009     04:22:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:31:41.009    04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:31:41.009    04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan
00:31:41.009    04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:31:41.009    04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:31:41.009    04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:31:41.009    04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:31:41.009    04:22:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:31:41.009    04:22:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:31:41.009  {
00:31:41.009    "params": {
00:31:41.009      "name": "Nvme$subsystem",
00:31:41.009      "trtype": "$TEST_TRANSPORT",
00:31:41.009      "traddr": "$NVMF_FIRST_TARGET_IP",
00:31:41.009      "adrfam": "ipv4",
00:31:41.009      "trsvcid": "$NVMF_PORT",
00:31:41.009      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:31:41.009      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:31:41.009      "hdgst": ${hdgst:-false},
00:31:41.009      "ddgst": ${ddgst:-false}
00:31:41.009    },
00:31:41.009    "method": "bdev_nvme_attach_controller"
00:31:41.009  }
00:31:41.009  EOF
00:31:41.009  )")
00:31:41.009     04:22:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:31:41.009    04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:31:41.009    04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:31:41.009    04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:31:41.009    04:22:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:31:41.009    04:22:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:31:41.009  {
00:31:41.009    "params": {
00:31:41.009      "name": "Nvme$subsystem",
00:31:41.009      "trtype": "$TEST_TRANSPORT",
00:31:41.009      "traddr": "$NVMF_FIRST_TARGET_IP",
00:31:41.009      "adrfam": "ipv4",
00:31:41.009      "trsvcid": "$NVMF_PORT",
00:31:41.009      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:31:41.009      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:31:41.009      "hdgst": ${hdgst:-false},
00:31:41.009      "ddgst": ${ddgst:-false}
00:31:41.009    },
00:31:41.009    "method": "bdev_nvme_attach_controller"
00:31:41.009  }
00:31:41.009  EOF
00:31:41.009  )")
00:31:41.009    04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:31:41.009    04:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:31:41.009     04:22:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:31:41.009    04:22:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq .
00:31:41.010     04:22:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=,
00:31:41.010     04:22:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:31:41.010    "params": {
00:31:41.010      "name": "Nvme0",
00:31:41.010      "trtype": "tcp",
00:31:41.010      "traddr": "10.0.0.2",
00:31:41.010      "adrfam": "ipv4",
00:31:41.010      "trsvcid": "4420",
00:31:41.010      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:31:41.010      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:31:41.010      "hdgst": false,
00:31:41.010      "ddgst": false
00:31:41.010    },
00:31:41.010    "method": "bdev_nvme_attach_controller"
00:31:41.010  },{
00:31:41.010    "params": {
00:31:41.010      "name": "Nvme1",
00:31:41.010      "trtype": "tcp",
00:31:41.010      "traddr": "10.0.0.2",
00:31:41.010      "adrfam": "ipv4",
00:31:41.010      "trsvcid": "4420",
00:31:41.010      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:31:41.010      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:31:41.010      "hdgst": false,
00:31:41.010      "ddgst": false
00:31:41.010    },
00:31:41.010    "method": "bdev_nvme_attach_controller"
00:31:41.010  },{
00:31:41.010    "params": {
00:31:41.010      "name": "Nvme2",
00:31:41.010      "trtype": "tcp",
00:31:41.010      "traddr": "10.0.0.2",
00:31:41.010      "adrfam": "ipv4",
00:31:41.010      "trsvcid": "4420",
00:31:41.010      "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:31:41.010      "hostnqn": "nqn.2016-06.io.spdk:host2",
00:31:41.010      "hdgst": false,
00:31:41.010      "ddgst": false
00:31:41.010    },
00:31:41.010    "method": "bdev_nvme_attach_controller"
00:31:41.010  }'
00:31:41.010   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:31:41.010   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:31:41.010   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:31:41.010    04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:31:41.010    04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:31:41.010    04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:31:41.010   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:31:41.010   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:31:41.010   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:31:41.010   04:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:31:41.010  filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:31:41.010  ...
00:31:41.010  filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:31:41.010  ...
00:31:41.010  filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:31:41.010  ...
00:31:41.010  fio-3.35
00:31:41.010  Starting 24 threads
00:31:53.306  
00:31:53.306  filename0: (groupid=0, jobs=1): err= 0: pid=410594: Mon Dec  9 04:22:20 2024
00:31:53.306    read: IOPS=466, BW=1867KiB/s (1911kB/s)(18.2MiB/10012msec)
00:31:53.306      slat (usec): min=5, max=121, avg=16.37, stdev=11.55
00:31:53.306      clat (usec): min=17813, max=50875, avg=34140.27, stdev=1714.42
00:31:53.306       lat (usec): min=17858, max=50911, avg=34156.64, stdev=1713.59
00:31:53.306      clat percentiles (usec):
00:31:53.306       |  1.00th=[33424],  5.00th=[33424], 10.00th=[33817], 20.00th=[33817],
00:31:53.306       | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817],
00:31:53.306       | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390],
00:31:53.306       | 99.00th=[42206], 99.50th=[43779], 99.90th=[50594], 99.95th=[50594],
00:31:53.306       | 99.99th=[51119]
00:31:53.306     bw (  KiB/s): min= 1792, max= 1920, per=4.23%, avg=1862.40, stdev=65.33, samples=20
00:31:53.306     iops        : min=  448, max=  480, avg=465.60, stdev=16.33, samples=20
00:31:53.306    lat (msec)   : 20=0.34%, 50=99.32%, 100=0.34%
00:31:53.306    cpu          : usr=97.01%, sys=1.91%, ctx=169, majf=0, minf=47
00:31:53.306    IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:31:53.306       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.306       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.306       issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:53.306       latency   : target=0, window=0, percentile=100.00%, depth=16
00:31:53.306  filename0: (groupid=0, jobs=1): err= 0: pid=410595: Mon Dec  9 04:22:20 2024
00:31:53.306    read: IOPS=457, BW=1830KiB/s (1874kB/s)(18.2MiB/10178msec)
00:31:53.306      slat (nsec): min=4084, max=72154, avg=30663.92, stdev=10649.18
00:31:53.306      clat (msec): min=28, max=200, avg=34.71, stdev= 9.77
00:31:53.306       lat (msec): min=28, max=200, avg=34.74, stdev= 9.77
00:31:53.306      clat percentiles (msec):
00:31:53.306       |  1.00th=[   34],  5.00th=[   34], 10.00th=[   34], 20.00th=[   34],
00:31:53.306       | 30.00th=[   34], 40.00th=[   34], 50.00th=[   34], 60.00th=[   34],
00:31:53.306       | 70.00th=[   34], 80.00th=[   35], 90.00th=[   35], 95.00th=[   36],
00:31:53.306       | 99.00th=[   44], 99.50th=[   65], 99.90th=[  197], 99.95th=[  197],
00:31:53.306       | 99.99th=[  201]
00:31:53.306     bw (  KiB/s): min= 1664, max= 1920, per=4.21%, avg=1855.45, stdev=78.21, samples=20
00:31:53.306     iops        : min=  416, max=  480, avg=463.85, stdev=19.56, samples=20
00:31:53.306    lat (msec)   : 50=99.31%, 100=0.34%, 250=0.34%
00:31:53.306    cpu          : usr=98.18%, sys=1.42%, ctx=23, majf=0, minf=18
00:31:53.306    IO depths    : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0%
00:31:53.306       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.306       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.306       issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:53.306       latency   : target=0, window=0, percentile=100.00%, depth=16
00:31:53.306  filename0: (groupid=0, jobs=1): err= 0: pid=410596: Mon Dec  9 04:22:20 2024
00:31:53.306    read: IOPS=462, BW=1852KiB/s (1896kB/s)(18.1MiB/10024msec)
00:31:53.306      slat (usec): min=8, max=111, avg=41.68, stdev=27.59
00:31:53.306      clat (usec): min=16251, max=78695, avg=34191.55, stdev=3685.36
00:31:53.306       lat (usec): min=16263, max=78717, avg=34233.23, stdev=3682.74
00:31:53.306      clat percentiles (usec):
00:31:53.306       |  1.00th=[32637],  5.00th=[32900], 10.00th=[33162], 20.00th=[33424],
00:31:53.306       | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817],
00:31:53.306       | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[35390],
00:31:53.306       | 99.00th=[44303], 99.50th=[71828], 99.90th=[78119], 99.95th=[79168],
00:31:53.306       | 99.99th=[79168]
00:31:53.306     bw (  KiB/s): min= 1664, max= 1920, per=4.20%, avg=1849.60, stdev=87.85, samples=20
00:31:53.306     iops        : min=  416, max=  480, avg=462.40, stdev=21.96, samples=20
00:31:53.306    lat (msec)   : 20=0.13%, 50=99.09%, 100=0.78%
00:31:53.306    cpu          : usr=98.18%, sys=1.42%, ctx=13, majf=0, minf=25
00:31:53.306    IO depths    : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0%
00:31:53.306       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.306       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.306       issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:53.306       latency   : target=0, window=0, percentile=100.00%, depth=16
00:31:53.306  filename0: (groupid=0, jobs=1): err= 0: pid=410597: Mon Dec  9 04:22:20 2024
00:31:53.306    read: IOPS=459, BW=1839KiB/s (1883kB/s)(18.3MiB/10195msec)
00:31:53.306      slat (usec): min=10, max=108, avg=45.67, stdev=17.23
00:31:53.306      clat (msec): min=18, max=199, avg=34.38, stdev= 9.62
00:31:53.306       lat (msec): min=18, max=199, avg=34.43, stdev= 9.62
00:31:53.306      clat percentiles (msec):
00:31:53.306       |  1.00th=[   33],  5.00th=[   34], 10.00th=[   34], 20.00th=[   34],
00:31:53.306       | 30.00th=[   34], 40.00th=[   34], 50.00th=[   34], 60.00th=[   34],
00:31:53.306       | 70.00th=[   34], 80.00th=[   35], 90.00th=[   35], 95.00th=[   35],
00:31:53.306       | 99.00th=[   42], 99.50th=[   43], 99.90th=[  197], 99.95th=[  199],
00:31:53.306       | 99.99th=[  201]
00:31:53.306     bw (  KiB/s): min= 1792, max= 1920, per=4.24%, avg=1868.80, stdev=64.34, samples=20
00:31:53.306     iops        : min=  448, max=  480, avg=467.20, stdev=16.08, samples=20
00:31:53.306    lat (msec)   : 20=0.34%, 50=99.32%, 250=0.34%
00:31:53.306    cpu          : usr=98.01%, sys=1.37%, ctx=62, majf=0, minf=15
00:31:53.306    IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:31:53.306       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.306       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.306       issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:53.306       latency   : target=0, window=0, percentile=100.00%, depth=16
00:31:53.306  filename0: (groupid=0, jobs=1): err= 0: pid=410598: Mon Dec  9 04:22:20 2024
00:31:53.306    read: IOPS=459, BW=1840KiB/s (1884kB/s)(18.3MiB/10192msec)
00:31:53.306      slat (usec): min=8, max=120, avg=35.77, stdev=21.72
00:31:53.306      clat (msec): min=18, max=196, avg=34.50, stdev= 9.58
00:31:53.306       lat (msec): min=18, max=196, avg=34.53, stdev= 9.58
00:31:53.306      clat percentiles (msec):
00:31:53.306       |  1.00th=[   33],  5.00th=[   34], 10.00th=[   34], 20.00th=[   34],
00:31:53.306       | 30.00th=[   34], 40.00th=[   34], 50.00th=[   34], 60.00th=[   34],
00:31:53.306       | 70.00th=[   34], 80.00th=[   35], 90.00th=[   35], 95.00th=[   35],
00:31:53.306       | 99.00th=[   42], 99.50th=[   43], 99.90th=[  197], 99.95th=[  197],
00:31:53.306       | 99.99th=[  197]
00:31:53.306     bw (  KiB/s): min= 1792, max= 1920, per=4.24%, avg=1868.80, stdev=64.34, samples=20
00:31:53.306     iops        : min=  448, max=  480, avg=467.20, stdev=16.08, samples=20
00:31:53.307    lat (msec)   : 20=0.34%, 50=99.32%, 250=0.34%
00:31:53.307    cpu          : usr=97.45%, sys=1.61%, ctx=155, majf=0, minf=35
00:31:53.307    IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:31:53.307       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.307       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.307       issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:53.307       latency   : target=0, window=0, percentile=100.00%, depth=16
00:31:53.307  filename0: (groupid=0, jobs=1): err= 0: pid=410599: Mon Dec  9 04:22:20 2024
00:31:53.307    read: IOPS=459, BW=1840KiB/s (1884kB/s)(18.3MiB/10192msec)
00:31:53.307      slat (usec): min=9, max=110, avg=47.10, stdev=20.20
00:31:53.307      clat (msec): min=18, max=197, avg=34.39, stdev= 9.60
00:31:53.307       lat (msec): min=18, max=197, avg=34.44, stdev= 9.60
00:31:53.307      clat percentiles (msec):
00:31:53.307       |  1.00th=[   33],  5.00th=[   33], 10.00th=[   34], 20.00th=[   34],
00:31:53.307       | 30.00th=[   34], 40.00th=[   34], 50.00th=[   34], 60.00th=[   34],
00:31:53.307       | 70.00th=[   34], 80.00th=[   35], 90.00th=[   35], 95.00th=[   35],
00:31:53.307       | 99.00th=[   42], 99.50th=[   43], 99.90th=[  197], 99.95th=[  197],
00:31:53.307       | 99.99th=[  199]
00:31:53.307     bw (  KiB/s): min= 1792, max= 1920, per=4.24%, avg=1868.80, stdev=64.34, samples=20
00:31:53.307     iops        : min=  448, max=  480, avg=467.20, stdev=16.08, samples=20
00:31:53.307    lat (msec)   : 20=0.34%, 50=99.32%, 250=0.34%
00:31:53.307    cpu          : usr=98.59%, sys=1.02%, ctx=12, majf=0, minf=15
00:31:53.307    IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:31:53.307       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.307       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.307       issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:53.307       latency   : target=0, window=0, percentile=100.00%, depth=16
00:31:53.307  filename0: (groupid=0, jobs=1): err= 0: pid=410600: Mon Dec  9 04:22:20 2024
00:31:53.307    read: IOPS=457, BW=1828KiB/s (1872kB/s)(18.1MiB/10166msec)
00:31:53.307      slat (usec): min=7, max=130, avg=41.61, stdev=17.79
00:31:53.307      clat (msec): min=18, max=197, avg=34.62, stdev= 9.91
00:31:53.307       lat (msec): min=18, max=197, avg=34.67, stdev= 9.91
00:31:53.307      clat percentiles (msec):
00:31:53.307       |  1.00th=[   33],  5.00th=[   34], 10.00th=[   34], 20.00th=[   34],
00:31:53.307       | 30.00th=[   34], 40.00th=[   34], 50.00th=[   34], 60.00th=[   34],
00:31:53.307       | 70.00th=[   34], 80.00th=[   35], 90.00th=[   35], 95.00th=[   36],
00:31:53.307       | 99.00th=[   43], 99.50th=[   72], 99.90th=[  199], 99.95th=[  199],
00:31:53.307       | 99.99th=[  199]
00:31:53.307     bw (  KiB/s): min= 1664, max= 1920, per=4.20%, avg=1852.00, stdev=76.27, samples=20
00:31:53.307     iops        : min=  416, max=  480, avg=463.00, stdev=19.07, samples=20
00:31:53.307    lat (msec)   : 20=0.04%, 50=99.10%, 100=0.52%, 250=0.34%
00:31:53.307    cpu          : usr=98.43%, sys=1.18%, ctx=13, majf=0, minf=28
00:31:53.307    IO depths    : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0%
00:31:53.307       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.307       complete  : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.307       issued rwts: total=4646,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:53.307       latency   : target=0, window=0, percentile=100.00%, depth=16
00:31:53.307  filename0: (groupid=0, jobs=1): err= 0: pid=410601: Mon Dec  9 04:22:20 2024
00:31:53.307    read: IOPS=460, BW=1841KiB/s (1885kB/s)(18.3MiB/10188msec)
00:31:53.307      slat (usec): min=5, max=131, avg=53.88, stdev=22.92
00:31:53.307      clat (msec): min=18, max=199, avg=34.27, stdev= 9.63
00:31:53.307       lat (msec): min=18, max=199, avg=34.32, stdev= 9.63
00:31:53.307      clat percentiles (msec):
00:31:53.307       |  1.00th=[   33],  5.00th=[   33], 10.00th=[   34], 20.00th=[   34],
00:31:53.307       | 30.00th=[   34], 40.00th=[   34], 50.00th=[   34], 60.00th=[   34],
00:31:53.307       | 70.00th=[   34], 80.00th=[   34], 90.00th=[   35], 95.00th=[   35],
00:31:53.307       | 99.00th=[   42], 99.50th=[   43], 99.90th=[  197], 99.95th=[  197],
00:31:53.307       | 99.99th=[  199]
00:31:53.307     bw (  KiB/s): min= 1792, max= 1920, per=4.24%, avg=1868.80, stdev=64.34, samples=20
00:31:53.307     iops        : min=  448, max=  480, avg=467.20, stdev=16.08, samples=20
00:31:53.307    lat (msec)   : 20=0.34%, 50=99.32%, 250=0.34%
00:31:53.307    cpu          : usr=98.13%, sys=1.24%, ctx=38, majf=0, minf=28
00:31:53.307    IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:31:53.307       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.307       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.307       issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:53.307       latency   : target=0, window=0, percentile=100.00%, depth=16
00:31:53.307  filename1: (groupid=0, jobs=1): err= 0: pid=410602: Mon Dec  9 04:22:20 2024
00:31:53.307    read: IOPS=459, BW=1840KiB/s (1884kB/s)(18.3MiB/10192msec)
00:31:53.307      slat (usec): min=10, max=108, avg=38.04, stdev=14.66
00:31:53.307      clat (msec): min=18, max=197, avg=34.45, stdev= 9.61
00:31:53.307       lat (msec): min=18, max=197, avg=34.48, stdev= 9.61
00:31:53.307      clat percentiles (msec):
00:31:53.307       |  1.00th=[   33],  5.00th=[   34], 10.00th=[   34], 20.00th=[   34],
00:31:53.307       | 30.00th=[   34], 40.00th=[   34], 50.00th=[   34], 60.00th=[   34],
00:31:53.307       | 70.00th=[   34], 80.00th=[   35], 90.00th=[   35], 95.00th=[   36],
00:31:53.307       | 99.00th=[   42], 99.50th=[   43], 99.90th=[  197], 99.95th=[  197],
00:31:53.307       | 99.99th=[  197]
00:31:53.307     bw (  KiB/s): min= 1792, max= 1920, per=4.24%, avg=1868.80, stdev=64.34, samples=20
00:31:53.307     iops        : min=  448, max=  480, avg=467.20, stdev=16.08, samples=20
00:31:53.307    lat (msec)   : 20=0.34%, 50=99.32%, 250=0.34%
00:31:53.307    cpu          : usr=97.94%, sys=1.39%, ctx=69, majf=0, minf=35
00:31:53.307    IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:31:53.307       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.307       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.307       issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:53.307       latency   : target=0, window=0, percentile=100.00%, depth=16
00:31:53.307  filename1: (groupid=0, jobs=1): err= 0: pid=410603: Mon Dec  9 04:22:20 2024
00:31:53.307    read: IOPS=457, BW=1832KiB/s (1876kB/s)(18.2MiB/10166msec)
00:31:53.307      slat (nsec): min=12217, max=75175, avg=33623.77, stdev=8670.28
00:31:53.307      clat (msec): min=30, max=197, avg=34.64, stdev= 9.69
00:31:53.307       lat (msec): min=30, max=197, avg=34.67, stdev= 9.69
00:31:53.307      clat percentiles (msec):
00:31:53.307       |  1.00th=[   34],  5.00th=[   34], 10.00th=[   34], 20.00th=[   34],
00:31:53.307       | 30.00th=[   34], 40.00th=[   34], 50.00th=[   34], 60.00th=[   34],
00:31:53.307       | 70.00th=[   34], 80.00th=[   35], 90.00th=[   35], 95.00th=[   36],
00:31:53.307       | 99.00th=[   43], 99.50th=[   56], 99.90th=[  199], 99.95th=[  199],
00:31:53.307       | 99.99th=[  199]
00:31:53.307     bw (  KiB/s): min= 1667, max= 1920, per=4.22%, avg=1856.15, stdev=77.30, samples=20
00:31:53.307     iops        : min=  416, max=  480, avg=464.00, stdev=19.42, samples=20
00:31:53.307    lat (msec)   : 50=99.31%, 100=0.34%, 250=0.34%
00:31:53.307    cpu          : usr=98.41%, sys=1.19%, ctx=18, majf=0, minf=28
00:31:53.307    IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:31:53.307       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.307       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.307       issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:53.307       latency   : target=0, window=0, percentile=100.00%, depth=16
00:31:53.307  filename1: (groupid=0, jobs=1): err= 0: pid=410604: Mon Dec  9 04:22:20 2024
00:31:53.307    read: IOPS=458, BW=1834KiB/s (1878kB/s)(18.2MiB/10169msec)
00:31:53.307      slat (usec): min=8, max=145, avg=39.43, stdev=20.93
00:31:53.307      clat (msec): min=18, max=199, avg=34.51, stdev=10.00
00:31:53.307       lat (msec): min=18, max=199, avg=34.55, stdev=10.00
00:31:53.307      clat percentiles (msec):
00:31:53.307       |  1.00th=[   27],  5.00th=[   34], 10.00th=[   34], 20.00th=[   34],
00:31:53.307       | 30.00th=[   34], 40.00th=[   34], 50.00th=[   34], 60.00th=[   34],
00:31:53.307       | 70.00th=[   34], 80.00th=[   35], 90.00th=[   35], 95.00th=[   36],
00:31:53.307       | 99.00th=[   43], 99.50th=[   72], 99.90th=[  197], 99.95th=[  199],
00:31:53.307       | 99.99th=[  199]
00:31:53.307     bw (  KiB/s): min= 1664, max= 1968, per=4.22%, avg=1858.40, stdev=79.29, samples=20
00:31:53.307     iops        : min=  416, max=  492, avg=464.60, stdev=19.82, samples=20
00:31:53.307    lat (msec)   : 20=0.26%, 50=98.88%, 100=0.51%, 250=0.34%
00:31:53.307    cpu          : usr=97.39%, sys=1.70%, ctx=66, majf=0, minf=20
00:31:53.307    IO depths    : 1=5.5%, 2=11.7%, 4=24.7%, 8=51.1%, 16=7.0%, 32=0.0%, >=64=0.0%
00:31:53.307       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.307       complete  : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.307       issued rwts: total=4662,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:53.307       latency   : target=0, window=0, percentile=100.00%, depth=16
00:31:53.307  filename1: (groupid=0, jobs=1): err= 0: pid=410605: Mon Dec  9 04:22:20 2024
00:31:53.307    read: IOPS=466, BW=1866KiB/s (1911kB/s)(18.2MiB/10015msec)
00:31:53.307      slat (usec): min=7, max=111, avg=28.11, stdev=20.81
00:31:53.307      clat (usec): min=18629, max=52234, avg=34071.69, stdev=1704.25
00:31:53.307       lat (usec): min=18682, max=52270, avg=34099.79, stdev=1703.02
00:31:53.307      clat percentiles (usec):
00:31:53.307       |  1.00th=[32637],  5.00th=[33424], 10.00th=[33424], 20.00th=[33817],
00:31:53.307       | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817],
00:31:53.307       | 70.00th=[34341], 80.00th=[34341], 90.00th=[34341], 95.00th=[35390],
00:31:53.307       | 99.00th=[41681], 99.50th=[42206], 99.90th=[52167], 99.95th=[52167],
00:31:53.307       | 99.99th=[52167]
00:31:53.307     bw (  KiB/s): min= 1792, max= 1920, per=4.23%, avg=1862.40, stdev=65.33, samples=20
00:31:53.307     iops        : min=  448, max=  480, avg=465.60, stdev=16.33, samples=20
00:31:53.307    lat (msec)   : 20=0.34%, 50=99.32%, 100=0.34%
00:31:53.307    cpu          : usr=97.37%, sys=1.75%, ctx=150, majf=0, minf=23
00:31:53.307    IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:31:53.307       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.307       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.307       issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:53.307       latency   : target=0, window=0, percentile=100.00%, depth=16
00:31:53.307  filename1: (groupid=0, jobs=1): err= 0: pid=410606: Mon Dec  9 04:22:20 2024
00:31:53.307    read: IOPS=457, BW=1832KiB/s (1876kB/s)(18.2MiB/10166msec)
00:31:53.307      slat (usec): min=12, max=104, avg=38.91, stdev=15.46
00:31:53.307      clat (msec): min=28, max=196, avg=34.60, stdev= 9.68
00:31:53.307       lat (msec): min=28, max=197, avg=34.64, stdev= 9.68
00:31:53.307      clat percentiles (msec):
00:31:53.308       |  1.00th=[   33],  5.00th=[   34], 10.00th=[   34], 20.00th=[   34],
00:31:53.308       | 30.00th=[   34], 40.00th=[   34], 50.00th=[   34], 60.00th=[   34],
00:31:53.308       | 70.00th=[   34], 80.00th=[   35], 90.00th=[   35], 95.00th=[   36],
00:31:53.308       | 99.00th=[   44], 99.50th=[   57], 99.90th=[  197], 99.95th=[  197],
00:31:53.308       | 99.99th=[  197]
00:31:53.308     bw (  KiB/s): min= 1664, max= 1920, per=4.21%, avg=1856.00, stdev=77.69, samples=20
00:31:53.308     iops        : min=  416, max=  480, avg=464.00, stdev=19.42, samples=20
00:31:53.308    lat (msec)   : 50=99.31%, 100=0.34%, 250=0.34%
00:31:53.308    cpu          : usr=98.10%, sys=1.41%, ctx=16, majf=0, minf=31
00:31:53.308    IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:31:53.308       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.308       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.308       issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:53.308       latency   : target=0, window=0, percentile=100.00%, depth=16
00:31:53.308  filename1: (groupid=0, jobs=1): err= 0: pid=410607: Mon Dec  9 04:22:20 2024
00:31:53.308    read: IOPS=459, BW=1840KiB/s (1884kB/s)(18.3MiB/10192msec)
00:31:53.308      slat (nsec): min=7872, max=92328, avg=38027.84, stdev=10829.60
00:31:53.308      clat (msec): min=18, max=197, avg=34.46, stdev= 9.61
00:31:53.308       lat (msec): min=18, max=197, avg=34.49, stdev= 9.61
00:31:53.308      clat percentiles (msec):
00:31:53.308       |  1.00th=[   33],  5.00th=[   34], 10.00th=[   34], 20.00th=[   34],
00:31:53.308       | 30.00th=[   34], 40.00th=[   34], 50.00th=[   34], 60.00th=[   34],
00:31:53.308       | 70.00th=[   34], 80.00th=[   35], 90.00th=[   35], 95.00th=[   36],
00:31:53.308       | 99.00th=[   42], 99.50th=[   43], 99.90th=[  197], 99.95th=[  199],
00:31:53.308       | 99.99th=[  199]
00:31:53.308     bw (  KiB/s): min= 1792, max= 1920, per=4.24%, avg=1868.80, stdev=64.34, samples=20
00:31:53.308     iops        : min=  448, max=  480, avg=467.20, stdev=16.08, samples=20
00:31:53.308    lat (msec)   : 20=0.34%, 50=99.32%, 250=0.34%
00:31:53.308    cpu          : usr=97.49%, sys=1.66%, ctx=80, majf=0, minf=26
00:31:53.308    IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:31:53.308       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.308       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.308       issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:53.308       latency   : target=0, window=0, percentile=100.00%, depth=16
00:31:53.308  filename1: (groupid=0, jobs=1): err= 0: pid=410608: Mon Dec  9 04:22:20 2024
00:31:53.308    read: IOPS=457, BW=1832KiB/s (1876kB/s)(18.2MiB/10167msec)
00:31:53.308      slat (nsec): min=10977, max=88169, avg=39616.09, stdev=12199.64
00:31:53.308      clat (msec): min=32, max=197, avg=34.57, stdev= 9.72
00:31:53.308       lat (msec): min=32, max=197, avg=34.61, stdev= 9.72
00:31:53.308      clat percentiles (msec):
00:31:53.308       |  1.00th=[   33],  5.00th=[   34], 10.00th=[   34], 20.00th=[   34],
00:31:53.308       | 30.00th=[   34], 40.00th=[   34], 50.00th=[   34], 60.00th=[   34],
00:31:53.308       | 70.00th=[   34], 80.00th=[   35], 90.00th=[   35], 95.00th=[   36],
00:31:53.308       | 99.00th=[   42], 99.50th=[   61], 99.90th=[  199], 99.95th=[  199],
00:31:53.308       | 99.99th=[  199]
00:31:53.308     bw (  KiB/s): min= 1664, max= 1920, per=4.21%, avg=1856.00, stdev=77.69, samples=20
00:31:53.308     iops        : min=  416, max=  480, avg=464.00, stdev=19.42, samples=20
00:31:53.308    lat (msec)   : 50=99.31%, 100=0.34%, 250=0.34%
00:31:53.308    cpu          : usr=98.28%, sys=1.32%, ctx=13, majf=0, minf=17
00:31:53.308    IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:31:53.308       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.308       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.308       issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:53.308       latency   : target=0, window=0, percentile=100.00%, depth=16
00:31:53.308  filename1: (groupid=0, jobs=1): err= 0: pid=410609: Mon Dec  9 04:22:20 2024
00:31:53.308    read: IOPS=463, BW=1855KiB/s (1900kB/s)(18.2MiB/10038msec)
00:31:53.308      slat (usec): min=7, max=114, avg=39.00, stdev=27.42
00:31:53.308      clat (usec): min=16700, max=78393, avg=34149.31, stdev=3141.90
00:31:53.308       lat (usec): min=16712, max=78412, avg=34188.31, stdev=3139.57
00:31:53.308      clat percentiles (usec):
00:31:53.308       |  1.00th=[32637],  5.00th=[32900], 10.00th=[33162], 20.00th=[33424],
00:31:53.308       | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817],
00:31:53.308       | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390],
00:31:53.308       | 99.00th=[44827], 99.50th=[50594], 99.90th=[78119], 99.95th=[78119],
00:31:53.308       | 99.99th=[78119]
00:31:53.308     bw (  KiB/s): min= 1664, max= 1920, per=4.22%, avg=1856.15, stdev=87.75, samples=20
00:31:53.308     iops        : min=  416, max=  480, avg=464.00, stdev=22.02, samples=20
00:31:53.308    lat (msec)   : 20=0.17%, 50=99.05%, 100=0.77%
00:31:53.308    cpu          : usr=97.97%, sys=1.44%, ctx=41, majf=0, minf=32
00:31:53.308    IO depths    : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0%
00:31:53.308       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.308       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.308       issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:53.308       latency   : target=0, window=0, percentile=100.00%, depth=16
00:31:53.308  filename2: (groupid=0, jobs=1): err= 0: pid=410610: Mon Dec  9 04:22:20 2024
00:31:53.308    read: IOPS=459, BW=1840KiB/s (1884kB/s)(18.3MiB/10193msec)
00:31:53.308      slat (usec): min=8, max=126, avg=38.11, stdev=13.09
00:31:53.308      clat (msec): min=18, max=197, avg=34.44, stdev= 9.65
00:31:53.308       lat (msec): min=18, max=197, avg=34.48, stdev= 9.65
00:31:53.308      clat percentiles (msec):
00:31:53.308       |  1.00th=[   32],  5.00th=[   34], 10.00th=[   34], 20.00th=[   34],
00:31:53.308       | 30.00th=[   34], 40.00th=[   34], 50.00th=[   34], 60.00th=[   34],
00:31:53.308       | 70.00th=[   34], 80.00th=[   35], 90.00th=[   35], 95.00th=[   36],
00:31:53.308       | 99.00th=[   43], 99.50th=[   46], 99.90th=[  197], 99.95th=[  197],
00:31:53.308       | 99.99th=[  199]
00:31:53.308     bw (  KiB/s): min= 1792, max= 1920, per=4.24%, avg=1868.80, stdev=64.34, samples=20
00:31:53.308     iops        : min=  448, max=  480, avg=467.20, stdev=16.08, samples=20
00:31:53.308    lat (msec)   : 20=0.34%, 50=99.32%, 250=0.34%
00:31:53.308    cpu          : usr=98.29%, sys=1.30%, ctx=14, majf=0, minf=19
00:31:53.308    IO depths    : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0%
00:31:53.308       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.308       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.308       issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:53.308       latency   : target=0, window=0, percentile=100.00%, depth=16
00:31:53.308  filename2: (groupid=0, jobs=1): err= 0: pid=410611: Mon Dec  9 04:22:20 2024
00:31:53.308    read: IOPS=458, BW=1833KiB/s (1877kB/s)(18.2MiB/10163msec)
00:31:53.308      slat (usec): min=10, max=117, avg=36.48, stdev=12.60
00:31:53.308      clat (msec): min=28, max=197, avg=34.59, stdev= 9.68
00:31:53.308       lat (msec): min=28, max=197, avg=34.63, stdev= 9.68
00:31:53.308      clat percentiles (msec):
00:31:53.308       |  1.00th=[   33],  5.00th=[   34], 10.00th=[   34], 20.00th=[   34],
00:31:53.308       | 30.00th=[   34], 40.00th=[   34], 50.00th=[   34], 60.00th=[   34],
00:31:53.308       | 70.00th=[   34], 80.00th=[   35], 90.00th=[   35], 95.00th=[   36],
00:31:53.308       | 99.00th=[   43], 99.50th=[   54], 99.90th=[  199], 99.95th=[  199],
00:31:53.308       | 99.99th=[  199]
00:31:53.308     bw (  KiB/s): min= 1664, max= 1920, per=4.21%, avg=1856.00, stdev=88.10, samples=20
00:31:53.308     iops        : min=  416, max=  480, avg=464.00, stdev=22.02, samples=20
00:31:53.308    lat (msec)   : 50=99.31%, 100=0.34%, 250=0.34%
00:31:53.308    cpu          : usr=97.91%, sys=1.47%, ctx=55, majf=0, minf=24
00:31:53.308    IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:31:53.308       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.308       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.308       issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:53.308       latency   : target=0, window=0, percentile=100.00%, depth=16
00:31:53.308  filename2: (groupid=0, jobs=1): err= 0: pid=410612: Mon Dec  9 04:22:20 2024
00:31:53.308    read: IOPS=457, BW=1832KiB/s (1876kB/s)(18.2MiB/10166msec)
00:31:53.308      slat (nsec): min=11408, max=65783, avg=33682.99, stdev=7920.08
00:31:53.308      clat (msec): min=28, max=197, avg=34.64, stdev= 9.70
00:31:53.308       lat (msec): min=28, max=197, avg=34.68, stdev= 9.70
00:31:53.308      clat percentiles (msec):
00:31:53.308       |  1.00th=[   34],  5.00th=[   34], 10.00th=[   34], 20.00th=[   34],
00:31:53.308       | 30.00th=[   34], 40.00th=[   34], 50.00th=[   34], 60.00th=[   34],
00:31:53.308       | 70.00th=[   34], 80.00th=[   35], 90.00th=[   35], 95.00th=[   36],
00:31:53.308       | 99.00th=[   44], 99.50th=[   57], 99.90th=[  199], 99.95th=[  199],
00:31:53.308       | 99.99th=[  199]
00:31:53.308     bw (  KiB/s): min= 1664, max= 1920, per=4.21%, avg=1856.00, stdev=77.69, samples=20
00:31:53.308     iops        : min=  416, max=  480, avg=464.00, stdev=19.42, samples=20
00:31:53.308    lat (msec)   : 50=99.31%, 100=0.34%, 250=0.34%
00:31:53.308    cpu          : usr=96.53%, sys=2.29%, ctx=197, majf=0, minf=29
00:31:53.308    IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:31:53.308       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.308       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.308       issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:53.308       latency   : target=0, window=0, percentile=100.00%, depth=16
00:31:53.308  filename2: (groupid=0, jobs=1): err= 0: pid=410613: Mon Dec  9 04:22:20 2024
00:31:53.308    read: IOPS=458, BW=1835KiB/s (1879kB/s)(18.2MiB/10183msec)
00:31:53.308      slat (usec): min=9, max=129, avg=47.08, stdev=18.77
00:31:53.308      clat (msec): min=22, max=199, avg=34.44, stdev= 9.63
00:31:53.308       lat (msec): min=22, max=199, avg=34.49, stdev= 9.63
00:31:53.308      clat percentiles (msec):
00:31:53.308       |  1.00th=[   33],  5.00th=[   34], 10.00th=[   34], 20.00th=[   34],
00:31:53.308       | 30.00th=[   34], 40.00th=[   34], 50.00th=[   34], 60.00th=[   34],
00:31:53.308       | 70.00th=[   34], 80.00th=[   35], 90.00th=[   35], 95.00th=[   36],
00:31:53.308       | 99.00th=[   42], 99.50th=[   43], 99.90th=[  199], 99.95th=[  199],
00:31:53.308       | 99.99th=[  199]
00:31:53.308     bw (  KiB/s): min= 1664, max= 1920, per=4.23%, avg=1862.40, stdev=77.42, samples=20
00:31:53.308     iops        : min=  416, max=  480, avg=465.60, stdev=19.35, samples=20
00:31:53.308    lat (msec)   : 50=99.61%, 100=0.04%, 250=0.34%
00:31:53.308    cpu          : usr=97.93%, sys=1.37%, ctx=69, majf=0, minf=24
00:31:53.308    IO depths    : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0%
00:31:53.308       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.308       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.308       issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:53.308       latency   : target=0, window=0, percentile=100.00%, depth=16
00:31:53.309  filename2: (groupid=0, jobs=1): err= 0: pid=410614: Mon Dec  9 04:22:20 2024
00:31:53.309    read: IOPS=457, BW=1830KiB/s (1874kB/s)(18.2MiB/10175msec)
00:31:53.309      slat (usec): min=5, max=120, avg=34.34, stdev=12.26
00:31:53.309      clat (msec): min=28, max=200, avg=34.64, stdev= 9.78
00:31:53.309       lat (msec): min=28, max=200, avg=34.67, stdev= 9.78
00:31:53.309      clat percentiles (msec):
00:31:53.309       |  1.00th=[   34],  5.00th=[   34], 10.00th=[   34], 20.00th=[   34],
00:31:53.309       | 30.00th=[   34], 40.00th=[   34], 50.00th=[   34], 60.00th=[   34],
00:31:53.309       | 70.00th=[   34], 80.00th=[   35], 90.00th=[   35], 95.00th=[   36],
00:31:53.309       | 99.00th=[   43], 99.50th=[   63], 99.90th=[  199], 99.95th=[  199],
00:31:53.309       | 99.99th=[  201]
00:31:53.309     bw (  KiB/s): min= 1664, max= 1920, per=4.21%, avg=1856.00, stdev=77.69, samples=20
00:31:53.309     iops        : min=  416, max=  480, avg=464.00, stdev=19.42, samples=20
00:31:53.309    lat (msec)   : 50=99.31%, 100=0.34%, 250=0.34%
00:31:53.309    cpu          : usr=97.94%, sys=1.39%, ctx=93, majf=0, minf=27
00:31:53.309    IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:31:53.309       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.309       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.309       issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:53.309       latency   : target=0, window=0, percentile=100.00%, depth=16
00:31:53.309  filename2: (groupid=0, jobs=1): err= 0: pid=410615: Mon Dec  9 04:22:20 2024
00:31:53.309    read: IOPS=457, BW=1832KiB/s (1876kB/s)(18.2MiB/10167msec)
00:31:53.309      slat (nsec): min=8565, max=69436, avg=33374.13, stdev=8831.97
00:31:53.309      clat (msec): min=28, max=200, avg=34.62, stdev= 9.70
00:31:53.309       lat (msec): min=28, max=200, avg=34.66, stdev= 9.70
00:31:53.309      clat percentiles (msec):
00:31:53.309       |  1.00th=[   34],  5.00th=[   34], 10.00th=[   34], 20.00th=[   34],
00:31:53.309       | 30.00th=[   34], 40.00th=[   34], 50.00th=[   34], 60.00th=[   34],
00:31:53.309       | 70.00th=[   34], 80.00th=[   35], 90.00th=[   35], 95.00th=[   36],
00:31:53.309       | 99.00th=[   43], 99.50th=[   54], 99.90th=[  199], 99.95th=[  199],
00:31:53.309       | 99.99th=[  201]
00:31:53.309     bw (  KiB/s): min= 1664, max= 1920, per=4.21%, avg=1856.00, stdev=88.10, samples=20
00:31:53.309     iops        : min=  416, max=  480, avg=464.00, stdev=22.02, samples=20
00:31:53.309    lat (msec)   : 50=99.31%, 100=0.34%, 250=0.34%
00:31:53.309    cpu          : usr=98.48%, sys=1.13%, ctx=27, majf=0, minf=22
00:31:53.309    IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:31:53.309       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.309       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.309       issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:53.309       latency   : target=0, window=0, percentile=100.00%, depth=16
00:31:53.309  filename2: (groupid=0, jobs=1): err= 0: pid=410616: Mon Dec  9 04:22:20 2024
00:31:53.309    read: IOPS=459, BW=1839KiB/s (1883kB/s)(18.3MiB/10195msec)
00:31:53.309      slat (usec): min=9, max=130, avg=43.10, stdev=20.76
00:31:53.309      clat (msec): min=18, max=199, avg=34.42, stdev= 9.62
00:31:53.309       lat (msec): min=18, max=199, avg=34.47, stdev= 9.62
00:31:53.309      clat percentiles (msec):
00:31:53.309       |  1.00th=[   33],  5.00th=[   34], 10.00th=[   34], 20.00th=[   34],
00:31:53.309       | 30.00th=[   34], 40.00th=[   34], 50.00th=[   34], 60.00th=[   34],
00:31:53.309       | 70.00th=[   34], 80.00th=[   35], 90.00th=[   35], 95.00th=[   36],
00:31:53.309       | 99.00th=[   42], 99.50th=[   43], 99.90th=[  197], 99.95th=[  197],
00:31:53.309       | 99.99th=[  201]
00:31:53.309     bw (  KiB/s): min= 1792, max= 1920, per=4.24%, avg=1868.80, stdev=64.34, samples=20
00:31:53.309     iops        : min=  448, max=  480, avg=467.20, stdev=16.08, samples=20
00:31:53.309    lat (msec)   : 20=0.34%, 50=99.32%, 250=0.34%
00:31:53.309    cpu          : usr=97.47%, sys=1.55%, ctx=163, majf=0, minf=24
00:31:53.309    IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:31:53.309       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.309       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.309       issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:53.309       latency   : target=0, window=0, percentile=100.00%, depth=16
00:31:53.309  filename2: (groupid=0, jobs=1): err= 0: pid=410617: Mon Dec  9 04:22:20 2024
00:31:53.309    read: IOPS=476, BW=1904KiB/s (1950kB/s)(18.9MiB/10184msec)
00:31:53.309      slat (usec): min=6, max=127, avg=36.14, stdev=21.44
00:31:53.309      clat (msec): min=13, max=200, avg=33.34, stdev=10.08
00:31:53.309       lat (msec): min=13, max=200, avg=33.38, stdev=10.07
00:31:53.309      clat percentiles (msec):
00:31:53.309       |  1.00th=[   21],  5.00th=[   23], 10.00th=[   27], 20.00th=[   34],
00:31:53.309       | 30.00th=[   34], 40.00th=[   34], 50.00th=[   34], 60.00th=[   34],
00:31:53.309       | 70.00th=[   34], 80.00th=[   35], 90.00th=[   35], 95.00th=[   35],
00:31:53.309       | 99.00th=[   43], 99.50th=[   44], 99.90th=[  197], 99.95th=[  197],
00:31:53.309       | 99.99th=[  201]
00:31:53.309     bw (  KiB/s): min= 1792, max= 2400, per=4.39%, avg=1933.00, stdev=163.72, samples=20
00:31:53.309     iops        : min=  448, max=  600, avg=483.25, stdev=40.93, samples=20
00:31:53.309    lat (msec)   : 20=0.08%, 50=99.59%, 250=0.33%
00:31:53.309    cpu          : usr=98.55%, sys=1.05%, ctx=14, majf=0, minf=19
00:31:53.309    IO depths    : 1=5.2%, 2=10.5%, 4=21.9%, 8=55.0%, 16=7.4%, 32=0.0%, >=64=0.0%
00:31:53.309       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.309       complete  : 0=0.0%, 4=93.2%, 8=1.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:53.309       issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:53.309       latency   : target=0, window=0, percentile=100.00%, depth=16
00:31:53.309  
00:31:53.309  Run status group 0 (all jobs):
00:31:53.309     READ: bw=43.0MiB/s (45.1MB/s), 1828KiB/s-1904KiB/s (1872kB/s-1950kB/s), io=438MiB (460MB), run=10012-10195msec
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:53.309  bdev_null0
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:53.309   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:53.310   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:31:53.310   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:53.310   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:53.310  [2024-12-09 04:22:20.605147] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:53.310   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:53.310   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:31:53.310   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1
00:31:53.310   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1
00:31:53.310   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
00:31:53.310   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:53.310   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:53.310  bdev_null1
00:31:53.310   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:53.310   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:31:53.310   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:53.310   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:53.310   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:53.310   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:31:53.310   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:53.310   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:53.310   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:53.310   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:53.310   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:53.310   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:53.310   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:53.310   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62
00:31:53.310    04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1
00:31:53.310    04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1
00:31:53.310    04:22:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=()
00:31:53.310    04:22:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config
00:31:53.310    04:22:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:31:53.310   04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:31:53.310    04:22:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:31:53.310  {
00:31:53.310    "params": {
00:31:53.310      "name": "Nvme$subsystem",
00:31:53.310      "trtype": "$TEST_TRANSPORT",
00:31:53.310      "traddr": "$NVMF_FIRST_TARGET_IP",
00:31:53.310      "adrfam": "ipv4",
00:31:53.310      "trsvcid": "$NVMF_PORT",
00:31:53.310      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:31:53.310      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:31:53.310      "hdgst": ${hdgst:-false},
00:31:53.310      "ddgst": ${ddgst:-false}
00:31:53.310    },
00:31:53.310    "method": "bdev_nvme_attach_controller"
00:31:53.310  }
00:31:53.310  EOF
00:31:53.310  )")
00:31:53.310   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:31:53.310   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:31:53.310   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:31:53.310   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers
00:31:53.310    04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:31:53.310   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:31:53.310   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift
00:31:53.310    04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:31:53.310   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib=
00:31:53.310    04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:31:53.310   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:31:53.310     04:22:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:31:53.310    04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:31:53.310    04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan
00:31:53.310    04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:31:53.310    04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:31:53.310    04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:31:53.310    04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:31:53.310    04:22:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:31:53.310    04:22:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:31:53.310  {
00:31:53.310    "params": {
00:31:53.310      "name": "Nvme$subsystem",
00:31:53.310      "trtype": "$TEST_TRANSPORT",
00:31:53.310      "traddr": "$NVMF_FIRST_TARGET_IP",
00:31:53.310      "adrfam": "ipv4",
00:31:53.310      "trsvcid": "$NVMF_PORT",
00:31:53.310      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:31:53.310      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:31:53.310      "hdgst": ${hdgst:-false},
00:31:53.310      "ddgst": ${ddgst:-false}
00:31:53.310    },
00:31:53.310    "method": "bdev_nvme_attach_controller"
00:31:53.310  }
00:31:53.310  EOF
00:31:53.310  )")
00:31:53.310    04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:31:53.310     04:22:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:31:53.310    04:22:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:31:53.310    04:22:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq .
00:31:53.310     04:22:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=,
00:31:53.310     04:22:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:31:53.310    "params": {
00:31:53.310      "name": "Nvme0",
00:31:53.310      "trtype": "tcp",
00:31:53.310      "traddr": "10.0.0.2",
00:31:53.310      "adrfam": "ipv4",
00:31:53.310      "trsvcid": "4420",
00:31:53.310      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:31:53.310      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:31:53.310      "hdgst": false,
00:31:53.310      "ddgst": false
00:31:53.310    },
00:31:53.310    "method": "bdev_nvme_attach_controller"
00:31:53.310  },{
00:31:53.310    "params": {
00:31:53.310      "name": "Nvme1",
00:31:53.310      "trtype": "tcp",
00:31:53.310      "traddr": "10.0.0.2",
00:31:53.310      "adrfam": "ipv4",
00:31:53.310      "trsvcid": "4420",
00:31:53.310      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:31:53.310      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:31:53.310      "hdgst": false,
00:31:53.310      "ddgst": false
00:31:53.310    },
00:31:53.310    "method": "bdev_nvme_attach_controller"
00:31:53.310  }'
00:31:53.310   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:31:53.310   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:31:53.310   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:31:53.310    04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:31:53.310    04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:31:53.310    04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:31:53.310   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:31:53.310   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:31:53.310   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:31:53.310   04:22:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:31:53.310  filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8
00:31:53.310  ...
00:31:53.310  filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8
00:31:53.310  ...
00:31:53.310  fio-3.35
00:31:53.310  Starting 4 threads
00:31:58.593  
00:31:58.593  filename0: (groupid=0, jobs=1): err= 0: pid=411968: Mon Dec  9 04:22:26 2024
00:31:58.593    read: IOPS=1886, BW=14.7MiB/s (15.5MB/s)(73.7MiB/5004msec)
00:31:58.593      slat (nsec): min=4210, max=68348, avg=15626.06, stdev=8915.93
00:31:58.593      clat (usec): min=1020, max=7546, avg=4190.13, stdev=389.67
00:31:58.593       lat (usec): min=1034, max=7562, avg=4205.76, stdev=390.29
00:31:58.593      clat percentiles (usec):
00:31:58.593       |  1.00th=[ 3097],  5.00th=[ 3654], 10.00th=[ 3851], 20.00th=[ 4047],
00:31:58.593       | 30.00th=[ 4113], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4228],
00:31:58.593       | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4621],
00:31:58.593       | 99.00th=[ 5538], 99.50th=[ 6194], 99.90th=[ 6980], 99.95th=[ 7308],
00:31:58.593       | 99.99th=[ 7570]
00:31:58.593     bw (  KiB/s): min=14768, max=15360, per=25.21%, avg=15088.00, stdev=185.37, samples=10
00:31:58.593     iops        : min= 1846, max= 1920, avg=1886.00, stdev=23.17, samples=10
00:31:58.593    lat (msec)   : 2=0.31%, 4=15.51%, 10=84.18%
00:31:58.593    cpu          : usr=96.12%, sys=3.38%, ctx=6, majf=0, minf=23
00:31:58.593    IO depths    : 1=0.7%, 2=12.5%, 4=59.5%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0%
00:31:58.593       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:58.593       complete  : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:58.593       issued rwts: total=9438,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:58.593       latency   : target=0, window=0, percentile=100.00%, depth=8
00:31:58.593  filename0: (groupid=0, jobs=1): err= 0: pid=411970: Mon Dec  9 04:22:26 2024
00:31:58.593    read: IOPS=1874, BW=14.6MiB/s (15.4MB/s)(73.3MiB/5003msec)
00:31:58.593      slat (nsec): min=4259, max=68898, avg=20965.69, stdev=10376.47
00:31:58.593      clat (usec): min=757, max=8193, avg=4181.05, stdev=562.68
00:31:58.593       lat (usec): min=772, max=8204, avg=4202.01, stdev=563.15
00:31:58.593      clat percentiles (usec):
00:31:58.593       |  1.00th=[ 1762],  5.00th=[ 3523], 10.00th=[ 3884], 20.00th=[ 4047],
00:31:58.593       | 30.00th=[ 4080], 40.00th=[ 4113], 50.00th=[ 4178], 60.00th=[ 4228],
00:31:58.593       | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4752],
00:31:58.593       | 99.00th=[ 6652], 99.50th=[ 7046], 99.90th=[ 7504], 99.95th=[ 7635],
00:31:58.593       | 99.99th=[ 8225]
00:31:58.593     bw (  KiB/s): min=14688, max=15232, per=25.04%, avg=14988.44, stdev=185.00, samples=9
00:31:58.593     iops        : min= 1836, max= 1904, avg=1873.56, stdev=23.13, samples=9
00:31:58.593    lat (usec)   : 1000=0.14%
00:31:58.593    lat (msec)   : 2=0.98%, 4=13.79%, 10=85.09%
00:31:58.593    cpu          : usr=96.26%, sys=3.22%, ctx=6, majf=0, minf=39
00:31:58.593    IO depths    : 1=0.9%, 2=23.0%, 4=51.4%, 8=24.8%, 16=0.0%, 32=0.0%, >=64=0.0%
00:31:58.594       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:58.594       complete  : 0=0.0%, 4=90.4%, 8=9.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:58.594       issued rwts: total=9378,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:58.594       latency   : target=0, window=0, percentile=100.00%, depth=8
00:31:58.594  filename1: (groupid=0, jobs=1): err= 0: pid=411971: Mon Dec  9 04:22:26 2024
00:31:58.594    read: IOPS=1867, BW=14.6MiB/s (15.3MB/s)(73.0MiB/5004msec)
00:31:58.594      slat (nsec): min=4175, max=71785, avg=19357.64, stdev=10277.17
00:31:58.594      clat (usec): min=980, max=7828, avg=4210.27, stdev=466.69
00:31:58.594       lat (usec): min=994, max=7842, avg=4229.63, stdev=466.82
00:31:58.594      clat percentiles (usec):
00:31:58.594       |  1.00th=[ 2868],  5.00th=[ 3687], 10.00th=[ 3916], 20.00th=[ 4047],
00:31:58.594       | 30.00th=[ 4113], 40.00th=[ 4146], 50.00th=[ 4178], 60.00th=[ 4228],
00:31:58.594       | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4686],
00:31:58.594       | 99.00th=[ 6325], 99.50th=[ 6849], 99.90th=[ 7373], 99.95th=[ 7439],
00:31:58.594       | 99.99th=[ 7832]
00:31:58.594     bw (  KiB/s): min=14624, max=15262, per=24.96%, avg=14943.80, stdev=199.34, samples=10
00:31:58.594     iops        : min= 1828, max= 1907, avg=1867.90, stdev=24.79, samples=10
00:31:58.594    lat (usec)   : 1000=0.02%
00:31:58.594    lat (msec)   : 2=0.37%, 4=14.16%, 10=85.45%
00:31:58.594    cpu          : usr=95.10%, sys=4.30%, ctx=6, majf=0, minf=68
00:31:58.594    IO depths    : 1=0.8%, 2=20.3%, 4=53.5%, 8=25.3%, 16=0.0%, 32=0.0%, >=64=0.0%
00:31:58.594       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:58.594       complete  : 0=0.0%, 4=90.9%, 8=9.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:58.594       issued rwts: total=9346,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:58.594       latency   : target=0, window=0, percentile=100.00%, depth=8
00:31:58.594  filename1: (groupid=0, jobs=1): err= 0: pid=411972: Mon Dec  9 04:22:26 2024
00:31:58.594    read: IOPS=1854, BW=14.5MiB/s (15.2MB/s)(72.5MiB/5003msec)
00:31:58.594      slat (nsec): min=3872, max=70588, avg=19686.34, stdev=8249.75
00:31:58.594      clat (usec): min=723, max=7671, avg=4241.30, stdev=622.82
00:31:58.594       lat (usec): min=737, max=7703, avg=4260.99, stdev=622.66
00:31:58.594      clat percentiles (usec):
00:31:58.594       |  1.00th=[ 1975],  5.00th=[ 3687], 10.00th=[ 3916], 20.00th=[ 4047],
00:31:58.594       | 30.00th=[ 4113], 40.00th=[ 4146], 50.00th=[ 4178], 60.00th=[ 4228],
00:31:58.594       | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4555], 95.00th=[ 5145],
00:31:58.594       | 99.00th=[ 6980], 99.50th=[ 7177], 99.90th=[ 7504], 99.95th=[ 7570],
00:31:58.594       | 99.99th=[ 7701]
00:31:58.594     bw (  KiB/s): min=14368, max=15216, per=24.76%, avg=14819.56, stdev=284.40, samples=9
00:31:58.594     iops        : min= 1796, max= 1902, avg=1852.44, stdev=35.55, samples=9
00:31:58.594    lat (usec)   : 750=0.01%, 1000=0.14%
00:31:58.594    lat (msec)   : 2=0.89%, 4=12.84%, 10=86.12%
00:31:58.594    cpu          : usr=94.40%, sys=4.66%, ctx=44, majf=0, minf=35
00:31:58.594    IO depths    : 1=0.4%, 2=18.7%, 4=55.1%, 8=25.8%, 16=0.0%, 32=0.0%, >=64=0.0%
00:31:58.594       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:58.594       complete  : 0=0.0%, 4=91.0%, 8=9.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:58.594       issued rwts: total=9278,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:58.594       latency   : target=0, window=0, percentile=100.00%, depth=8
00:31:58.594  
00:31:58.594  Run status group 0 (all jobs):
00:31:58.594     READ: bw=58.5MiB/s (61.3MB/s), 14.5MiB/s-14.7MiB/s (15.2MB/s-15.5MB/s), io=293MiB (307MB), run=5003-5004msec
00:31:58.594   04:22:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1
00:31:58.594   04:22:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:31:58.594   04:22:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:31:58.594   04:22:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:31:58.594   04:22:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:31:58.594   04:22:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:31:58.594   04:22:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:58.594   04:22:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:58.594   04:22:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:58.594   04:22:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:31:58.594   04:22:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:58.594   04:22:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:58.594   04:22:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:58.594   04:22:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:31:58.594   04:22:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1
00:31:58.594   04:22:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1
00:31:58.594   04:22:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:58.594   04:22:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:58.594   04:22:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:58.594   04:22:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:58.594   04:22:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:31:58.594   04:22:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:58.594   04:22:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:58.594   04:22:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:58.594  
00:31:58.594  real	0m24.126s
00:31:58.594  user	4m36.045s
00:31:58.594  sys	0m6.250s
00:31:58.594   04:22:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:58.594   04:22:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:58.594  ************************************
00:31:58.594  END TEST fio_dif_rand_params
00:31:58.594  ************************************
00:31:58.594   04:22:26 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest
00:31:58.594   04:22:26 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:31:58.594   04:22:26 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable
00:31:58.594   04:22:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:31:58.594  ************************************
00:31:58.594  START TEST fio_dif_digest
00:31:58.594  ************************************
00:31:58.594   04:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest
00:31:58.594   04:22:26 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF
00:31:58.594   04:22:26 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files
00:31:58.594   04:22:26 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst
00:31:58.594   04:22:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3
00:31:58.594   04:22:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k
00:31:58.594   04:22:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3
00:31:58.594   04:22:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3
00:31:58.594   04:22:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10
00:31:58.594   04:22:26 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true
00:31:58.594   04:22:26 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true
00:31:58.594   04:22:26 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0
00:31:58.594   04:22:26 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub
00:31:58.594   04:22:26 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@"
00:31:58.594   04:22:26 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0
00:31:58.594   04:22:26 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0
00:31:58.594   04:22:26 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
00:31:58.594   04:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:58.594   04:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:31:58.594  bdev_null0
00:31:58.594   04:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:58.594   04:22:26 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:31:58.594   04:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:58.594   04:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:31:58.594   04:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:58.594   04:22:26 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:31:58.594   04:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:58.594   04:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:31:58.594   04:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:58.594   04:22:26 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:31:58.594   04:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:58.594   04:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:31:58.594  [2024-12-09 04:22:26.906012] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:58.594   04:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:58.594   04:22:26 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62
00:31:58.594    04:22:26 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0
00:31:58.594    04:22:26 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0
00:31:58.594    04:22:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=()
00:31:58.594    04:22:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config
00:31:58.594    04:22:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:31:58.594    04:22:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:31:58.594  {
00:31:58.594    "params": {
00:31:58.594      "name": "Nvme$subsystem",
00:31:58.594      "trtype": "$TEST_TRANSPORT",
00:31:58.594      "traddr": "$NVMF_FIRST_TARGET_IP",
00:31:58.594      "adrfam": "ipv4",
00:31:58.594      "trsvcid": "$NVMF_PORT",
00:31:58.594      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:31:58.594      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:31:58.594      "hdgst": ${hdgst:-false},
00:31:58.594      "ddgst": ${ddgst:-false}
00:31:58.594    },
00:31:58.594    "method": "bdev_nvme_attach_controller"
00:31:58.594  }
00:31:58.594  EOF
00:31:58.594  )")
00:31:58.594   04:22:26 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:31:58.595   04:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:31:58.595   04:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:31:58.595    04:22:26 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf
00:31:58.595   04:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:31:58.595    04:22:26 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file
00:31:58.595   04:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers
00:31:58.595   04:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:31:58.595    04:22:26 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat
00:31:58.595   04:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift
00:31:58.595   04:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib=
00:31:58.595   04:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:31:58.595     04:22:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat
00:31:58.595    04:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:31:58.595    04:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan
00:31:58.595    04:22:26 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 ))
00:31:58.595    04:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:31:58.595    04:22:26 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files ))
00:31:58.595    04:22:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq .
00:31:58.595     04:22:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=,
00:31:58.595     04:22:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:31:58.595    "params": {
00:31:58.595      "name": "Nvme0",
00:31:58.595      "trtype": "tcp",
00:31:58.595      "traddr": "10.0.0.2",
00:31:58.595      "adrfam": "ipv4",
00:31:58.595      "trsvcid": "4420",
00:31:58.595      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:31:58.595      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:31:58.595      "hdgst": true,
00:31:58.595      "ddgst": true
00:31:58.595    },
00:31:58.595    "method": "bdev_nvme_attach_controller"
00:31:58.595  }'
00:31:58.595   04:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib=
00:31:58.595   04:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:31:58.595   04:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:31:58.595    04:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:31:58.595    04:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:31:58.595    04:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:31:58.595   04:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib=
00:31:58.595   04:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:31:58.595   04:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:31:58.595   04:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:31:58.853  filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3
00:31:58.853  ...
00:31:58.853  fio-3.35
00:31:58.853  Starting 3 threads
00:32:11.054  
00:32:11.054  filename0: (groupid=0, jobs=1): err= 0: pid=412754: Mon Dec  9 04:22:37 2024
00:32:11.054    read: IOPS=197, BW=24.7MiB/s (25.9MB/s)(248MiB/10047msec)
00:32:11.054      slat (nsec): min=4423, max=27789, avg=14970.92, stdev=1518.29
00:32:11.054      clat (usec): min=12077, max=53153, avg=15137.16, stdev=1495.51
00:32:11.054       lat (usec): min=12091, max=53168, avg=15152.13, stdev=1495.52
00:32:11.054      clat percentiles (usec):
00:32:11.054       |  1.00th=[12911],  5.00th=[13566], 10.00th=[13960], 20.00th=[14353],
00:32:11.054       | 30.00th=[14615], 40.00th=[14877], 50.00th=[15008], 60.00th=[15270],
00:32:11.054       | 70.00th=[15533], 80.00th=[15926], 90.00th=[16319], 95.00th=[16581],
00:32:11.054       | 99.00th=[17433], 99.50th=[17957], 99.90th=[49021], 99.95th=[53216],
00:32:11.054       | 99.99th=[53216]
00:32:11.054     bw (  KiB/s): min=24832, max=25856, per=32.49%, avg=25384.90, stdev=220.82, samples=20
00:32:11.054     iops        : min=  194, max=  202, avg=198.30, stdev= 1.75, samples=20
00:32:11.054    lat (msec)   : 20=99.75%, 50=0.20%, 100=0.05%
00:32:11.054    cpu          : usr=94.22%, sys=5.28%, ctx=20, majf=0, minf=123
00:32:11.054    IO depths    : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:32:11.054       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:11.054       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:11.054       issued rwts: total=1986,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:32:11.054       latency   : target=0, window=0, percentile=100.00%, depth=3
00:32:11.054  filename0: (groupid=0, jobs=1): err= 0: pid=412755: Mon Dec  9 04:22:37 2024
00:32:11.054    read: IOPS=215, BW=26.9MiB/s (28.2MB/s)(270MiB/10010msec)
00:32:11.054      slat (nsec): min=4331, max=51101, avg=15207.61, stdev=2195.13
00:32:11.054      clat (usec): min=10498, max=20452, avg=13908.07, stdev=912.65
00:32:11.054       lat (usec): min=10513, max=20466, avg=13923.28, stdev=912.59
00:32:11.054      clat percentiles (usec):
00:32:11.054       |  1.00th=[11731],  5.00th=[12387], 10.00th=[12780], 20.00th=[13173],
00:32:11.054       | 30.00th=[13435], 40.00th=[13698], 50.00th=[13960], 60.00th=[14222],
00:32:11.054       | 70.00th=[14353], 80.00th=[14615], 90.00th=[15008], 95.00th=[15270],
00:32:11.054       | 99.00th=[15926], 99.50th=[16319], 99.90th=[20317], 99.95th=[20317],
00:32:11.054       | 99.99th=[20579]
00:32:11.054     bw (  KiB/s): min=26112, max=28160, per=35.28%, avg=27558.40, stdev=471.86, samples=20
00:32:11.054     iops        : min=  204, max=  220, avg=215.30, stdev= 3.69, samples=20
00:32:11.054    lat (msec)   : 20=99.86%, 50=0.14%
00:32:11.054    cpu          : usr=93.37%, sys=5.97%, ctx=96, majf=0, minf=215
00:32:11.054    IO depths    : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:32:11.054       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:11.054       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:11.054       issued rwts: total=2156,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:32:11.054       latency   : target=0, window=0, percentile=100.00%, depth=3
00:32:11.054  filename0: (groupid=0, jobs=1): err= 0: pid=412756: Mon Dec  9 04:22:37 2024
00:32:11.054    read: IOPS=198, BW=24.8MiB/s (26.0MB/s)(249MiB/10046msec)
00:32:11.054      slat (nsec): min=4700, max=33493, avg=15129.47, stdev=1371.99
00:32:11.054      clat (usec): min=11774, max=52745, avg=15105.08, stdev=1557.26
00:32:11.054       lat (usec): min=11789, max=52760, avg=15120.21, stdev=1557.22
00:32:11.054      clat percentiles (usec):
00:32:11.054       |  1.00th=[12911],  5.00th=[13566], 10.00th=[13829], 20.00th=[14222],
00:32:11.054       | 30.00th=[14484], 40.00th=[14746], 50.00th=[15008], 60.00th=[15270],
00:32:11.054       | 70.00th=[15533], 80.00th=[15926], 90.00th=[16319], 95.00th=[16909],
00:32:11.054       | 99.00th=[17957], 99.50th=[18220], 99.90th=[49021], 99.95th=[52691],
00:32:11.054       | 99.99th=[52691]
00:32:11.054     bw (  KiB/s): min=24576, max=26368, per=32.57%, avg=25446.40, stdev=514.69, samples=20
00:32:11.055     iops        : min=  192, max=  206, avg=198.80, stdev= 4.02, samples=20
00:32:11.055    lat (msec)   : 20=99.75%, 50=0.20%, 100=0.05%
00:32:11.055    cpu          : usr=94.33%, sys=5.18%, ctx=24, majf=0, minf=92
00:32:11.055    IO depths    : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:32:11.055       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:11.055       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:11.055       issued rwts: total=1990,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:32:11.055       latency   : target=0, window=0, percentile=100.00%, depth=3
00:32:11.055  
00:32:11.055  Run status group 0 (all jobs):
00:32:11.055     READ: bw=76.3MiB/s (80.0MB/s), 24.7MiB/s-26.9MiB/s (25.9MB/s-28.2MB/s), io=767MiB (804MB), run=10010-10047msec
00:32:11.055   04:22:38 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0
00:32:11.055   04:22:38 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub
00:32:11.055   04:22:38 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@"
00:32:11.055   04:22:38 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0
00:32:11.055   04:22:38 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0
00:32:11.055   04:22:38 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:32:11.055   04:22:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:11.055   04:22:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:32:11.055   04:22:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:11.055   04:22:38 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:32:11.055   04:22:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:11.055   04:22:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:32:11.055   04:22:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:11.055  
00:32:11.055  real	0m11.224s
00:32:11.055  user	0m29.646s
00:32:11.055  sys	0m1.913s
00:32:11.055   04:22:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:11.055   04:22:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:32:11.055  ************************************
00:32:11.055  END TEST fio_dif_digest
00:32:11.055  ************************************
00:32:11.055   04:22:38 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT
00:32:11.055   04:22:38 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini
00:32:11.055   04:22:38 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:11.055   04:22:38 nvmf_dif -- nvmf/common.sh@121 -- # sync
00:32:11.055   04:22:38 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:11.055   04:22:38 nvmf_dif -- nvmf/common.sh@124 -- # set +e
00:32:11.055   04:22:38 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:11.055   04:22:38 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:11.055  rmmod nvme_tcp
00:32:11.055  rmmod nvme_fabrics
00:32:11.055  rmmod nvme_keyring
00:32:11.055   04:22:38 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:11.055   04:22:38 nvmf_dif -- nvmf/common.sh@128 -- # set -e
00:32:11.055   04:22:38 nvmf_dif -- nvmf/common.sh@129 -- # return 0
00:32:11.055   04:22:38 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 406696 ']'
00:32:11.055   04:22:38 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 406696
00:32:11.055   04:22:38 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 406696 ']'
00:32:11.055   04:22:38 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 406696
00:32:11.055    04:22:38 nvmf_dif -- common/autotest_common.sh@959 -- # uname
00:32:11.055   04:22:38 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:11.055    04:22:38 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 406696
00:32:11.055   04:22:38 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:32:11.055   04:22:38 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:32:11.055   04:22:38 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 406696'
00:32:11.055  killing process with pid 406696
00:32:11.055   04:22:38 nvmf_dif -- common/autotest_common.sh@973 -- # kill 406696
00:32:11.055   04:22:38 nvmf_dif -- common/autotest_common.sh@978 -- # wait 406696
00:32:11.055   04:22:38 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']'
00:32:11.055   04:22:38 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:32:11.055  Waiting for block devices as requested
00:32:11.055  0000:88:00.0 (8086 0a54): vfio-pci -> nvme
00:32:11.313  0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:32:11.313  0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:32:11.571  0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:32:11.571  0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:32:11.571  0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:32:11.571  0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:32:11.828  0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:32:11.828  0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:32:11.828  0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:32:11.828  0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:32:12.085  0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:32:12.085  0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:32:12.085  0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:32:12.085  0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:32:12.343  0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:32:12.343  0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:32:12.343   04:22:40 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:32:12.343   04:22:40 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:32:12.343   04:22:40 nvmf_dif -- nvmf/common.sh@297 -- # iptr
00:32:12.343   04:22:40 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save
00:32:12.343   04:22:40 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:32:12.343   04:22:40 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore
00:32:12.343   04:22:40 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:32:12.343   04:22:40 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns
00:32:12.343   04:22:40 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:12.343   04:22:40 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:32:12.343    04:22:40 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:14.882   04:22:42 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:32:14.882  
00:32:14.882  real	1m7.231s
00:32:14.882  user	6m33.755s
00:32:14.882  sys	0m17.585s
00:32:14.882   04:22:42 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:14.882   04:22:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:32:14.882  ************************************
00:32:14.882  END TEST nvmf_dif
00:32:14.882  ************************************
00:32:14.882   04:22:42  -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh
00:32:14.882   04:22:42  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:32:14.882   04:22:42  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:14.882   04:22:42  -- common/autotest_common.sh@10 -- # set +x
00:32:14.882  ************************************
00:32:14.882  START TEST nvmf_abort_qd_sizes
00:32:14.882  ************************************
00:32:14.882   04:22:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh
00:32:14.882  * Looking for test storage...
00:32:14.882  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:32:14.882     04:22:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version
00:32:14.882     04:22:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-:
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-:
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<'
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 ))
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:32:14.882     04:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1
00:32:14.882     04:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1
00:32:14.882     04:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:32:14.882     04:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1
00:32:14.882     04:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2
00:32:14.882     04:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2
00:32:14.882     04:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:32:14.882     04:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:32:14.882  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:14.882  		--rc genhtml_branch_coverage=1
00:32:14.882  		--rc genhtml_function_coverage=1
00:32:14.882  		--rc genhtml_legend=1
00:32:14.882  		--rc geninfo_all_blocks=1
00:32:14.882  		--rc geninfo_unexecuted_blocks=1
00:32:14.882  		
00:32:14.882  		'
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:32:14.882  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:14.882  		--rc genhtml_branch_coverage=1
00:32:14.882  		--rc genhtml_function_coverage=1
00:32:14.882  		--rc genhtml_legend=1
00:32:14.882  		--rc geninfo_all_blocks=1
00:32:14.882  		--rc geninfo_unexecuted_blocks=1
00:32:14.882  		
00:32:14.882  		'
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:32:14.882  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:14.882  		--rc genhtml_branch_coverage=1
00:32:14.882  		--rc genhtml_function_coverage=1
00:32:14.882  		--rc genhtml_legend=1
00:32:14.882  		--rc geninfo_all_blocks=1
00:32:14.882  		--rc geninfo_unexecuted_blocks=1
00:32:14.882  		
00:32:14.882  		'
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:32:14.882  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:14.882  		--rc genhtml_branch_coverage=1
00:32:14.882  		--rc genhtml_function_coverage=1
00:32:14.882  		--rc genhtml_legend=1
00:32:14.882  		--rc geninfo_all_blocks=1
00:32:14.882  		--rc geninfo_unexecuted_blocks=1
00:32:14.882  		
00:32:14.882  		'
00:32:14.882   04:22:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:32:14.882     04:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:32:14.882     04:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:32:14.882     04:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob
00:32:14.882     04:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:32:14.882     04:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:14.882     04:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:32:14.882      04:22:43 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:14.882      04:22:43 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:14.882      04:22:43 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:14.882      04:22:43 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH
00:32:14.882      04:22:43 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:32:14.882  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:32:14.882    04:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0
00:32:14.882   04:22:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit
00:32:14.882   04:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:32:14.882   04:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:32:14.882   04:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs
00:32:14.882   04:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no
00:32:14.882   04:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns
00:32:14.883   04:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:14.883   04:22:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:32:14.883    04:22:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:14.883   04:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:32:14.883   04:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:32:14.883   04:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable
00:32:14.883   04:22:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=()
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=()
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=()
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=()
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=()
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=()
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=()
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:32:16.791  Found 0000:0a:00.0 (0x8086 - 0x159b)
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:32:16.791  Found 0000:0a:00.1 (0x8086 - 0x159b)
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]]
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:32:16.791  Found net devices under 0000:0a:00.0: cvl_0_0
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]]
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:32:16.791  Found net devices under 0000:0a:00.1: cvl_0_1
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
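The discovery loop traced above (common.sh@410–429) globs each PCI device's `net/` directory and then strips the path prefix to get bare interface names. A minimal sketch of that logic, run against a scratch directory instead of the real `/sys/bus/pci` so it works unprivileged (the `cvl_0_*` names and BDFs are taken from the log; the scratch layout is an assumption for illustration):

```shell
#!/usr/bin/env bash
# Build a fake sysfs tree mimicking the two E810 ports seen in the log.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:0a:00.0/net/cvl_0_0" "$sysfs/0000:0a:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:0a:00.0 0000:0a:00.1; do
  pci_net_devs=("$sysfs/$pci/net/"*)        # full paths to net interfaces
  pci_net_devs=("${pci_net_devs[@]##*/}")   # strip everything up to last '/'
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
  net_devs+=("${pci_net_devs[@]}")
done
```

With two interfaces collected, the `(( 2 == 0 ))` guard at common.sh@432 passes and `is_hw=yes` is set.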
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:32:16.791   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:32:16.792   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:32:16.792   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:32:16.792   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:32:17.052   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:32:17.052   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:32:17.052   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:32:17.052   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:32:17.052   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:32:17.052   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:32:17.052   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:32:17.052   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:32:17.052   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
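The `ipts` call at common.sh@287 expands (at @790) into a plain `iptables` invocation with a comment recording its own arguments, which lets teardown later find and delete exactly the rules the test added. A sketch of that wrapper, inferred from the traced expansion (the helper name `ipts` is from the log; its one-line body here is an assumption consistent with that trace):

```shell
#!/usr/bin/env bash
# Forward all arguments to iptables, tagging the rule with a comment that
# replays the original arguments under an SPDK_NVMF: prefix.
ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
```

Cleanup can then do `iptables-save | grep SPDK_NVMF:` and replay each match with `-D` to remove only test-owned rules.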
00:32:17.052   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:32:17.052  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:17.052  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.326 ms
00:32:17.052  
00:32:17.052  --- 10.0.0.2 ping statistics ---
00:32:17.052  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:17.052  rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms
00:32:17.052   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:32:17.052  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:17.052  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms
00:32:17.052  
00:32:17.052  --- 10.0.0.1 ping statistics ---
00:32:17.052  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:17.052  rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms
00:32:17.053   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:17.053   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0
00:32:17.053   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']'
00:32:17.053   04:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:32:17.989  0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:32:17.989  0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:32:18.247  0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:32:18.247  0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:32:18.247  0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:32:18.247  0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:32:18.247  0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:32:18.247  0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:32:18.247  0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:32:18.247  0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:32:18.247  0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:32:18.247  0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:32:18.247  0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:32:18.247  0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:32:18.247  0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:32:18.247  0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:32:19.185  0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:32:19.185   04:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:19.185   04:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:32:19.185   04:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:32:19.185   04:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:19.185   04:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:32:19.185   04:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:32:19.443   04:22:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf
00:32:19.443   04:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:32:19.443   04:22:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:19.443   04:22:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:32:19.443   04:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=417679
00:32:19.443   04:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf
00:32:19.443   04:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 417679
00:32:19.443   04:22:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 417679 ']'
00:32:19.443   04:22:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:19.443   04:22:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:19.443   04:22:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:19.443  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:19.443   04:22:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:19.443   04:22:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:32:19.443  [2024-12-09 04:22:47.819108] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:32:19.443  [2024-12-09 04:22:47.819183] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:19.443  [2024-12-09 04:22:47.889601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:32:19.443  [2024-12-09 04:22:47.947874] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:19.443  [2024-12-09 04:22:47.947927] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:19.443  [2024-12-09 04:22:47.947956] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:32:19.443  [2024-12-09 04:22:47.947967] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:32:19.443  [2024-12-09 04:22:47.947978] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:19.443  [2024-12-09 04:22:47.949472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:32:19.443  [2024-12-09 04:22:47.949534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:32:19.443  [2024-12-09 04:22:47.949537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:32:19.443  [2024-12-09 04:22:47.949511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:32:19.701   04:22:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:19.701   04:22:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0
00:32:19.701   04:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:32:19.701   04:22:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:19.701   04:22:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:32:19.701   04:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:19.701   04:22:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT
00:32:19.701   04:22:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes
00:32:19.701    04:22:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace
00:32:19.701    04:22:48 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs
00:32:19.701    04:22:48 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes
00:32:19.701    04:22:48 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:88:00.0 ]]
00:32:19.701    04:22:48 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]})
00:32:19.701    04:22:48 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}"
00:32:19.701    04:22:48 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]]
00:32:19.701     04:22:48 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s
00:32:19.701    04:22:48 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]]
00:32:19.701    04:22:48 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf")
00:32:19.701    04:22:48 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 ))
00:32:19.701    04:22:48 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:88:00.0
00:32:19.701   04:22:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 ))
00:32:19.701   04:22:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0
00:32:19.701   04:22:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target
00:32:19.701   04:22:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:32:19.701   04:22:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:19.701   04:22:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:32:19.701  ************************************
00:32:19.701  START TEST spdk_target_abort
00:32:19.701  ************************************
00:32:19.701   04:22:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target
00:32:19.701   04:22:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target
00:32:19.701   04:22:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target
00:32:19.701   04:22:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:19.701   04:22:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:32:22.979  spdk_targetn1
00:32:22.979   04:22:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:22.979   04:22:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:32:22.979   04:22:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:22.979   04:22:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:32:22.979  [2024-12-09 04:22:50.977130] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:22.979   04:22:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:22.979   04:22:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
00:32:22.979   04:22:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:22.979   04:22:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:32:22.979   04:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:22.979   04:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
00:32:22.979   04:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:22.979   04:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:32:22.979   04:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:22.979   04:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
00:32:22.979   04:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:22.979   04:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:32:22.979  [2024-12-09 04:22:51.018461] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:32:22.979   04:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:22.979   04:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn
00:32:22.979   04:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:32:22.979   04:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:32:22.979   04:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2
00:32:22.979   04:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:32:22.979   04:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:32:22.979   04:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:32:22.979   04:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r
00:32:22.979   04:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:32:22.979   04:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:32:22.979   04:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp
00:32:22.979   04:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:32:22.979   04:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4'
00:32:22.979   04:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:32:22.979   04:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2'
00:32:22.979   04:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:32:22.979   04:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:32:22.980   04:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:32:22.980   04:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
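The five loop iterations above assemble the transport ID string passed to the abort example one `key:value` pair at a time, using indirect expansion to read each variable named in the list. A condensed sketch of that assembly (a simplification of the `rabort` loop traced at abort_qd_sizes.sh@28–29, not its verbatim source):

```shell
#!/usr/bin/env bash
# Assemble an SPDK transport ID string such as
# 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:...'
build_target() {
  local trtype=$1 adrfam=$2 traddr=$3 trsvcid=$4 subnqn=$5
  local target="" r
  for r in trtype adrfam traddr trsvcid subnqn; do
    target="${target:+$target }$r:${!r}"   # ${!r}: value of the variable named $r
  done
  printf '%s\n' "$target"
}
build_target tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn
```

The resulting string is what `-r` receives in the `build/examples/abort` invocations that follow.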
00:32:22.980   04:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:32:22.980   04:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:32:26.253  Initializing NVMe Controllers
00:32:26.253  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:32:26.253  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:32:26.253  Initialization complete. Launching workers.
00:32:26.253  NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12284, failed: 0
00:32:26.253  CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1198, failed to submit 11086
00:32:26.253  	 success 740, unsuccessful 458, failed 0
00:32:26.253   04:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:32:26.253   04:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:32:29.529  Initializing NVMe Controllers
00:32:29.529  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:32:29.529  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:32:29.529  Initialization complete. Launching workers.
00:32:29.529  NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8649, failed: 0
00:32:29.529  CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1218, failed to submit 7431
00:32:29.529  	 success 315, unsuccessful 903, failed 0
00:32:29.529   04:22:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:32:29.529   04:22:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:32:32.807  Initializing NVMe Controllers
00:32:32.807  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:32:32.807  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:32:32.807  Initialization complete. Launching workers.
00:32:32.807  NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31184, failed: 0
00:32:32.807  CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2638, failed to submit 28546
00:32:32.807  	 success 520, unsuccessful 2118, failed 0
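The counters reported after each run satisfy two identities: every completed I/O either had an abort submitted for it or the abort failed to submit, and every submitted abort was either successful or unsuccessful. A quick sanity check against the qd=64 numbers above:

```shell
#!/usr/bin/env bash
# qd=64 run: 31184 I/O completed, 2638 aborts submitted, 28546 failed to
# submit; of the submitted aborts, 520 succeeded and 2118 did not.
completed=31184 submitted=2638 not_submitted=28546 ok=520 bad=2118
[ $((submitted + not_submitted)) -eq "$completed" ] || exit 1
[ $((ok + bad)) -eq "$submitted" ] || exit 1
echo counters consistent
```

The same identities hold for the qd=4 run (740 + 458 = 1198; 1198 + 11086 = 12284) and the qd=24 run.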
00:32:32.807   04:23:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn
00:32:32.807   04:23:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:32.807   04:23:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:32:32.807   04:23:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:32.807   04:23:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target
00:32:32.807   04:23:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:32.807   04:23:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:32:33.791   04:23:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:33.791   04:23:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 417679
00:32:33.791   04:23:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 417679 ']'
00:32:33.791   04:23:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 417679
00:32:33.791    04:23:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname
00:32:33.791   04:23:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:33.791    04:23:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 417679
00:32:33.791   04:23:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:32:33.791   04:23:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:32:33.791   04:23:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 417679'
00:32:33.791  killing process with pid 417679
00:32:33.791   04:23:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 417679
00:32:33.791   04:23:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 417679
00:32:34.050  
00:32:34.050  real	0m14.388s
00:32:34.050  user	0m54.446s
00:32:34.050  sys	0m2.757s
00:32:34.050   04:23:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:34.050   04:23:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:32:34.050  ************************************
00:32:34.050  END TEST spdk_target_abort
00:32:34.050  ************************************
00:32:34.050   04:23:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target
00:32:34.050   04:23:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:32:34.050   04:23:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:34.050   04:23:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:32:34.050  ************************************
00:32:34.050  START TEST kernel_target_abort
00:32:34.050  ************************************
00:32:34.050   04:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target
00:32:34.050    04:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip
00:32:34.050    04:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip
00:32:34.050    04:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=()
00:32:34.050    04:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates
00:32:34.050    04:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:34.050    04:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:34.050    04:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:32:34.050    04:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:34.050    04:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:32:34.050    04:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:32:34.050    04:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1
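The `get_main_ns_ip` trace above picks which address the kernel target should listen on: an associative array maps the transport to the *name* of the variable holding the address, and indirect expansion dereferences it. For TCP in this topology that is `NVMF_INITIATOR_IP` (10.0.0.1), since the kernel target runs in the default namespace alongside the initiator-side interface. A condensed sketch of that selection (simplified from the common.sh@769–783 trace):

```shell
#!/usr/bin/env bash
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2

get_main_ns_ip() {
  local transport=$1 ip
  declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
  ip=${ip_candidates[$transport]}   # name of the variable holding the address
  echo "${!ip}"                     # indirect expansion -> the address itself
}

get_main_ns_ip tcp   # prints 10.0.0.1
```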
00:32:34.050   04:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1
00:32:34.050   04:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1
00:32:34.050   04:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet
00:32:34.050   04:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:32:34.050   04:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:32:34.050   04:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:32:34.050   04:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme
00:32:34.050   04:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]]
00:32:34.050   04:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet
00:32:34.050   04:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:32:34.050   04:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:32:35.423  Waiting for block devices as requested
00:32:35.423  0000:88:00.0 (8086 0a54): vfio-pci -> nvme
00:32:35.423  0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:32:35.680  0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:32:35.680  0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:32:35.680  0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:32:35.938  0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:32:35.938  0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:32:35.938  0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:32:35.938  0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:32:36.197  0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:32:36.197  0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:32:36.197  0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:32:36.197  0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:32:36.456  0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:32:36.456  0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:32:36.456  0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:32:36.714  0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:32:36.714   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:32:36.714   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:32:36.714   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:32:36.714   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:32:36.714   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:32:36.714   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:32:36.714   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:32:36.714   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:32:36.714   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:32:36.714  No valid GPT data, bailing
00:32:36.714    04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:32:36.714   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt=
00:32:36.714   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1
00:32:36.714   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:32:36.714   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]]
00:32:36.714   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:32:36.714   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:32:36.714   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:32:36.714   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:32:36.714   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1
00:32:36.714   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:32:36.714   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1
00:32:36.714   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:32:36.714   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp
00:32:36.714   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420
00:32:36.714   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4
00:32:36.714   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
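The bare `echo` lines at common.sh@693–702 look incomplete because xtrace suppresses redirections; each one writes into the kernel nvmet configfs tree created by the `mkdir` calls just before. A sketch of the full sequence with its likely redirect targets restored (the configfs attribute names below are standard kernel nvmet attributes, but the mapping of each echo to its file is inferred, not traced), run against a scratch directory instead of `/sys/kernel/config/nvmet` so it works unprivileged:

```shell
#!/usr/bin/env bash
nvmet=$(mktemp -d)   # stand-in for /sys/kernel/config/nvmet
subnqn=nqn.2016-06.io.spdk:testnqn

mkdir -p "$nvmet/subsystems/$subnqn/namespaces/1" \
         "$nvmet/ports/1/subsystems"               # configfs creates this itself

echo "SPDK-$subnqn" > "$nvmet/subsystems/$subnqn/attr_serial"
echo 1              > "$nvmet/subsystems/$subnqn/attr_allow_any_host"
echo /dev/nvme0n1   > "$nvmet/subsystems/$subnqn/namespaces/1/device_path"
echo 1              > "$nvmet/subsystems/$subnqn/namespaces/1/enable"
echo 10.0.0.1       > "$nvmet/ports/1/addr_traddr"
echo tcp            > "$nvmet/ports/1/addr_trtype"
echo 4420           > "$nvmet/ports/1/addr_trsvcid"
echo ipv4           > "$nvmet/ports/1/addr_adrfam"

# Link the subsystem under the port: this is what makes the kernel target
# actually listen, matching the ln -s at common.sh@705.
ln -s "$nvmet/subsystems/$subnqn" "$nvmet/ports/1/subsystems/"
```

Once the symlink lands in real configfs, the `nvme discover` that follows can see the subsystem on 10.0.0.1:4420.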
00:32:36.714   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420
00:32:36.971  
00:32:36.971  Discovery Log Number of Records 2, Generation counter 2
00:32:36.971  =====Discovery Log Entry 0======
00:32:36.971  trtype:  tcp
00:32:36.971  adrfam:  ipv4
00:32:36.971  subtype: current discovery subsystem
00:32:36.971  treq:    not specified, sq flow control disable supported
00:32:36.971  portid:  1
00:32:36.971  trsvcid: 4420
00:32:36.971  subnqn:  nqn.2014-08.org.nvmexpress.discovery
00:32:36.971  traddr:  10.0.0.1
00:32:36.971  eflags:  none
00:32:36.971  sectype: none
00:32:36.971  =====Discovery Log Entry 1======
00:32:36.971  trtype:  tcp
00:32:36.971  adrfam:  ipv4
00:32:36.971  subtype: nvme subsystem
00:32:36.971  treq:    not specified, sq flow control disable supported
00:32:36.971  portid:  1
00:32:36.971  trsvcid: 4420
00:32:36.971  subnqn:  nqn.2016-06.io.spdk:testnqn
00:32:36.971  traddr:  10.0.0.1
00:32:36.971  eflags:  none
00:32:36.971  sectype: none
00:32:36.971   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn
00:32:36.971   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:32:36.971   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:32:36.971   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1
00:32:36.971   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:32:36.971   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:32:36.971   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:32:36.971   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r
00:32:36.971   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:32:36.971   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:32:36.971   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp
00:32:36.971   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:32:36.971   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4'
00:32:36.971   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:32:36.971   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1'
00:32:36.971   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:32:36.971   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420'
00:32:36.971   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:32:36.971   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:32:36.971   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:32:36.971   04:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:32:40.250  Initializing NVMe Controllers
00:32:40.250  Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:32:40.250  Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:32:40.250  Initialization complete. Launching workers.
00:32:40.250  NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 57193, failed: 0
00:32:40.250  CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 57193, failed to submit 0
00:32:40.250  	 success 0, unsuccessful 57193, failed 0
00:32:40.250   04:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:32:40.250   04:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:32:43.532  Initializing NVMe Controllers
00:32:43.532  Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:32:43.532  Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:32:43.532  Initialization complete. Launching workers.
00:32:43.532  NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 99341, failed: 0
00:32:43.532  CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25038, failed to submit 74303
00:32:43.532  	 success 0, unsuccessful 25038, failed 0
00:32:43.532   04:23:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:32:43.532   04:23:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:32:46.810  Initializing NVMe Controllers
00:32:46.810  Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:32:46.810  Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:32:46.810  Initialization complete. Launching workers.
00:32:46.810  NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 95430, failed: 0
00:32:46.810  CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23862, failed to submit 71568
00:32:46.810  	 success 0, unsuccessful 23862, failed 0
00:32:46.810   04:23:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target
00:32:46.810   04:23:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:32:46.810   04:23:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0
00:32:46.810   04:23:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:32:46.810   04:23:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:32:46.810   04:23:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:32:46.810   04:23:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:32:46.810   04:23:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*)
00:32:46.810   04:23:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet
00:32:46.810   04:23:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:32:47.742  0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:32:47.742  0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:32:47.742  0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:32:47.742  0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:32:47.742  0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:32:47.742  0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:32:47.742  0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:32:47.742  0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:32:47.742  0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:32:47.742  0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:32:47.742  0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:32:47.742  0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:32:47.742  0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:32:47.742  0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:32:47.742  0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:32:47.742  0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:32:48.677  0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:32:48.677  
00:32:48.677  real	0m14.588s
00:32:48.677  user	0m6.818s
00:32:48.677  sys	0m3.250s
00:32:48.677   04:23:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:48.677   04:23:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x
00:32:48.677  ************************************
00:32:48.677  END TEST kernel_target_abort
00:32:48.677  ************************************
00:32:48.677   04:23:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:32:48.677   04:23:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini
00:32:48.677   04:23:17 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:48.677   04:23:17 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync
00:32:48.677   04:23:17 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:48.677   04:23:17 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e
00:32:48.677   04:23:17 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:48.678   04:23:17 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:48.678  rmmod nvme_tcp
00:32:48.678  rmmod nvme_fabrics
00:32:48.678  rmmod nvme_keyring
00:32:48.678   04:23:17 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:48.678   04:23:17 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e
00:32:48.678   04:23:17 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0
00:32:48.678   04:23:17 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 417679 ']'
00:32:48.678   04:23:17 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 417679
00:32:48.678   04:23:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 417679 ']'
00:32:48.678   04:23:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 417679
00:32:48.678  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (417679) - No such process
00:32:48.678   04:23:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 417679 is not found'
00:32:48.678  Process with pid 417679 is not found
00:32:48.678   04:23:17 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']'
00:32:48.678   04:23:17 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:32:50.054  Waiting for block devices as requested
00:32:50.054  0000:88:00.0 (8086 0a54): vfio-pci -> nvme
00:32:50.313  0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:32:50.313  0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:32:50.313  0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:32:50.572  0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:32:50.572  0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:32:50.572  0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:32:50.572  0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:32:50.831  0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:32:50.831  0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:32:50.831  0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:32:50.831  0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:32:51.090  0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:32:51.090  0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:32:51.090  0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:32:51.090  0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:32:51.370  0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:32:51.370   04:23:19 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:32:51.370   04:23:19 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:32:51.370   04:23:19 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr
00:32:51.370   04:23:19 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save
00:32:51.370   04:23:19 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:32:51.370   04:23:19 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore
00:32:51.371   04:23:19 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:32:51.371   04:23:19 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns
00:32:51.371   04:23:19 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:51.371   04:23:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:32:51.371    04:23:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:53.343   04:23:21 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:32:53.343  
00:32:53.343  real	0m38.893s
00:32:53.343  user	1m3.602s
00:32:53.343  sys	0m9.671s
00:32:53.343   04:23:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:53.343   04:23:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:32:53.343  ************************************
00:32:53.343  END TEST nvmf_abort_qd_sizes
00:32:53.343  ************************************
00:32:53.602   04:23:21  -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh
00:32:53.602   04:23:21  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:32:53.602   04:23:21  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:53.603   04:23:21  -- common/autotest_common.sh@10 -- # set +x
00:32:53.603  ************************************
00:32:53.603  START TEST keyring_file
00:32:53.603  ************************************
00:32:53.603   04:23:21 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh
00:32:53.603  * Looking for test storage...
00:32:53.603  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring
00:32:53.603    04:23:22 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:32:53.603     04:23:22 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:32:53.603     04:23:22 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version
00:32:53.603    04:23:22 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:32:53.603    04:23:22 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:32:53.603    04:23:22 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l
00:32:53.603    04:23:22 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l
00:32:53.603    04:23:22 keyring_file -- scripts/common.sh@336 -- # IFS=.-:
00:32:53.603    04:23:22 keyring_file -- scripts/common.sh@336 -- # read -ra ver1
00:32:53.603    04:23:22 keyring_file -- scripts/common.sh@337 -- # IFS=.-:
00:32:53.603    04:23:22 keyring_file -- scripts/common.sh@337 -- # read -ra ver2
00:32:53.603    04:23:22 keyring_file -- scripts/common.sh@338 -- # local 'op=<'
00:32:53.603    04:23:22 keyring_file -- scripts/common.sh@340 -- # ver1_l=2
00:32:53.603    04:23:22 keyring_file -- scripts/common.sh@341 -- # ver2_l=1
00:32:53.603    04:23:22 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:32:53.603    04:23:22 keyring_file -- scripts/common.sh@344 -- # case "$op" in
00:32:53.603    04:23:22 keyring_file -- scripts/common.sh@345 -- # : 1
00:32:53.603    04:23:22 keyring_file -- scripts/common.sh@364 -- # (( v = 0 ))
00:32:53.603    04:23:22 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:32:53.603     04:23:22 keyring_file -- scripts/common.sh@365 -- # decimal 1
00:32:53.603     04:23:22 keyring_file -- scripts/common.sh@353 -- # local d=1
00:32:53.603     04:23:22 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:32:53.603     04:23:22 keyring_file -- scripts/common.sh@355 -- # echo 1
00:32:53.603    04:23:22 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1
00:32:53.603     04:23:22 keyring_file -- scripts/common.sh@366 -- # decimal 2
00:32:53.603     04:23:22 keyring_file -- scripts/common.sh@353 -- # local d=2
00:32:53.603     04:23:22 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:32:53.603     04:23:22 keyring_file -- scripts/common.sh@355 -- # echo 2
00:32:53.603    04:23:22 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2
00:32:53.603    04:23:22 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:32:53.603    04:23:22 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:32:53.603    04:23:22 keyring_file -- scripts/common.sh@368 -- # return 0
00:32:53.603    04:23:22 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:32:53.603    04:23:22 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:32:53.603  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:53.603  		--rc genhtml_branch_coverage=1
00:32:53.603  		--rc genhtml_function_coverage=1
00:32:53.603  		--rc genhtml_legend=1
00:32:53.603  		--rc geninfo_all_blocks=1
00:32:53.603  		--rc geninfo_unexecuted_blocks=1
00:32:53.603  		
00:32:53.603  		'
00:32:53.603    04:23:22 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:32:53.603  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:53.603  		--rc genhtml_branch_coverage=1
00:32:53.603  		--rc genhtml_function_coverage=1
00:32:53.603  		--rc genhtml_legend=1
00:32:53.603  		--rc geninfo_all_blocks=1
00:32:53.603  		--rc geninfo_unexecuted_blocks=1
00:32:53.603  		
00:32:53.603  		'
00:32:53.603    04:23:22 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:32:53.603  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:53.603  		--rc genhtml_branch_coverage=1
00:32:53.603  		--rc genhtml_function_coverage=1
00:32:53.603  		--rc genhtml_legend=1
00:32:53.603  		--rc geninfo_all_blocks=1
00:32:53.603  		--rc geninfo_unexecuted_blocks=1
00:32:53.603  		
00:32:53.603  		'
00:32:53.603    04:23:22 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:32:53.603  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:53.603  		--rc genhtml_branch_coverage=1
00:32:53.603  		--rc genhtml_function_coverage=1
00:32:53.603  		--rc genhtml_legend=1
00:32:53.603  		--rc geninfo_all_blocks=1
00:32:53.603  		--rc geninfo_unexecuted_blocks=1
00:32:53.603  		
00:32:53.603  		'
00:32:53.603   04:23:22 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh
00:32:53.603    04:23:22 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:32:53.603      04:23:22 keyring_file -- nvmf/common.sh@7 -- # uname -s
00:32:53.603     04:23:22 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:32:53.603     04:23:22 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:32:53.603     04:23:22 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:32:53.603     04:23:22 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:32:53.603     04:23:22 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:32:53.603     04:23:22 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:32:53.603     04:23:22 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:32:53.603     04:23:22 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:32:53.603     04:23:22 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:32:53.603      04:23:22 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:32:53.603     04:23:22 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:32:53.603     04:23:22 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:32:53.603     04:23:22 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:32:53.603     04:23:22 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:32:53.603     04:23:22 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:32:53.603     04:23:22 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:32:53.603     04:23:22 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:32:53.603      04:23:22 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob
00:32:53.603      04:23:22 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:32:53.603      04:23:22 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:53.603      04:23:22 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:32:53.603       04:23:22 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:53.603       04:23:22 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:53.603       04:23:22 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:53.603       04:23:22 keyring_file -- paths/export.sh@5 -- # export PATH
00:32:53.603       04:23:22 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:53.603     04:23:22 keyring_file -- nvmf/common.sh@51 -- # : 0
00:32:53.603     04:23:22 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:32:53.603     04:23:22 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:32:53.603     04:23:22 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:32:53.603     04:23:22 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:32:53.603     04:23:22 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:32:53.603     04:23:22 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:32:53.603  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:32:53.603     04:23:22 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:32:53.603     04:23:22 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:32:53.603     04:23:22 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0
00:32:53.603    04:23:22 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock
00:32:53.603   04:23:22 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0
00:32:53.603   04:23:22 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0
00:32:53.603   04:23:22 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff
00:32:53.603   04:23:22 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00
00:32:53.603   04:23:22 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT
00:32:53.603    04:23:22 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0
00:32:53.603    04:23:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path
00:32:53.603    04:23:22 keyring_file -- keyring/common.sh@17 -- # name=key0
00:32:53.603    04:23:22 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff
00:32:53.603    04:23:22 keyring_file -- keyring/common.sh@17 -- # digest=0
00:32:53.603     04:23:22 keyring_file -- keyring/common.sh@18 -- # mktemp
00:32:53.603    04:23:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.z4gh9PWS1I
00:32:53.603    04:23:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0
00:32:53.603    04:23:22 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0
00:32:53.603    04:23:22 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest
00:32:53.603    04:23:22 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1
00:32:53.603    04:23:22 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff
00:32:53.603    04:23:22 keyring_file -- nvmf/common.sh@732 -- # digest=0
00:32:53.603    04:23:22 keyring_file -- nvmf/common.sh@733 -- # python -
00:32:53.603    04:23:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.z4gh9PWS1I
00:32:53.603    04:23:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.z4gh9PWS1I
00:32:53.603   04:23:22 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.z4gh9PWS1I
00:32:53.603    04:23:22 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0
00:32:53.603    04:23:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path
00:32:53.603    04:23:22 keyring_file -- keyring/common.sh@17 -- # name=key1
00:32:53.603    04:23:22 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00
00:32:53.603    04:23:22 keyring_file -- keyring/common.sh@17 -- # digest=0
00:32:53.603     04:23:22 keyring_file -- keyring/common.sh@18 -- # mktemp
00:32:53.861    04:23:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.8RZLZsSLHb
00:32:53.861    04:23:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0
00:32:53.861    04:23:22 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0
00:32:53.861    04:23:22 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest
00:32:53.861    04:23:22 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1
00:32:53.861    04:23:22 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00
00:32:53.861    04:23:22 keyring_file -- nvmf/common.sh@732 -- # digest=0
00:32:53.861    04:23:22 keyring_file -- nvmf/common.sh@733 -- # python -
00:32:53.861    04:23:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.8RZLZsSLHb
00:32:53.861    04:23:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.8RZLZsSLHb
00:32:53.861   04:23:22 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.8RZLZsSLHb
00:32:53.861   04:23:22 keyring_file -- keyring/file.sh@30 -- # tgtpid=423473
00:32:53.861   04:23:22 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:32:53.861   04:23:22 keyring_file -- keyring/file.sh@32 -- # waitforlisten 423473
00:32:53.861   04:23:22 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 423473 ']'
00:32:53.861   04:23:22 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:53.861   04:23:22 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:53.861   04:23:22 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:53.861  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:53.861   04:23:22 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:53.862   04:23:22 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:32:53.862  [2024-12-09 04:23:22.280685] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:32:53.862  [2024-12-09 04:23:22.280800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid423473 ]
00:32:53.862  [2024-12-09 04:23:22.347259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:53.862  [2024-12-09 04:23:22.407504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:32:54.120   04:23:22 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:54.120   04:23:22 keyring_file -- common/autotest_common.sh@868 -- # return 0
00:32:54.120   04:23:22 keyring_file -- keyring/file.sh@33 -- # rpc_cmd
00:32:54.120   04:23:22 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:54.120   04:23:22 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:32:54.120  [2024-12-09 04:23:22.675888] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:54.378  null0
00:32:54.378  [2024-12-09 04:23:22.707945] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:32:54.378  [2024-12-09 04:23:22.708435] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:32:54.378   04:23:22 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:54.378   04:23:22 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
00:32:54.378   04:23:22 keyring_file -- common/autotest_common.sh@652 -- # local es=0
00:32:54.378   04:23:22 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
00:32:54.378   04:23:22 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:32:54.378   04:23:22 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:32:54.378    04:23:22 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:32:54.378   04:23:22 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:32:54.378   04:23:22 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
00:32:54.378   04:23:22 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:54.378   04:23:22 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:32:54.378  [2024-12-09 04:23:22.731988] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists
00:32:54.378  request:
00:32:54.378  {
00:32:54.378  "nqn": "nqn.2016-06.io.spdk:cnode0",
00:32:54.378  "secure_channel": false,
00:32:54.378  "listen_address": {
00:32:54.378  "trtype": "tcp",
00:32:54.378  "traddr": "127.0.0.1",
00:32:54.378  "trsvcid": "4420"
00:32:54.378  },
00:32:54.378  "method": "nvmf_subsystem_add_listener",
00:32:54.378  "req_id": 1
00:32:54.378  }
00:32:54.378  Got JSON-RPC error response
00:32:54.378  response:
00:32:54.378  {
00:32:54.378  "code": -32602,
00:32:54.378  "message": "Invalid parameters"
00:32:54.378  }
00:32:54.378   04:23:22 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:32:54.378   04:23:22 keyring_file -- common/autotest_common.sh@655 -- # es=1
00:32:54.378   04:23:22 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:32:54.378   04:23:22 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:32:54.378   04:23:22 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:32:54.378   04:23:22 keyring_file -- keyring/file.sh@47 -- # bperfpid=423494
00:32:54.378   04:23:22 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z
00:32:54.378   04:23:22 keyring_file -- keyring/file.sh@49 -- # waitforlisten 423494 /var/tmp/bperf.sock
00:32:54.378   04:23:22 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 423494 ']'
00:32:54.378   04:23:22 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:54.379   04:23:22 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:54.379   04:23:22 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:32:54.379  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:32:54.379   04:23:22 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:54.379   04:23:22 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:32:54.379  [2024-12-09 04:23:22.781884] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:32:54.379  [2024-12-09 04:23:22.781967] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid423494 ]
00:32:54.379  [2024-12-09 04:23:22.851435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:54.379  [2024-12-09 04:23:22.912267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:32:54.637   04:23:23 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:54.637   04:23:23 keyring_file -- common/autotest_common.sh@868 -- # return 0
00:32:54.637   04:23:23 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.z4gh9PWS1I
00:32:54.637   04:23:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.z4gh9PWS1I
00:32:54.895   04:23:23 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.8RZLZsSLHb
00:32:54.895   04:23:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.8RZLZsSLHb
00:32:55.153    04:23:23 keyring_file -- keyring/file.sh@52 -- # get_key key0
00:32:55.153    04:23:23 keyring_file -- keyring/file.sh@52 -- # jq -r .path
00:32:55.153    04:23:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:32:55.153    04:23:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:32:55.153    04:23:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:32:55.411   04:23:23 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.z4gh9PWS1I == \/\t\m\p\/\t\m\p\.\z\4\g\h\9\P\W\S\1\I ]]
00:32:55.411    04:23:23 keyring_file -- keyring/file.sh@53 -- # get_key key1
00:32:55.411    04:23:23 keyring_file -- keyring/file.sh@53 -- # jq -r .path
00:32:55.411    04:23:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:32:55.411    04:23:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:32:55.411    04:23:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:32:55.669   04:23:24 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.8RZLZsSLHb == \/\t\m\p\/\t\m\p\.\8\R\Z\L\Z\s\S\L\H\b ]]
00:32:55.669    04:23:24 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0
00:32:55.669    04:23:24 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:32:55.669    04:23:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:32:55.669    04:23:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:32:55.669    04:23:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:32:55.669    04:23:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:32:55.927   04:23:24 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 ))
00:32:55.927    04:23:24 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1
00:32:55.927    04:23:24 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:32:55.927    04:23:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:32:55.927    04:23:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:32:55.927    04:23:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:32:55.927    04:23:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:32:56.185   04:23:24 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 ))
00:32:56.185   04:23:24 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:32:56.185   04:23:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:32:56.443  [2024-12-09 04:23:24.904988] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:32:56.443  nvme0n1
00:32:56.443    04:23:24 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0
00:32:56.443    04:23:24 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:32:56.443    04:23:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:32:56.443    04:23:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:32:56.443    04:23:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:32:56.443    04:23:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:32:57.019   04:23:25 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 ))
00:32:57.019    04:23:25 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1
00:32:57.019    04:23:25 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:32:57.019    04:23:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:32:57.019    04:23:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:32:57.019    04:23:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:32:57.019    04:23:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:32:57.019   04:23:25 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 ))
00:32:57.019   04:23:25 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:57.275  Running I/O for 1 seconds...
00:32:58.206      10415.00 IOPS,    40.68 MiB/s
00:32:58.206                                                                                                  Latency(us)
00:32:58.206  
[2024-12-09T03:23:26.782Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:32:58.206  Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:32:58.206  	 nvme0n1             :       1.01   10466.34      40.88       0.00     0.00   12191.86    5631.24   22719.15
00:32:58.206  
[2024-12-09T03:23:26.782Z]  ===================================================================================================================
00:32:58.206  
[2024-12-09T03:23:26.782Z]  Total                       :              10466.34      40.88       0.00     0.00   12191.86    5631.24   22719.15
00:32:58.206  {
00:32:58.206    "results": [
00:32:58.206      {
00:32:58.206        "job": "nvme0n1",
00:32:58.206        "core_mask": "0x2",
00:32:58.206        "workload": "randrw",
00:32:58.206        "percentage": 50,
00:32:58.206        "status": "finished",
00:32:58.206        "queue_depth": 128,
00:32:58.206        "io_size": 4096,
00:32:58.206        "runtime": 1.00742,
00:32:58.206        "iops": 10466.339758988306,
00:32:58.206        "mibps": 40.88413968354807,
00:32:58.206        "io_failed": 0,
00:32:58.206        "io_timeout": 0,
00:32:58.206        "avg_latency_us": 12191.861518574719,
00:32:58.206        "min_latency_us": 5631.241481481481,
00:32:58.206        "max_latency_us": 22719.146666666667
00:32:58.206      }
00:32:58.206    ],
00:32:58.206    "core_count": 1
00:32:58.206  }
00:32:58.206   04:23:26 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:32:58.206   04:23:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:32:58.464    04:23:26 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0
00:32:58.464    04:23:26 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:32:58.464    04:23:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:32:58.464    04:23:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:32:58.464    04:23:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:32:58.464    04:23:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:32:58.722   04:23:27 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 ))
00:32:58.722    04:23:27 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1
00:32:58.722    04:23:27 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:32:58.722    04:23:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:32:58.722    04:23:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:32:58.722    04:23:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:32:58.722    04:23:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:32:58.980   04:23:27 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 ))
00:32:58.980   04:23:27 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:32:58.980   04:23:27 keyring_file -- common/autotest_common.sh@652 -- # local es=0
00:32:58.980   04:23:27 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:32:58.980   04:23:27 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd
00:32:58.980   04:23:27 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:32:58.980    04:23:27 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd
00:32:58.980   04:23:27 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:32:58.980   04:23:27 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:32:58.980   04:23:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:32:59.238  [2024-12-09 04:23:27.775108] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:32:59.238  [2024-12-09 04:23:27.775452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1badde0 (107): Transport endpoint is not connected
00:32:59.238  [2024-12-09 04:23:27.776444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1badde0 (9): Bad file descriptor
00:32:59.238  [2024-12-09 04:23:27.777443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state
00:32:59.238  [2024-12-09 04:23:27.777462] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:32:59.238  [2024-12-09 04:23:27.777476] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted
00:32:59.238  [2024-12-09 04:23:27.777490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state.
00:32:59.238  request:
00:32:59.238  {
00:32:59.238    "name": "nvme0",
00:32:59.238    "trtype": "tcp",
00:32:59.238    "traddr": "127.0.0.1",
00:32:59.238    "adrfam": "ipv4",
00:32:59.238    "trsvcid": "4420",
00:32:59.238    "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:32:59.238    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:32:59.238    "prchk_reftag": false,
00:32:59.238    "prchk_guard": false,
00:32:59.238    "hdgst": false,
00:32:59.238    "ddgst": false,
00:32:59.238    "psk": "key1",
00:32:59.238    "allow_unrecognized_csi": false,
00:32:59.238    "method": "bdev_nvme_attach_controller",
00:32:59.238    "req_id": 1
00:32:59.238  }
00:32:59.238  Got JSON-RPC error response
00:32:59.238  response:
00:32:59.238  {
00:32:59.238    "code": -5,
00:32:59.238    "message": "Input/output error"
00:32:59.238  }
00:32:59.238   04:23:27 keyring_file -- common/autotest_common.sh@655 -- # es=1
00:32:59.238   04:23:27 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:32:59.239   04:23:27 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:32:59.239   04:23:27 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:32:59.239    04:23:27 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0
00:32:59.239    04:23:27 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:32:59.239    04:23:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:32:59.239    04:23:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:32:59.239    04:23:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:32:59.239    04:23:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:32:59.496   04:23:28 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 ))
00:32:59.496    04:23:28 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1
00:32:59.496    04:23:28 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:32:59.496    04:23:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:32:59.496    04:23:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:32:59.496    04:23:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:32:59.496    04:23:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:32:59.754   04:23:28 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 ))
00:32:59.754   04:23:28 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0
00:32:59.754   04:23:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:33:00.321   04:23:28 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1
00:33:00.321   04:23:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1
00:33:00.321    04:23:28 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys
00:33:00.321    04:23:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:33:00.321    04:23:28 keyring_file -- keyring/file.sh@78 -- # jq length
00:33:00.579   04:23:29 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 ))
00:33:00.579   04:23:29 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.z4gh9PWS1I
00:33:00.579   04:23:29 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.z4gh9PWS1I
00:33:00.579   04:23:29 keyring_file -- common/autotest_common.sh@652 -- # local es=0
00:33:00.579   04:23:29 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.z4gh9PWS1I
00:33:00.579   04:23:29 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd
00:33:00.579   04:23:29 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:33:00.579    04:23:29 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd
00:33:00.579   04:23:29 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:33:00.579   04:23:29 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.z4gh9PWS1I
00:33:00.579   04:23:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.z4gh9PWS1I
00:33:00.837  [2024-12-09 04:23:29.382411] keyring.c:  36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.z4gh9PWS1I': 0100660
00:33:00.837  [2024-12-09 04:23:29.382445] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring
00:33:00.837  request:
00:33:00.837  {
00:33:00.837    "name": "key0",
00:33:00.837    "path": "/tmp/tmp.z4gh9PWS1I",
00:33:00.837    "method": "keyring_file_add_key",
00:33:00.837    "req_id": 1
00:33:00.837  }
00:33:00.837  Got JSON-RPC error response
00:33:00.837  response:
00:33:00.837  {
00:33:00.837    "code": -1,
00:33:00.837    "message": "Operation not permitted"
00:33:00.837  }
00:33:00.837   04:23:29 keyring_file -- common/autotest_common.sh@655 -- # es=1
00:33:00.837   04:23:29 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:33:00.837   04:23:29 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:33:00.837   04:23:29 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:33:00.837   04:23:29 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.z4gh9PWS1I
00:33:00.837   04:23:29 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.z4gh9PWS1I
00:33:00.837   04:23:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.z4gh9PWS1I
00:33:01.404   04:23:29 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.z4gh9PWS1I
00:33:01.404    04:23:29 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0
00:33:01.404    04:23:29 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:33:01.404    04:23:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:33:01.404    04:23:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:33:01.404    04:23:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:33:01.404    04:23:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:33:01.404   04:23:29 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 ))
00:33:01.404   04:23:29 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:33:01.404   04:23:29 keyring_file -- common/autotest_common.sh@652 -- # local es=0
00:33:01.404   04:23:29 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:33:01.404   04:23:29 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd
00:33:01.404   04:23:29 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:33:01.404    04:23:29 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd
00:33:01.404   04:23:29 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:33:01.404   04:23:29 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:33:01.404   04:23:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:33:01.661  [2024-12-09 04:23:30.224756] keyring.c:  31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.z4gh9PWS1I': No such file or directory
00:33:01.662  [2024-12-09 04:23:30.224807] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory
00:33:01.662  [2024-12-09 04:23:30.224844] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1
00:33:01.662  [2024-12-09 04:23:30.224857] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device
00:33:01.662  [2024-12-09 04:23:30.224869] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:33:01.662  [2024-12-09 04:23:30.224880] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1)
00:33:01.662  request:
00:33:01.662  {
00:33:01.662    "name": "nvme0",
00:33:01.662    "trtype": "tcp",
00:33:01.662    "traddr": "127.0.0.1",
00:33:01.662    "adrfam": "ipv4",
00:33:01.662    "trsvcid": "4420",
00:33:01.662    "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:33:01.662    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:33:01.662    "prchk_reftag": false,
00:33:01.662    "prchk_guard": false,
00:33:01.662    "hdgst": false,
00:33:01.662    "ddgst": false,
00:33:01.662    "psk": "key0",
00:33:01.662    "allow_unrecognized_csi": false,
00:33:01.662    "method": "bdev_nvme_attach_controller",
00:33:01.662    "req_id": 1
00:33:01.662  }
00:33:01.662  Got JSON-RPC error response
00:33:01.662  response:
00:33:01.662  {
00:33:01.662    "code": -19,
00:33:01.662    "message": "No such device"
00:33:01.662  }
00:33:01.918   04:23:30 keyring_file -- common/autotest_common.sh@655 -- # es=1
00:33:01.918   04:23:30 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:33:01.918   04:23:30 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:33:01.918   04:23:30 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:33:01.918   04:23:30 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0
00:33:01.918   04:23:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:33:02.175    04:23:30 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0
00:33:02.175    04:23:30 keyring_file -- keyring/common.sh@15 -- # local name key digest path
00:33:02.175    04:23:30 keyring_file -- keyring/common.sh@17 -- # name=key0
00:33:02.175    04:23:30 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff
00:33:02.175    04:23:30 keyring_file -- keyring/common.sh@17 -- # digest=0
00:33:02.175     04:23:30 keyring_file -- keyring/common.sh@18 -- # mktemp
00:33:02.175    04:23:30 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.E1Trbdn2qB
00:33:02.175    04:23:30 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0
00:33:02.175    04:23:30 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0
00:33:02.175    04:23:30 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest
00:33:02.175    04:23:30 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1
00:33:02.175    04:23:30 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff
00:33:02.175    04:23:30 keyring_file -- nvmf/common.sh@732 -- # digest=0
00:33:02.175    04:23:30 keyring_file -- nvmf/common.sh@733 -- # python -
00:33:02.175    04:23:30 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.E1Trbdn2qB
00:33:02.175    04:23:30 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.E1Trbdn2qB
00:33:02.175   04:23:30 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.E1Trbdn2qB
00:33:02.175   04:23:30 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.E1Trbdn2qB
00:33:02.175   04:23:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.E1Trbdn2qB
00:33:02.431   04:23:30 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:33:02.431   04:23:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:33:02.689  nvme0n1
00:33:02.689    04:23:31 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0
00:33:02.689    04:23:31 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:33:02.689    04:23:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:33:02.689    04:23:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:33:02.689    04:23:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:33:02.689    04:23:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:33:02.946   04:23:31 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 ))
00:33:02.946   04:23:31 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0
00:33:02.946   04:23:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:33:03.204    04:23:31 keyring_file -- keyring/file.sh@102 -- # get_key key0
00:33:03.204    04:23:31 keyring_file -- keyring/file.sh@102 -- # jq -r .removed
00:33:03.204    04:23:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:33:03.204    04:23:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:33:03.204    04:23:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:33:03.461   04:23:32 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]]
00:33:03.461    04:23:32 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0
00:33:03.461    04:23:32 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:33:03.461    04:23:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:33:03.461    04:23:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:33:03.461    04:23:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:33:03.461    04:23:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:33:04.025   04:23:32 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 ))
00:33:04.025   04:23:32 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:33:04.025   04:23:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:33:04.025    04:23:32 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys
00:33:04.025    04:23:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:33:04.025    04:23:32 keyring_file -- keyring/file.sh@105 -- # jq length
00:33:04.282   04:23:32 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 ))
00:33:04.282   04:23:32 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.E1Trbdn2qB
00:33:04.282   04:23:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.E1Trbdn2qB
00:33:04.539   04:23:33 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.8RZLZsSLHb
00:33:04.539   04:23:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.8RZLZsSLHb
00:33:05.103   04:23:33 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:33:05.103   04:23:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:33:05.360  nvme0n1
00:33:05.360    04:23:33 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config
00:33:05.360    04:23:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config
00:33:05.617   04:23:34 keyring_file -- keyring/file.sh@113 -- # config='{
00:33:05.617    "subsystems": [
00:33:05.617      {
00:33:05.617        "subsystem": "keyring",
00:33:05.617        "config": [
00:33:05.617          {
00:33:05.617            "method": "keyring_file_add_key",
00:33:05.617            "params": {
00:33:05.617              "name": "key0",
00:33:05.617              "path": "/tmp/tmp.E1Trbdn2qB"
00:33:05.617            }
00:33:05.617          },
00:33:05.617          {
00:33:05.617            "method": "keyring_file_add_key",
00:33:05.617            "params": {
00:33:05.617              "name": "key1",
00:33:05.617              "path": "/tmp/tmp.8RZLZsSLHb"
00:33:05.617            }
00:33:05.617          }
00:33:05.617        ]
00:33:05.618      },
00:33:05.618      {
00:33:05.618        "subsystem": "iobuf",
00:33:05.618        "config": [
00:33:05.618          {
00:33:05.618            "method": "iobuf_set_options",
00:33:05.618            "params": {
00:33:05.618              "small_pool_count": 8192,
00:33:05.618              "large_pool_count": 1024,
00:33:05.618              "small_bufsize": 8192,
00:33:05.618              "large_bufsize": 135168,
00:33:05.618              "enable_numa": false
00:33:05.618            }
00:33:05.618          }
00:33:05.618        ]
00:33:05.618      },
00:33:05.618      {
00:33:05.618        "subsystem": "sock",
00:33:05.618        "config": [
00:33:05.618          {
00:33:05.618            "method": "sock_set_default_impl",
00:33:05.618            "params": {
00:33:05.618              "impl_name": "posix"
00:33:05.618            }
00:33:05.618          },
00:33:05.618          {
00:33:05.618            "method": "sock_impl_set_options",
00:33:05.618            "params": {
00:33:05.618              "impl_name": "ssl",
00:33:05.618              "recv_buf_size": 4096,
00:33:05.618              "send_buf_size": 4096,
00:33:05.618              "enable_recv_pipe": true,
00:33:05.618              "enable_quickack": false,
00:33:05.618              "enable_placement_id": 0,
00:33:05.618              "enable_zerocopy_send_server": true,
00:33:05.618              "enable_zerocopy_send_client": false,
00:33:05.618              "zerocopy_threshold": 0,
00:33:05.618              "tls_version": 0,
00:33:05.618              "enable_ktls": false
00:33:05.618            }
00:33:05.618          },
00:33:05.618          {
00:33:05.618            "method": "sock_impl_set_options",
00:33:05.618            "params": {
00:33:05.618              "impl_name": "posix",
00:33:05.618              "recv_buf_size": 2097152,
00:33:05.618              "send_buf_size": 2097152,
00:33:05.618              "enable_recv_pipe": true,
00:33:05.618              "enable_quickack": false,
00:33:05.618              "enable_placement_id": 0,
00:33:05.618              "enable_zerocopy_send_server": true,
00:33:05.618              "enable_zerocopy_send_client": false,
00:33:05.618              "zerocopy_threshold": 0,
00:33:05.618              "tls_version": 0,
00:33:05.618              "enable_ktls": false
00:33:05.618            }
00:33:05.618          }
00:33:05.618        ]
00:33:05.618      },
00:33:05.618      {
00:33:05.618        "subsystem": "vmd",
00:33:05.618        "config": []
00:33:05.618      },
00:33:05.618      {
00:33:05.618        "subsystem": "accel",
00:33:05.618        "config": [
00:33:05.618          {
00:33:05.618            "method": "accel_set_options",
00:33:05.618            "params": {
00:33:05.618              "small_cache_size": 128,
00:33:05.618              "large_cache_size": 16,
00:33:05.618              "task_count": 2048,
00:33:05.618              "sequence_count": 2048,
00:33:05.618              "buf_count": 2048
00:33:05.618            }
00:33:05.618          }
00:33:05.618        ]
00:33:05.618      },
00:33:05.618      {
00:33:05.618        "subsystem": "bdev",
00:33:05.618        "config": [
00:33:05.618          {
00:33:05.618            "method": "bdev_set_options",
00:33:05.618            "params": {
00:33:05.618              "bdev_io_pool_size": 65535,
00:33:05.618              "bdev_io_cache_size": 256,
00:33:05.618              "bdev_auto_examine": true,
00:33:05.618              "iobuf_small_cache_size": 128,
00:33:05.618              "iobuf_large_cache_size": 16
00:33:05.618            }
00:33:05.618          },
00:33:05.618          {
00:33:05.618            "method": "bdev_raid_set_options",
00:33:05.618            "params": {
00:33:05.618              "process_window_size_kb": 1024,
00:33:05.618              "process_max_bandwidth_mb_sec": 0
00:33:05.618            }
00:33:05.618          },
00:33:05.618          {
00:33:05.618            "method": "bdev_iscsi_set_options",
00:33:05.618            "params": {
00:33:05.618              "timeout_sec": 30
00:33:05.618            }
00:33:05.618          },
00:33:05.618          {
00:33:05.618            "method": "bdev_nvme_set_options",
00:33:05.618            "params": {
00:33:05.618              "action_on_timeout": "none",
00:33:05.618              "timeout_us": 0,
00:33:05.618              "timeout_admin_us": 0,
00:33:05.618              "keep_alive_timeout_ms": 10000,
00:33:05.618              "arbitration_burst": 0,
00:33:05.618              "low_priority_weight": 0,
00:33:05.618              "medium_priority_weight": 0,
00:33:05.618              "high_priority_weight": 0,
00:33:05.618              "nvme_adminq_poll_period_us": 10000,
00:33:05.618              "nvme_ioq_poll_period_us": 0,
00:33:05.618              "io_queue_requests": 512,
00:33:05.618              "delay_cmd_submit": true,
00:33:05.618              "transport_retry_count": 4,
00:33:05.618              "bdev_retry_count": 3,
00:33:05.618              "transport_ack_timeout": 0,
00:33:05.618              "ctrlr_loss_timeout_sec": 0,
00:33:05.618              "reconnect_delay_sec": 0,
00:33:05.618              "fast_io_fail_timeout_sec": 0,
00:33:05.618              "disable_auto_failback": false,
00:33:05.618              "generate_uuids": false,
00:33:05.618              "transport_tos": 0,
00:33:05.618              "nvme_error_stat": false,
00:33:05.618              "rdma_srq_size": 0,
00:33:05.618              "io_path_stat": false,
00:33:05.618              "allow_accel_sequence": false,
00:33:05.618              "rdma_max_cq_size": 0,
00:33:05.618              "rdma_cm_event_timeout_ms": 0,
00:33:05.618              "dhchap_digests": [
00:33:05.618                "sha256",
00:33:05.618                "sha384",
00:33:05.618                "sha512"
00:33:05.618              ],
00:33:05.618              "dhchap_dhgroups": [
00:33:05.618                "null",
00:33:05.618                "ffdhe2048",
00:33:05.618                "ffdhe3072",
00:33:05.618                "ffdhe4096",
00:33:05.618                "ffdhe6144",
00:33:05.618                "ffdhe8192"
00:33:05.618              ]
00:33:05.618            }
00:33:05.618          },
00:33:05.618          {
00:33:05.618            "method": "bdev_nvme_attach_controller",
00:33:05.618            "params": {
00:33:05.618              "name": "nvme0",
00:33:05.618              "trtype": "TCP",
00:33:05.618              "adrfam": "IPv4",
00:33:05.618              "traddr": "127.0.0.1",
00:33:05.618              "trsvcid": "4420",
00:33:05.618              "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:33:05.618              "prchk_reftag": false,
00:33:05.618              "prchk_guard": false,
00:33:05.618              "ctrlr_loss_timeout_sec": 0,
00:33:05.618              "reconnect_delay_sec": 0,
00:33:05.619              "fast_io_fail_timeout_sec": 0,
00:33:05.619              "psk": "key0",
00:33:05.619              "hostnqn": "nqn.2016-06.io.spdk:host0",
00:33:05.619              "hdgst": false,
00:33:05.619              "ddgst": false,
00:33:05.619              "multipath": "multipath"
00:33:05.619            }
00:33:05.619          },
00:33:05.619          {
00:33:05.619            "method": "bdev_nvme_set_hotplug",
00:33:05.619            "params": {
00:33:05.619              "period_us": 100000,
00:33:05.619              "enable": false
00:33:05.619            }
00:33:05.619          },
00:33:05.619          {
00:33:05.619            "method": "bdev_wait_for_examine"
00:33:05.619          }
00:33:05.619        ]
00:33:05.619      },
00:33:05.619      {
00:33:05.619        "subsystem": "nbd",
00:33:05.619        "config": []
00:33:05.619      }
00:33:05.619    ]
00:33:05.619  }'
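The configuration captured above is later handed back to a fresh bdevperf instance as `-c /dev/fd/63` (visible a few lines below), which is just bash process substitution around an `echo` of the saved JSON. A minimal sketch of that plumbing, with a one-line placeholder standing in for the real `$config` (the placeholder JSON and variable names here are illustrative, not taken from the test):

```shell
# The test replays the saved config with roughly:
#   bdevperf ... -c <(echo "$config")     # bash expands this to /dev/fd/63
# This sketch demonstrates only the process-substitution plumbing itself;
# the one-line JSON is a placeholder for the real save_config output.
config='{"subsystems": []}'
replayed=$(cat <(echo "$config"))
[ "$replayed" = "$config" ] && echo "config replayed intact"
```

The point of the pattern is that the new process never needs a config file on disk: the kernel exposes the pipe as a `/dev/fd/N` path that bdevperf can open like any other file.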
00:33:05.619   04:23:34 keyring_file -- keyring/file.sh@115 -- # killprocess 423494
00:33:05.619   04:23:34 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 423494 ']'
00:33:05.619   04:23:34 keyring_file -- common/autotest_common.sh@958 -- # kill -0 423494
00:33:05.619    04:23:34 keyring_file -- common/autotest_common.sh@959 -- # uname
00:33:05.619   04:23:34 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:05.619    04:23:34 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 423494
00:33:05.619   04:23:34 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:33:05.619   04:23:34 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:33:05.619   04:23:34 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 423494'
00:33:05.619  killing process with pid 423494
00:33:05.619   04:23:34 keyring_file -- common/autotest_common.sh@973 -- # kill 423494
00:33:05.619  Received shutdown signal, test time was about 1.000000 seconds
00:33:05.619  
00:33:05.619                                                                                                  Latency(us)
00:33:05.619  
[2024-12-09T03:23:34.195Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:33:05.619  
[2024-12-09T03:23:34.195Z]  ===================================================================================================================
00:33:05.619  
[2024-12-09T03:23:34.195Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:33:05.619   04:23:34 keyring_file -- common/autotest_common.sh@978 -- # wait 423494
00:33:05.877   04:23:34 keyring_file -- keyring/file.sh@118 -- # bperfpid=425012
00:33:05.877   04:23:34 keyring_file -- keyring/file.sh@120 -- # waitforlisten 425012 /var/tmp/bperf.sock
00:33:05.877   04:23:34 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 425012 ']'
00:33:05.877   04:23:34 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:05.877   04:23:34 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:05.877   04:23:34 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:33:05.877  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:05.877   04:23:34 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63
00:33:05.877   04:23:34 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:05.877    04:23:34 keyring_file -- keyring/file.sh@116 -- # echo '{
00:33:05.877    "subsystems": [
00:33:05.877      {
00:33:05.877        "subsystem": "keyring",
00:33:05.877        "config": [
00:33:05.877          {
00:33:05.877            "method": "keyring_file_add_key",
00:33:05.877            "params": {
00:33:05.877              "name": "key0",
00:33:05.877              "path": "/tmp/tmp.E1Trbdn2qB"
00:33:05.877            }
00:33:05.877          },
00:33:05.877          {
00:33:05.877            "method": "keyring_file_add_key",
00:33:05.877            "params": {
00:33:05.877              "name": "key1",
00:33:05.877              "path": "/tmp/tmp.8RZLZsSLHb"
00:33:05.877            }
00:33:05.877          }
00:33:05.877        ]
00:33:05.877      },
00:33:05.877      {
00:33:05.877        "subsystem": "iobuf",
00:33:05.877        "config": [
00:33:05.877          {
00:33:05.877            "method": "iobuf_set_options",
00:33:05.877            "params": {
00:33:05.877              "small_pool_count": 8192,
00:33:05.877              "large_pool_count": 1024,
00:33:05.877              "small_bufsize": 8192,
00:33:05.877              "large_bufsize": 135168,
00:33:05.877              "enable_numa": false
00:33:05.877            }
00:33:05.877          }
00:33:05.877        ]
00:33:05.877      },
00:33:05.877      {
00:33:05.877        "subsystem": "sock",
00:33:05.877        "config": [
00:33:05.877          {
00:33:05.877            "method": "sock_set_default_impl",
00:33:05.877            "params": {
00:33:05.877              "impl_name": "posix"
00:33:05.877            }
00:33:05.877          },
00:33:05.877          {
00:33:05.877            "method": "sock_impl_set_options",
00:33:05.877            "params": {
00:33:05.877              "impl_name": "ssl",
00:33:05.877              "recv_buf_size": 4096,
00:33:05.877              "send_buf_size": 4096,
00:33:05.877              "enable_recv_pipe": true,
00:33:05.877              "enable_quickack": false,
00:33:05.877              "enable_placement_id": 0,
00:33:05.877              "enable_zerocopy_send_server": true,
00:33:05.877              "enable_zerocopy_send_client": false,
00:33:05.877              "zerocopy_threshold": 0,
00:33:05.877              "tls_version": 0,
00:33:05.877              "enable_ktls": false
00:33:05.877            }
00:33:05.877          },
00:33:05.877          {
00:33:05.877            "method": "sock_impl_set_options",
00:33:05.877            "params": {
00:33:05.877              "impl_name": "posix",
00:33:05.877              "recv_buf_size": 2097152,
00:33:05.877              "send_buf_size": 2097152,
00:33:05.877              "enable_recv_pipe": true,
00:33:05.877              "enable_quickack": false,
00:33:05.877              "enable_placement_id": 0,
00:33:05.877              "enable_zerocopy_send_server": true,
00:33:05.877              "enable_zerocopy_send_client": false,
00:33:05.877              "zerocopy_threshold": 0,
00:33:05.877              "tls_version": 0,
00:33:05.877              "enable_ktls": false
00:33:05.877            }
00:33:05.877          }
00:33:05.877        ]
00:33:05.877      },
00:33:05.877      {
00:33:05.877        "subsystem": "vmd",
00:33:05.877        "config": []
00:33:05.877      },
00:33:05.877      {
00:33:05.877        "subsystem": "accel",
00:33:05.877        "config": [
00:33:05.877          {
00:33:05.877            "method": "accel_set_options",
00:33:05.877            "params": {
00:33:05.877              "small_cache_size": 128,
00:33:05.877              "large_cache_size": 16,
00:33:05.877              "task_count": 2048,
00:33:05.877              "sequence_count": 2048,
00:33:05.877              "buf_count": 2048
00:33:05.877            }
00:33:05.877          }
00:33:05.877        ]
00:33:05.877      },
00:33:05.877      {
00:33:05.877        "subsystem": "bdev",
00:33:05.877        "config": [
00:33:05.877          {
00:33:05.877            "method": "bdev_set_options",
00:33:05.877            "params": {
00:33:05.877              "bdev_io_pool_size": 65535,
00:33:05.877              "bdev_io_cache_size": 256,
00:33:05.877              "bdev_auto_examine": true,
00:33:05.878              "iobuf_small_cache_size": 128,
00:33:05.878              "iobuf_large_cache_size": 16
00:33:05.878            }
00:33:05.878          },
00:33:05.878          {
00:33:05.878            "method": "bdev_raid_set_options",
00:33:05.878            "params": {
00:33:05.878              "process_window_size_kb": 1024,
00:33:05.878              "process_max_bandwidth_mb_sec": 0
00:33:05.878            }
00:33:05.878          },
00:33:05.878          {
00:33:05.878            "method": "bdev_iscsi_set_options",
00:33:05.878            "params": {
00:33:05.878              "timeout_sec": 30
00:33:05.878            }
00:33:05.878          },
00:33:05.878          {
00:33:05.878            "method": "bdev_nvme_set_options",
00:33:05.878            "params": {
00:33:05.878              "action_on_timeout": "none",
00:33:05.878              "timeout_us": 0,
00:33:05.878              "timeout_admin_us": 0,
00:33:05.878              "keep_alive_timeout_ms": 10000,
00:33:05.878              "arbitration_burst": 0,
00:33:05.878              "low_priority_weight": 0,
00:33:05.878              "medium_priority_weight": 0,
00:33:05.878              "high_priority_weight": 0,
00:33:05.878              "nvme_adminq_poll_period_us": 10000,
00:33:05.878              "nvme_ioq_poll_period_us": 0,
00:33:05.878              "io_queue_requests": 512,
00:33:05.878              "delay_cmd_submit": true,
00:33:05.878              "transport_retry_count": 4,
00:33:05.878              "bdev_retry_count": 3,
00:33:05.878              "transport_ack_timeout": 0,
00:33:05.878              "ctrlr_loss_timeout_sec": 0,
00:33:05.878              "reconnect_delay_sec": 0,
00:33:05.878              "fast_io_fail_timeout_sec": 0,
00:33:05.878              "disable_auto_failback": false,
00:33:05.878              "generate_uuids": false,
00:33:05.878              "transport_tos": 0,
00:33:05.878              "nvme_error_stat": false,
00:33:05.878              "rdma_srq_size": 0,
00:33:05.878              "io_path_stat": false,
00:33:05.878              "allow_accel_sequence": false,
00:33:05.878              "rdma_max_cq_size": 0,
00:33:05.878              "rdma_cm_event_timeout_ms": 0,
00:33:05.878              "dhchap_digests": [
00:33:05.878                "sha256",
00:33:05.878                "sha384",
00:33:05.878                "sha512"
00:33:05.878              ],
00:33:05.878              "dhchap_dhgroups": [
00:33:05.878                "null",
00:33:05.878                "ffdhe2048",
00:33:05.878                "ffdhe3072",
00:33:05.878                "ffdhe4096",
00:33:05.878                "ffdhe6144",
00:33:05.878                "ffdhe8192"
00:33:05.878              ]
00:33:05.878            }
00:33:05.878          },
00:33:05.878          {
00:33:05.878            "method": "bdev_nvme_attach_controller",
00:33:05.878            "params": {
00:33:05.878              "name": "nvme0",
00:33:05.878              "trtype": "TCP",
00:33:05.878              "adrfam": "IPv4",
00:33:05.878              "traddr": "127.0.0.1",
00:33:05.878              "trsvcid": "4420",
00:33:05.878              "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:33:05.878              "prchk_reftag": false,
00:33:05.878              "prchk_guard": false,
00:33:05.878              "ctrlr_loss_timeout_sec": 0,
00:33:05.878              "reconnect_delay_sec": 0,
00:33:05.878              "fast_io_fail_timeout_sec": 0,
00:33:05.878              "psk": "key0",
00:33:05.878              "hostnqn": "nqn.2016-06.io.spdk:host0",
00:33:05.878              "hdgst": false,
00:33:05.878              "ddgst": false,
00:33:05.878              "multipath": "multipath"
00:33:05.878            }
00:33:05.878          },
00:33:05.878          {
00:33:05.878            "method": "bdev_nvme_set_hotplug",
00:33:05.878            "params": {
00:33:05.878              "period_us": 100000,
00:33:05.878              "enable": false
00:33:05.878            }
00:33:05.878          },
00:33:05.878          {
00:33:05.878            "method": "bdev_wait_for_examine"
00:33:05.878          }
00:33:05.878        ]
00:33:05.878      },
00:33:05.878      {
00:33:05.878        "subsystem": "nbd",
00:33:05.878        "config": []
00:33:05.878      }
00:33:05.878    ]
00:33:05.878  }'
00:33:05.878   04:23:34 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:33:05.878  [2024-12-09 04:23:34.329618] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:33:05.878  [2024-12-09 04:23:34.329722] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid425012 ]
00:33:05.878  [2024-12-09 04:23:34.394091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:05.878  [2024-12-09 04:23:34.451842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:33:06.136  [2024-12-09 04:23:34.640485] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:33:06.394   04:23:34 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:06.394   04:23:34 keyring_file -- common/autotest_common.sh@868 -- # return 0
00:33:06.394    04:23:34 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys
00:33:06.394    04:23:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:33:06.394    04:23:34 keyring_file -- keyring/file.sh@121 -- # jq length
00:33:06.657   04:23:35 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 ))
00:33:06.657    04:23:35 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0
00:33:06.657    04:23:35 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:33:06.657    04:23:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:33:06.657    04:23:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:33:06.657    04:23:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:33:06.657    04:23:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:33:06.916   04:23:35 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 ))
00:33:06.916    04:23:35 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1
00:33:06.916    04:23:35 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:33:06.916    04:23:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:33:06.916    04:23:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:33:06.916    04:23:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:33:06.916    04:23:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:33:07.174   04:23:35 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 ))
00:33:07.174    04:23:35 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers
00:33:07.174    04:23:35 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name'
00:33:07.174    04:23:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers
00:33:07.431   04:23:35 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]]
00:33:07.431   04:23:35 keyring_file -- keyring/file.sh@1 -- # cleanup
00:33:07.431   04:23:35 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.E1Trbdn2qB /tmp/tmp.8RZLZsSLHb
00:33:07.431   04:23:35 keyring_file -- keyring/file.sh@20 -- # killprocess 425012
00:33:07.431   04:23:35 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 425012 ']'
00:33:07.431   04:23:35 keyring_file -- common/autotest_common.sh@958 -- # kill -0 425012
00:33:07.431    04:23:35 keyring_file -- common/autotest_common.sh@959 -- # uname
00:33:07.431   04:23:35 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:07.431    04:23:35 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 425012
00:33:07.431   04:23:35 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:33:07.431   04:23:35 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:33:07.431   04:23:35 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 425012'
00:33:07.431  killing process with pid 425012
00:33:07.431   04:23:35 keyring_file -- common/autotest_common.sh@973 -- # kill 425012
00:33:07.431  Received shutdown signal, test time was about 1.000000 seconds
00:33:07.431  
00:33:07.431                                                                                                  Latency(us)
00:33:07.431  
[2024-12-09T03:23:36.007Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:33:07.431  
[2024-12-09T03:23:36.007Z]  ===================================================================================================================
00:33:07.431  
[2024-12-09T03:23:36.007Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:33:07.431   04:23:35 keyring_file -- common/autotest_common.sh@978 -- # wait 425012
00:33:07.689   04:23:36 keyring_file -- keyring/file.sh@21 -- # killprocess 423473
00:33:07.689   04:23:36 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 423473 ']'
00:33:07.689   04:23:36 keyring_file -- common/autotest_common.sh@958 -- # kill -0 423473
00:33:07.689    04:23:36 keyring_file -- common/autotest_common.sh@959 -- # uname
00:33:07.689   04:23:36 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:07.689    04:23:36 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 423473
00:33:07.689   04:23:36 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:33:07.689   04:23:36 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:33:07.689   04:23:36 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 423473'
00:33:07.689  killing process with pid 423473
00:33:07.689   04:23:36 keyring_file -- common/autotest_common.sh@973 -- # kill 423473
00:33:07.689   04:23:36 keyring_file -- common/autotest_common.sh@978 -- # wait 423473
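The `killprocess` helper traced above (a `kill -0` liveness probe, a `ps -o comm=` name check, `kill`, then `wait` to reap the child) can be condensed to roughly the following sketch, demonstrated on a throwaway `sleep` child rather than an SPDK reactor:

```shell
# Condensed sketch of autotest_common.sh's killprocess pattern:
# confirm the pid is alive, look up its command name, signal it,
# then wait so the child is actually reaped before cleanup runs.
sleep 30 &
pid=$!
if kill -0 "$pid" 2>/dev/null; then
    name=$(ps --no-headers -o comm= "$pid" 2>/dev/null || echo unknown)
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null
fi
kill -0 "$pid" 2>/dev/null || echo "process $pid reaped"
```

The `wait` matters: without it the terminated bdevperf would linger as a zombie, and the subsequent `wait <pid>` seen in the log would race against it.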
00:33:08.254  
00:33:08.254  real	0m14.625s
00:33:08.254  user	0m37.111s
00:33:08.254  sys	0m3.279s
00:33:08.254   04:23:36 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:08.254   04:23:36 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:33:08.254  ************************************
00:33:08.254  END TEST keyring_file
00:33:08.254  ************************************
00:33:08.254   04:23:36  -- spdk/autotest.sh@293 -- # [[ y == y ]]
00:33:08.254   04:23:36  -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh
00:33:08.254   04:23:36  -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:33:08.254   04:23:36  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:33:08.254   04:23:36  -- common/autotest_common.sh@10 -- # set +x
00:33:08.254  ************************************
00:33:08.254  START TEST keyring_linux
00:33:08.254  ************************************
00:33:08.254   04:23:36 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh
00:33:08.254  Joined session keyring: 383726194
00:33:08.254  * Looking for test storage...
00:33:08.254  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring
00:33:08.254    04:23:36 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:33:08.254     04:23:36 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version
00:33:08.254     04:23:36 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:33:08.254    04:23:36 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:33:08.254    04:23:36 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:33:08.254    04:23:36 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l
00:33:08.254    04:23:36 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l
00:33:08.254    04:23:36 keyring_linux -- scripts/common.sh@336 -- # IFS=.-:
00:33:08.254    04:23:36 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1
00:33:08.254    04:23:36 keyring_linux -- scripts/common.sh@337 -- # IFS=.-:
00:33:08.254    04:23:36 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2
00:33:08.254    04:23:36 keyring_linux -- scripts/common.sh@338 -- # local 'op=<'
00:33:08.254    04:23:36 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2
00:33:08.254    04:23:36 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1
00:33:08.254    04:23:36 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:33:08.254    04:23:36 keyring_linux -- scripts/common.sh@344 -- # case "$op" in
00:33:08.254    04:23:36 keyring_linux -- scripts/common.sh@345 -- # : 1
00:33:08.254    04:23:36 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 ))
00:33:08.254    04:23:36 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:33:08.254     04:23:36 keyring_linux -- scripts/common.sh@365 -- # decimal 1
00:33:08.254     04:23:36 keyring_linux -- scripts/common.sh@353 -- # local d=1
00:33:08.254     04:23:36 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:33:08.254     04:23:36 keyring_linux -- scripts/common.sh@355 -- # echo 1
00:33:08.254    04:23:36 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1
00:33:08.254     04:23:36 keyring_linux -- scripts/common.sh@366 -- # decimal 2
00:33:08.254     04:23:36 keyring_linux -- scripts/common.sh@353 -- # local d=2
00:33:08.254     04:23:36 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:33:08.254     04:23:36 keyring_linux -- scripts/common.sh@355 -- # echo 2
00:33:08.254    04:23:36 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2
00:33:08.254    04:23:36 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:33:08.254    04:23:36 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:33:08.254    04:23:36 keyring_linux -- scripts/common.sh@368 -- # return 0
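The trace above is `scripts/common.sh` evaluating `lt 1.15 2` for the lcov version gate: it splits both versions on `.`, `-`, and `:` into arrays and compares componentwise, treating missing components as 0. A condensed sketch of the same comparison (the function name `ver_lt` is mine, and it assumes purely numeric components, unlike the `decimal` guard in the real script):

```shell
# Condensed sketch of the componentwise version compare traced above
# (scripts/common.sh cmp_versions); ver_lt is a hypothetical name and
# assumes numeric components. Returns 0 (true) when $1 < $2.
ver_lt() {
    local IFS=.-:
    local -a v1=($1) v2=($2)
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # versions are equal, so not less-than
}
ver_lt 1.15 2 && echo "1.15 < 2"
```

This is why `1.15 < 2` holds here even though a naive string or float comparison would disagree: the first components 1 and 2 decide the result before 15 is ever consulted.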
00:33:08.254    04:23:36 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:33:08.254    04:23:36 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:33:08.254  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:08.254  		--rc genhtml_branch_coverage=1
00:33:08.254  		--rc genhtml_function_coverage=1
00:33:08.254  		--rc genhtml_legend=1
00:33:08.254  		--rc geninfo_all_blocks=1
00:33:08.254  		--rc geninfo_unexecuted_blocks=1
00:33:08.254  		
00:33:08.254  		'
00:33:08.254    04:23:36 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:33:08.254  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:08.254  		--rc genhtml_branch_coverage=1
00:33:08.254  		--rc genhtml_function_coverage=1
00:33:08.254  		--rc genhtml_legend=1
00:33:08.254  		--rc geninfo_all_blocks=1
00:33:08.254  		--rc geninfo_unexecuted_blocks=1
00:33:08.254  		
00:33:08.254  		'
00:33:08.254    04:23:36 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:33:08.254  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:08.254  		--rc genhtml_branch_coverage=1
00:33:08.254  		--rc genhtml_function_coverage=1
00:33:08.254  		--rc genhtml_legend=1
00:33:08.254  		--rc geninfo_all_blocks=1
00:33:08.254  		--rc geninfo_unexecuted_blocks=1
00:33:08.254  		
00:33:08.254  		'
00:33:08.254    04:23:36 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:33:08.254  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:08.254  		--rc genhtml_branch_coverage=1
00:33:08.254  		--rc genhtml_function_coverage=1
00:33:08.254  		--rc genhtml_legend=1
00:33:08.254  		--rc geninfo_all_blocks=1
00:33:08.254  		--rc geninfo_unexecuted_blocks=1
00:33:08.255  		
00:33:08.255  		'
00:33:08.255   04:23:36 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh
00:33:08.255    04:23:36 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:33:08.255      04:23:36 keyring_linux -- nvmf/common.sh@7 -- # uname -s
00:33:08.255     04:23:36 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:33:08.255     04:23:36 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:33:08.255     04:23:36 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:33:08.255     04:23:36 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:33:08.255     04:23:36 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:33:08.255     04:23:36 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:33:08.255     04:23:36 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:33:08.255     04:23:36 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:33:08.255     04:23:36 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:33:08.255      04:23:36 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:33:08.255     04:23:36 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:33:08.255     04:23:36 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:33:08.255     04:23:36 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:33:08.255     04:23:36 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:33:08.255     04:23:36 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:33:08.255     04:23:36 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:33:08.255     04:23:36 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:33:08.255      04:23:36 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob
00:33:08.255      04:23:36 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:33:08.255      04:23:36 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:33:08.255      04:23:36 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:33:08.255       04:23:36 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:08.255       04:23:36 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:08.255       04:23:36 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:08.255       04:23:36 keyring_linux -- paths/export.sh@5 -- # export PATH
00:33:08.255       04:23:36 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:08.255     04:23:36 keyring_linux -- nvmf/common.sh@51 -- # : 0
00:33:08.255     04:23:36 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:33:08.255     04:23:36 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:33:08.255     04:23:36 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:33:08.255     04:23:36 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:33:08.255     04:23:36 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:33:08.255     04:23:36 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:33:08.255  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:33:08.255     04:23:36 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:33:08.255     04:23:36 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:33:08.255     04:23:36 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0
00:33:08.255    04:23:36 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock
00:33:08.255   04:23:36 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0
00:33:08.255   04:23:36 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0
00:33:08.255   04:23:36 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff
00:33:08.255   04:23:36 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00
00:33:08.255   04:23:36 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT
00:33:08.255   04:23:36 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0
00:33:08.255   04:23:36 keyring_linux -- keyring/common.sh@15 -- # local name key digest path
00:33:08.255   04:23:36 keyring_linux -- keyring/common.sh@17 -- # name=key0
00:33:08.255   04:23:36 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff
00:33:08.255   04:23:36 keyring_linux -- keyring/common.sh@17 -- # digest=0
00:33:08.255   04:23:36 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0
00:33:08.255   04:23:36 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0
00:33:08.255   04:23:36 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0
00:33:08.255   04:23:36 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest
00:33:08.255   04:23:36 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1
00:33:08.255   04:23:36 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff
00:33:08.255   04:23:36 keyring_linux -- nvmf/common.sh@732 -- # digest=0
00:33:08.255   04:23:36 keyring_linux -- nvmf/common.sh@733 -- # python -
00:33:08.513   04:23:36 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0
00:33:08.513   04:23:36 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0
00:33:08.513  /tmp/:spdk-test:key0
00:33:08.513   04:23:36 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1
00:33:08.513   04:23:36 keyring_linux -- keyring/common.sh@15 -- # local name key digest path
00:33:08.513   04:23:36 keyring_linux -- keyring/common.sh@17 -- # name=key1
00:33:08.513   04:23:36 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00
00:33:08.513   04:23:36 keyring_linux -- keyring/common.sh@17 -- # digest=0
00:33:08.513   04:23:36 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1
00:33:08.513   04:23:36 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0
00:33:08.513   04:23:36 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0
00:33:08.513   04:23:36 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest
00:33:08.513   04:23:36 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1
00:33:08.513   04:23:36 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00
00:33:08.513   04:23:36 keyring_linux -- nvmf/common.sh@732 -- # digest=0
00:33:08.513   04:23:36 keyring_linux -- nvmf/common.sh@733 -- # python -
00:33:08.513   04:23:36 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1
00:33:08.513   04:23:36 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1
00:33:08.513  /tmp/:spdk-test:key1
00:33:08.513   04:23:36 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=425437
00:33:08.513   04:23:36 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:33:08.513   04:23:36 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 425437
00:33:08.513   04:23:36 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 425437 ']'
00:33:08.513   04:23:36 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:08.513   04:23:36 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:08.513   04:23:36 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:08.513  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:08.513   04:23:36 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:08.513   04:23:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:33:08.513  [2024-12-09 04:23:36.955047] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:33:08.513  [2024-12-09 04:23:36.955168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid425437 ]
00:33:08.513  [2024-12-09 04:23:37.021605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:08.513  [2024-12-09 04:23:37.080970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:09.078   04:23:37 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:09.078   04:23:37 keyring_linux -- common/autotest_common.sh@868 -- # return 0
00:33:09.078   04:23:37 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd
00:33:09.078   04:23:37 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:09.079   04:23:37 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:33:09.079  [2024-12-09 04:23:37.352895] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:33:09.079  null0
00:33:09.079  [2024-12-09 04:23:37.384952] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:33:09.079  [2024-12-09 04:23:37.385512] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:33:09.079   04:23:37 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:09.079   04:23:37 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s
00:33:09.079  562990168
00:33:09.079   04:23:37 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s
00:33:09.079  806872928
00:33:09.079   04:23:37 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=425442
00:33:09.079   04:23:37 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 425442 /var/tmp/bperf.sock
00:33:09.079   04:23:37 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc
00:33:09.079   04:23:37 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 425442 ']'
00:33:09.079   04:23:37 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:09.079   04:23:37 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:09.079   04:23:37 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:33:09.079  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:09.079   04:23:37 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:09.079   04:23:37 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:33:09.079  [2024-12-09 04:23:37.459601] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:33:09.079  [2024-12-09 04:23:37.459683] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid425442 ]
00:33:09.079  [2024-12-09 04:23:37.527488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:09.079  [2024-12-09 04:23:37.586111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:33:09.336   04:23:37 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:09.336   04:23:37 keyring_linux -- common/autotest_common.sh@868 -- # return 0
00:33:09.336   04:23:37 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable
00:33:09.336   04:23:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
00:33:09.594   04:23:37 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init
00:33:09.594   04:23:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:33:09.852   04:23:38 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
00:33:09.852   04:23:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
00:33:10.111  [2024-12-09 04:23:38.562620] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:33:10.111  nvme0n1
00:33:10.111   04:23:38 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0
00:33:10.111   04:23:38 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0
00:33:10.111   04:23:38 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:33:10.111    04:23:38 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:33:10.111    04:23:38 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:33:10.111    04:23:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:33:10.368   04:23:38 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count ))
00:33:10.369   04:23:38 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:33:10.369    04:23:38 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0
00:33:10.369    04:23:38 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn
00:33:10.369    04:23:38 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:33:10.369    04:23:38 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")'
00:33:10.369    04:23:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:33:10.934   04:23:39 keyring_linux -- keyring/linux.sh@25 -- # sn=562990168
00:33:10.934    04:23:39 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0
00:33:10.934    04:23:39 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:33:10.934   04:23:39 keyring_linux -- keyring/linux.sh@26 -- # [[ 562990168 == \5\6\2\9\9\0\1\6\8 ]]
00:33:10.934    04:23:39 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 562990168
00:33:10.934   04:23:39 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]]
00:33:10.934   04:23:39 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:10.934  Running I/O for 1 seconds...
00:33:11.867      11054.00 IOPS,    43.18 MiB/s
00:33:11.867                                                                                                  Latency(us)
00:33:11.867  
[2024-12-09T03:23:40.443Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:33:11.867  Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:33:11.867  	 nvme0n1             :       1.01   11055.53      43.19       0.00     0.00   11504.96    3859.34   14951.92
00:33:11.867  
[2024-12-09T03:23:40.443Z]  ===================================================================================================================
00:33:11.867  
[2024-12-09T03:23:40.443Z]  Total                       :              11055.53      43.19       0.00     0.00   11504.96    3859.34   14951.92
00:33:11.867  {
00:33:11.867    "results": [
00:33:11.867      {
00:33:11.867        "job": "nvme0n1",
00:33:11.867        "core_mask": "0x2",
00:33:11.867        "workload": "randread",
00:33:11.867        "status": "finished",
00:33:11.867        "queue_depth": 128,
00:33:11.867        "io_size": 4096,
00:33:11.867        "runtime": 1.01153,
00:33:11.867        "iops": 11055.529742073888,
00:33:11.867        "mibps": 43.185663054976125,
00:33:11.867        "io_failed": 0,
00:33:11.867        "io_timeout": 0,
00:33:11.867        "avg_latency_us": 11504.962538244226,
00:33:11.867        "min_latency_us": 3859.342222222222,
00:33:11.867        "max_latency_us": 14951.917037037038
00:33:11.867      }
00:33:11.867    ],
00:33:11.867    "core_count": 1
00:33:11.867  }
00:33:11.867   04:23:40 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:33:11.867   04:23:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:33:12.124   04:23:40 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0
00:33:12.124   04:23:40 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name=
00:33:12.124   04:23:40 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:33:12.124    04:23:40 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:33:12.124    04:23:40 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:33:12.124    04:23:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:33:12.381   04:23:40 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count ))
00:33:12.381   04:23:40 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:33:12.381   04:23:40 keyring_linux -- keyring/linux.sh@23 -- # return
00:33:12.381   04:23:40 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:33:12.381   04:23:40 keyring_linux -- common/autotest_common.sh@652 -- # local es=0
00:33:12.381   04:23:40 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:33:12.381   04:23:40 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd
00:33:12.381   04:23:40 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:33:12.381    04:23:40 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd
00:33:12.381   04:23:40 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:33:12.381   04:23:40 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:33:12.381   04:23:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:33:12.638  [2024-12-09 04:23:41.179526] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:33:12.638  [2024-12-09 04:23:41.180455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1715f20 (107): Transport endpoint is not connected
00:33:12.638  [2024-12-09 04:23:41.181447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1715f20 (9): Bad file descriptor
00:33:12.638  [2024-12-09 04:23:41.182446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state
00:33:12.638  [2024-12-09 04:23:41.182465] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:33:12.638  [2024-12-09 04:23:41.182479] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted
00:33:12.638  [2024-12-09 04:23:41.182493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state.
00:33:12.638  request:
00:33:12.638  {
00:33:12.638    "name": "nvme0",
00:33:12.638    "trtype": "tcp",
00:33:12.638    "traddr": "127.0.0.1",
00:33:12.638    "adrfam": "ipv4",
00:33:12.638    "trsvcid": "4420",
00:33:12.638    "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:33:12.638    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:33:12.638    "prchk_reftag": false,
00:33:12.638    "prchk_guard": false,
00:33:12.638    "hdgst": false,
00:33:12.638    "ddgst": false,
00:33:12.638    "psk": ":spdk-test:key1",
00:33:12.638    "allow_unrecognized_csi": false,
00:33:12.638    "method": "bdev_nvme_attach_controller",
00:33:12.638    "req_id": 1
00:33:12.638  }
00:33:12.638  Got JSON-RPC error response
00:33:12.638  response:
00:33:12.638  {
00:33:12.638    "code": -5,
00:33:12.638    "message": "Input/output error"
00:33:12.638  }
00:33:12.638   04:23:41 keyring_linux -- common/autotest_common.sh@655 -- # es=1
00:33:12.638   04:23:41 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:33:12.638   04:23:41 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:33:12.638   04:23:41 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:33:12.638   04:23:41 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:33:12.638   04:23:41 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:33:12.638   04:23:41 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:33:12.638   04:23:41 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:33:12.638    04:23:41 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:33:12.638    04:23:41 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:33:12.638   04:23:41 keyring_linux -- keyring/linux.sh@33 -- # sn=562990168
00:33:12.638   04:23:41 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 562990168
00:33:12.638  1 links removed
00:33:12.638   04:23:41 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:33:12.638   04:23:41 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:33:12.638   04:23:41 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:33:12.638    04:23:41 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:33:12.638    04:23:41 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:33:12.896   04:23:41 keyring_linux -- keyring/linux.sh@33 -- # sn=806872928
00:33:12.896   04:23:41 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 806872928
00:33:12.896  1 links removed
00:33:12.896   04:23:41 keyring_linux -- keyring/linux.sh@41 -- # killprocess 425442
00:33:12.896   04:23:41 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 425442 ']'
00:33:12.896   04:23:41 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 425442
00:33:12.896    04:23:41 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:33:12.896   04:23:41 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:12.896    04:23:41 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 425442
00:33:12.896   04:23:41 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:33:12.896   04:23:41 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:33:12.896   04:23:41 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 425442'
00:33:12.896  killing process with pid 425442
00:33:12.896   04:23:41 keyring_linux -- common/autotest_common.sh@973 -- # kill 425442
00:33:12.896  Received shutdown signal, test time was about 1.000000 seconds
00:33:12.896  
00:33:12.896                                                                                                  Latency(us)
00:33:12.896  
[2024-12-09T03:23:41.472Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:33:12.896  
[2024-12-09T03:23:41.472Z]  ===================================================================================================================
00:33:12.896  
[2024-12-09T03:23:41.472Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:33:12.896   04:23:41 keyring_linux -- common/autotest_common.sh@978 -- # wait 425442
00:33:13.154   04:23:41 keyring_linux -- keyring/linux.sh@42 -- # killprocess 425437
00:33:13.154   04:23:41 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 425437 ']'
00:33:13.154   04:23:41 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 425437
00:33:13.154    04:23:41 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:33:13.154   04:23:41 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:13.154    04:23:41 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 425437
00:33:13.154   04:23:41 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:33:13.154   04:23:41 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:33:13.154   04:23:41 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 425437'
00:33:13.154  killing process with pid 425437
00:33:13.154   04:23:41 keyring_linux -- common/autotest_common.sh@973 -- # kill 425437
00:33:13.154   04:23:41 keyring_linux -- common/autotest_common.sh@978 -- # wait 425437
00:33:13.411  
00:33:13.411  real	0m5.306s
00:33:13.411  user	0m10.504s
00:33:13.411  sys	0m1.637s
00:33:13.411   04:23:41 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:13.411   04:23:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:33:13.411  ************************************
00:33:13.411  END TEST keyring_linux
00:33:13.411  ************************************
00:33:13.411   04:23:41  -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:33:13.412   04:23:41  -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:33:13.412   04:23:41  -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:33:13.412   04:23:41  -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:33:13.412   04:23:41  -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:33:13.412   04:23:41  -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:33:13.412   04:23:41  -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:33:13.412   04:23:41  -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:33:13.412   04:23:41  -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:33:13.412   04:23:41  -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:33:13.412   04:23:41  -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:33:13.412   04:23:41  -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:33:13.412   04:23:41  -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:33:13.412   04:23:41  -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:33:13.412   04:23:41  -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:33:13.412   04:23:41  -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:33:13.412   04:23:41  -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:33:13.412   04:23:41  -- common/autotest_common.sh@726 -- # xtrace_disable
00:33:13.412   04:23:41  -- common/autotest_common.sh@10 -- # set +x
00:33:13.412   04:23:41  -- spdk/autotest.sh@388 -- # autotest_cleanup
00:33:13.412   04:23:41  -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:33:13.412   04:23:41  -- common/autotest_common.sh@1397 -- # xtrace_disable
00:33:13.412   04:23:41  -- common/autotest_common.sh@10 -- # set +x
00:33:15.312  INFO: APP EXITING
00:33:15.312  INFO: killing all VMs
00:33:15.312  INFO: killing vhost app
00:33:15.312  INFO: EXIT DONE
00:33:16.691  0000:88:00.0 (8086 0a54): Already using the nvme driver
00:33:16.691  0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:33:16.691  0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:33:16.691  0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:33:16.691  0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:33:16.691  0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:33:16.691  0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:33:16.691  0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:33:16.691  0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:33:16.691  0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:33:16.691  0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:33:16.691  0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:33:16.691  0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:33:16.691  0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:33:16.691  0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:33:16.691  0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:33:16.691  0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:33:18.067  Cleaning
00:33:18.067  Removing:    /var/run/dpdk/spdk0/config
00:33:18.067  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:33:18.067  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:33:18.067  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:33:18.067  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:33:18.067  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:33:18.067  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:33:18.067  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:33:18.067  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:33:18.067  Removing:    /var/run/dpdk/spdk0/fbarray_memzone
00:33:18.067  Removing:    /var/run/dpdk/spdk0/hugepage_info
00:33:18.067  Removing:    /var/run/dpdk/spdk1/config
00:33:18.067  Removing:    /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:33:18.067  Removing:    /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:33:18.067  Removing:    /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:33:18.067  Removing:    /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:33:18.067  Removing:    /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:33:18.067  Removing:    /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:33:18.067  Removing:    /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:33:18.067  Removing:    /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:33:18.067  Removing:    /var/run/dpdk/spdk1/fbarray_memzone
00:33:18.067  Removing:    /var/run/dpdk/spdk1/hugepage_info
00:33:18.067  Removing:    /var/run/dpdk/spdk2/config
00:33:18.067  Removing:    /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:33:18.067  Removing:    /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:33:18.067  Removing:    /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:33:18.067  Removing:    /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:33:18.067  Removing:    /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:33:18.067  Removing:    /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:33:18.067  Removing:    /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:33:18.067  Removing:    /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:33:18.067  Removing:    /var/run/dpdk/spdk2/fbarray_memzone
00:33:18.067  Removing:    /var/run/dpdk/spdk2/hugepage_info
00:33:18.067  Removing:    /var/run/dpdk/spdk3/config
00:33:18.067  Removing:    /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:33:18.067  Removing:    /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:33:18.067  Removing:    /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:33:18.067  Removing:    /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:33:18.067  Removing:    /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:33:18.067  Removing:    /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:33:18.067  Removing:    /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:33:18.067  Removing:    /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:33:18.067  Removing:    /var/run/dpdk/spdk3/fbarray_memzone
00:33:18.067  Removing:    /var/run/dpdk/spdk3/hugepage_info
00:33:18.067  Removing:    /var/run/dpdk/spdk4/config
00:33:18.067  Removing:    /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:33:18.067  Removing:    /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:33:18.067  Removing:    /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:33:18.067  Removing:    /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:33:18.067  Removing:    /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:33:18.067  Removing:    /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:33:18.067  Removing:    /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:33:18.067  Removing:    /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:33:18.067  Removing:    /var/run/dpdk/spdk4/fbarray_memzone
00:33:18.067  Removing:    /var/run/dpdk/spdk4/hugepage_info
00:33:18.067  Removing:    /dev/shm/bdev_svc_trace.1
00:33:18.067  Removing:    /dev/shm/nvmf_trace.0
00:33:18.067  Removing:    /dev/shm/spdk_tgt_trace.pid102971
00:33:18.067  Removing:    /var/run/dpdk/spdk0
00:33:18.067  Removing:    /var/run/dpdk/spdk1
00:33:18.067  Removing:    /var/run/dpdk/spdk2
00:33:18.067  Removing:    /var/run/dpdk/spdk3
00:33:18.067  Removing:    /var/run/dpdk/spdk4
00:33:18.067  Removing:    /var/run/dpdk/spdk_pid101286
00:33:18.067  Removing:    /var/run/dpdk/spdk_pid102028
00:33:18.067  Removing:    /var/run/dpdk/spdk_pid102971
00:33:18.067  Removing:    /var/run/dpdk/spdk_pid103309
00:33:18.067  Removing:    /var/run/dpdk/spdk_pid103984
00:33:18.067  Removing:    /var/run/dpdk/spdk_pid104124
00:33:18.067  Removing:    /var/run/dpdk/spdk_pid104842
00:33:18.067  Removing:    /var/run/dpdk/spdk_pid104967
00:33:18.067  Removing:    /var/run/dpdk/spdk_pid105227
00:33:18.067  Removing:    /var/run/dpdk/spdk_pid106431
00:33:18.067  Removing:    /var/run/dpdk/spdk_pid107360
00:33:18.067  Removing:    /var/run/dpdk/spdk_pid107675
00:33:18.067  Removing:    /var/run/dpdk/spdk_pid107868
00:33:18.067  Removing:    /var/run/dpdk/spdk_pid108105
00:33:18.067  Removing:    /var/run/dpdk/spdk_pid108400
00:33:18.067  Removing:    /var/run/dpdk/spdk_pid108557
00:33:18.067  Removing:    /var/run/dpdk/spdk_pid108709
00:33:18.067  Removing:    /var/run/dpdk/spdk_pid108901
00:33:18.067  Removing:    /var/run/dpdk/spdk_pid109211
00:33:18.067  Removing:    /var/run/dpdk/spdk_pid111591
00:33:18.067  Removing:    /var/run/dpdk/spdk_pid111867
00:33:18.067  Removing:    /var/run/dpdk/spdk_pid112027
00:33:18.067  Removing:    /var/run/dpdk/spdk_pid112041
00:33:18.067  Removing:    /var/run/dpdk/spdk_pid112431
00:33:18.067  Removing:    /var/run/dpdk/spdk_pid112477
00:33:18.067  Removing:    /var/run/dpdk/spdk_pid112855
00:33:18.067  Removing:    /var/run/dpdk/spdk_pid112913
00:33:18.067  Removing:    /var/run/dpdk/spdk_pid113086
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid113211
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid113381
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid113392
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid113889
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid114041
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid114250
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid116503
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid119139
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid126149
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid126672
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid129187
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid129464
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid132509
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid136335
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid138531
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid144897
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid150189
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid151466
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid152172
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid162553
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid164856
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid193074
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid196371
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid200183
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid204512
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid204524
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid205231
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid206283
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid206940
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid207341
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid207343
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid207599
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid207739
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid207746
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid208404
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid208940
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid209596
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid210004
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid210006
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid210264
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid211203
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid212010
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid217228
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid245803
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid248762
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid249828
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid251146
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid251288
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid251428
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid251569
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid252126
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid253428
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid254298
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid254888
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid256850
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid257241
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid257706
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid260093
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid263387
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid263388
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid263389
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid265620
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid270599
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid273372
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid277168
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid278107
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid279205
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid280280
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid283048
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid285511
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid287877
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid292112
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid292197
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid295649
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid295899
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid296038
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid296307
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid296312
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid299082
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid299417
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid302091
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid304067
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid307491
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid310953
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid317449
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid321940
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid321975
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid335311
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid335836
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid336253
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid336656
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid337238
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid337707
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid338184
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid338599
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid341123
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid341384
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid345179
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid345231
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid348596
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid351207
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid358131
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid358544
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid361052
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid361202
00:33:18.325  Removing:    /var/run/dpdk/spdk_pid364341
00:33:18.583  Removing:    /var/run/dpdk/spdk_pid368151
00:33:18.583  Removing:    /var/run/dpdk/spdk_pid370299
00:33:18.583  Removing:    /var/run/dpdk/spdk_pid376669
00:33:18.583  Removing:    /var/run/dpdk/spdk_pid381876
00:33:18.583  Removing:    /var/run/dpdk/spdk_pid383058
00:33:18.583  Removing:    /var/run/dpdk/spdk_pid383778
00:33:18.583  Removing:    /var/run/dpdk/spdk_pid393900
00:33:18.583  Removing:    /var/run/dpdk/spdk_pid396155
00:33:18.583  Removing:    /var/run/dpdk/spdk_pid398224
00:33:18.583  Removing:    /var/run/dpdk/spdk_pid403835
00:33:18.583  Removing:    /var/run/dpdk/spdk_pid403843
00:33:18.583  Removing:    /var/run/dpdk/spdk_pid406747
00:33:18.583  Removing:    /var/run/dpdk/spdk_pid408158
00:33:18.583  Removing:    /var/run/dpdk/spdk_pid409668
00:33:18.583  Removing:    /var/run/dpdk/spdk_pid410405
00:33:18.583  Removing:    /var/run/dpdk/spdk_pid411823
00:33:18.583  Removing:    /var/run/dpdk/spdk_pid412585
00:33:18.583  Removing:    /var/run/dpdk/spdk_pid418032
00:33:18.583  Removing:    /var/run/dpdk/spdk_pid418379
00:33:18.583  Removing:    /var/run/dpdk/spdk_pid418771
00:33:18.583  Removing:    /var/run/dpdk/spdk_pid420341
00:33:18.583  Removing:    /var/run/dpdk/spdk_pid420741
00:33:18.583  Removing:    /var/run/dpdk/spdk_pid421101
00:33:18.583  Removing:    /var/run/dpdk/spdk_pid423473
00:33:18.583  Removing:    /var/run/dpdk/spdk_pid423494
00:33:18.583  Removing:    /var/run/dpdk/spdk_pid425012
00:33:18.583  Removing:    /var/run/dpdk/spdk_pid425437
00:33:18.583  Removing:    /var/run/dpdk/spdk_pid425442
00:33:18.583  Clean
00:33:18.583   04:23:47  -- common/autotest_common.sh@1453 -- # return 0
00:33:18.583   04:23:47  -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:33:18.583   04:23:47  -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:18.583   04:23:47  -- common/autotest_common.sh@10 -- # set +x
00:33:18.583   04:23:47  -- spdk/autotest.sh@391 -- # timing_exit autotest
00:33:18.583   04:23:47  -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:18.583   04:23:47  -- common/autotest_common.sh@10 -- # set +x
00:33:18.583   04:23:47  -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:33:18.583   04:23:47  -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:33:18.583   04:23:47  -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:33:18.583   04:23:47  -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:33:18.583    04:23:47  -- spdk/autotest.sh@398 -- # hostname
00:33:18.583   04:23:47  -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:33:18.840  geninfo: WARNING: invalid characters removed from testname!
00:33:50.917   04:24:17  -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:54.199   04:24:22  -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:56.727   04:24:25  -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:00.002   04:24:28  -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:03.281   04:24:31  -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:05.818   04:24:34  -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:09.099   04:24:37  -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:34:09.099   04:24:37  -- spdk/autorun.sh@1 -- $ timing_finish
00:34:09.099   04:24:37  -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:34:09.099   04:24:37  -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:34:09.099   04:24:37  -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:34:09.099   04:24:37  -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:34:09.099  + [[ -n 30090 ]]
00:34:09.099  + sudo kill 30090
00:34:09.109  [Pipeline] }
00:34:09.125  [Pipeline] // stage
00:34:09.130  [Pipeline] }
00:34:09.144  [Pipeline] // timeout
00:34:09.151  [Pipeline] }
00:34:09.166  [Pipeline] // catchError
00:34:09.171  [Pipeline] }
00:34:09.188  [Pipeline] // wrap
00:34:09.194  [Pipeline] }
00:34:09.208  [Pipeline] // catchError
00:34:09.219  [Pipeline] stage
00:34:09.222  [Pipeline] { (Epilogue)
00:34:09.236  [Pipeline] catchError
00:34:09.237  [Pipeline] {
00:34:09.249  [Pipeline] echo
00:34:09.250  Cleanup processes
00:34:09.255  [Pipeline] sh
00:34:09.540  + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:34:09.540  436779 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:34:09.555  [Pipeline] sh
00:34:09.855  ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:34:09.855  ++ grep -v 'sudo pgrep'
00:34:09.855  ++ awk '{print $1}'
00:34:09.855  + sudo kill -9
00:34:09.855  + true
00:34:09.869  [Pipeline] sh
00:34:10.156  + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:34:20.150  [Pipeline] sh
00:34:20.439  + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:34:20.439  Artifacts sizes are good
00:34:20.456  [Pipeline] archiveArtifacts
00:34:20.464  Archiving artifacts
00:34:21.016  [Pipeline] sh
00:34:21.301  + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:34:21.318  [Pipeline] cleanWs
00:34:21.328  [WS-CLEANUP] Deleting project workspace...
00:34:21.328  [WS-CLEANUP] Deferred wipeout is used...
00:34:21.336  [WS-CLEANUP] done
00:34:21.338  [Pipeline] }
00:34:21.355  [Pipeline] // catchError
00:34:21.368  [Pipeline] sh
00:34:21.652  + logger -p user.info -t JENKINS-CI
00:34:21.661  [Pipeline] }
00:34:21.678  [Pipeline] // stage
00:34:21.684  [Pipeline] }
00:34:21.700  [Pipeline] // node
00:34:21.708  [Pipeline] End of Pipeline
00:34:21.749  Finished: SUCCESS